Re: [Architecture] How can we improve our profiles story?

2016-10-10 Thread Afkham Azeez
Yes, we should have separate deployment.properties files and not duplicate
the config files.

On Tue, Oct 11, 2016 at 10:31 AM, Muhammed Shariq  wrote:

> The issue with packing all configs file according to a particular profile
> is that we'll end up packing many duplicate files, which might not be
> optimal. One way to get around this problem would be to use the
> ConfigResolver - deployment.properties file introduced in C5 to override
> only the required configs, this way we would have a deployment.properties
> file per profile. We'll need to make some changes in the ConfigResolver to
> pick the profile specific property file.
>
> Also for the osgi we could use bundles.info file as you suggested, but I
> was thinking if its possible to package jars required to particular profile
> in sub directories. So inside plugins/ directory we'll have a commons/ dir
> for the jars common to all the profiles and sub directories to hold profile
> specific jars. IINM mistaken, we'll have to modify the carbon-p2-plugin to
> do this.
>
> WDYT?
>
>
> On Mon, Oct 10, 2016 at 2:42 PM, Kishanthan Thangarajah <
> kishant...@wso2.com> wrote:
>
>> I like the idea of providing a descriptor about a profile (describes what
>> artifacts that should be included) and use a tool or a script to create the
>> profile specific runtime pack. But we also need to consider what Sameera
>> mentioned. I see the issue can come only with configuration repo because,
>> different profiles have different configurations (eg axis2.xml). The base
>> distribution should pack all the config files according to different
>> profile and from the descriptor, we can point to the correct config
>> file(s). Other repositories (artifacts and osgi) do not have such issue and
>> for osgi, we could use the bundles.info file as profile specific at
>> descriptor, which will then read by the tool here.
>>
>> WDYT?
>>
>> Thanks,
>> Kishanthan.
>>
>> On Fri, Oct 7, 2016 at 7:50 AM, Muhammed Shariq  wrote:
>>
>>> Hi.
>>>
>>> I had a chat with Azeez and Lakmal to discuss some ideas on improving
>>> the profile support by taking the Cloud / container architecture into
>>> consideration.
>>>
>>> We discussed an approach similar to Option-2 in the previous mail, where
>>> we package all artifacts into one distribution and provide a tool to create
>>> the profile-specific bare minimum pack. The tool should provide the
>>> following functionality:
>>>
>>> 1. List the available profiles / runtimes for a given distro
>>> 2. Once the user selects a profile, the tool should create the minimum
>>> pack
>>> - The pack should include the meta info needed to create the profile
>>> pack; we could use a yaml config, for example:
>>> type: deployment
>>> webapp:
>>>   - admin.war
>>>   - axc.war
>>> jaggeryapps:
>>>   - storex
>>>   - abcdx
>>> services:
>>>   - foo.aar
>>> - Similarly we'll need some meta descriptors for the jars and
>>> configs as well; we could either use the bundles.info to parse the
>>> required jars or look for a better option. We might also have to define a
>>> way to specify configs in a profile-specific manner so we can simplify the
>>> process of creating the bare minimum runtime.
>>>
>>> 3. Provide the option to create a docker-compose yaml that would take
>>> the created archives and deploy them in containers. Ideally we should be
>>> able to provide a fully functional docker based distributed deployment
>>> (with host entries, databases etc) OOTB.
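
(As a purely illustrative aside: a generated docker-compose file could look
roughly like the sketch below. The service names, image names, and ports are
assumptions about what such a tool might emit, not an existing artifact.)

    version: "2"
    services:
      gateway:
        image: wso2/apim-gateway:latest      # hypothetical image built from the gateway runtime pack
        ports:
          - "8280:8280"
          - "8243:8243"
        depends_on:
          - key-manager
          - apim-db
      key-manager:
        image: wso2/apim-key-manager:latest  # hypothetical
        ports:
          - "9443:9443"
      apim-db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: root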
>>>
>>> If we are to go with this approach, we need to rethink our packaging
>>> structure to be able to create the profile-specific pack. With this
>>> approach, I feel we are moving away from P2 based profiles, so maybe we can
>>> refer to the minimum packs as a "runtime" of the product.
>>>
>>> Any thoughts, suggestions?
>>>
>>>
>>> On Wed, Oct 5, 2016 at 11:16 AM, Muhammed Shariq 
>>> wrote:
>>>
 Hi,

 If we are to reduce the pack size to the bare minimum and pack only the
 essential artifacts, we can use one of the following approaches;

 1. Build a bare minimum distribution (with only the required jars,
 config and artifacts) at build time (maven build)
 2. Pack all the artifacts into one distribution (like we do now) and
 provide a script / tool to create the bare minimum pack at runtime.

 If we go ahead with Option-1, then we will be creating multiple
 distributions for a single product, so a product like API-M will have many
 different distributions. This, I feel, will make the simple case too
 complicated, especially for a user trying to get started.

 With Option-2, we can still have the default profile as it is for the
 simplest case, but provide users the ability to create profile specific
 distributions for larger deployments. Users can then use these profile
 specific distributions to create their images.

 In both cases, I feel we are moving 

Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Lakmal Warusawithana
Basically this has 3 parts.

   1. Enable the network load balancer to dynamically create containers
   based on the first request.
   2. Terminate containers if idle.
   3. Improve startup time by having a pool of running containers and
   dynamically allocating them to tenants (this is what Sanjeewa mentioned).


On Tue, Oct 11, 2016 at 10:43 AM, Lakmal Warusawithana 
wrote:

>
>
> On Tue, Oct 11, 2016 at 10:22 AM, Lakmal Warusawithana 
> wrote:
>
>>
>> On Tue, Oct 11, 2016 at 10:06 AM, Manjula Rathnayake 
>> wrote:
>>
>>> Hi Sanjeewa,
>>>
>>> Are you suggesting an API manager deployment pattern using containers?
>>> Container per tenant and per gateway, key manager etc?
>>>
>>
>> Yes. With C5 APIM we will have per tenant gateway, store, publisher,
>> keymanger etc.
>>
>>
>>>
>>> thank you.
>>>
>>> On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:
>>>
 Hi Sanjeewa,

 My understanding is gateway pool is not tenant specific and will not be
 returned but rather terminated?

 On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
 wrote:

> Hi All,
> Starting this mail thread to continue discussion on "speedup instance
> activate time when we move ahead with container based deployments". As of
> now all of us are working on speedup server start time and deploy 
> instances
> on demand with the help of load balancer. Please note that this is not
> alternative/replacement to effort on starting server faster(2 secs or
> less). This is about make request serving more faster even with small
> server startup time.
>
> When we do container based deployment standard approach we discussed
> so far was,
>
>- At the first request check the tenant and service from URL and
>do lookup for running instances.
>- If matching instance available route traffic to that.
>- Else spawn new instance using template(or image).  When we spawn
>this new instance we need to let it know what is the current tenant and
>data sources, configurations it should use.
>- Then route requests to new node.
>- After some idle time this instance may terminate.
>
> *Suggestion*
> If we maintain hot pool(started and ready to serve requests) of
> servers for each server type(API Gateway, Identity Server etc) then we can
> cutoff server startup time + IaaS level spawn time from above process. 
> Then
> when requests comes to wso2.com tenants API Gateway we can pick
> instance from gateway instance pool and set wso2.com tenant context
> and data source using service call(assuming setting context and
> configurations is much faster).
>
> *Implementation*
> For this we need to implement some plug-in to instance spawn process.
> Then instead of spawning new instance it will pick one instance from
> the pool and configure it to behave as specific tenant.
> For this each instance running in pool can open up port, so load
> balancer or scaling component can call it and tell what is the tenant and
> configurations.
> Once it configured server close that configuration port and start
> traffic serving.
> After some idle time this instance may terminate.
>
> This approach will help us if we met following condition.
> (Instance loading time + Server startup time + Server Lookup) *>*
> (Server Lookup + Loading configuration and tenant of running server from
> external call)
>
> Any thoughts on this?
>

>> My concern is practical issues with adding these intelligent into current
>> Load Balancers ( not all loadbalancers support extendibility ). And people
>> can used deferent loadbalancer with their preferences.
>>
>
> If we are doing this, we should do this in network loadbalancer level (in
> k8s/docker-swarm/dc-os). Ideally if we done this correctly we can do
> vertical scaling as well. Then no need to worry about sessions etc.  Other
> third party load balancers anyway fronted these network load balancers.
>
>
>>
>>
> Thanks,
> sanjeewa.
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> blog
> :http://sanjeewamalalgoda.blogspot.com/
> 
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


 --

 Best Regards,

 Malaka Silva
 Senior Technical Lead
 M: +94 777 219 791
 Tel : 94 11 214 5345
 Fax :94 11 2145300
 Skype : malaka.sampath.silva
 LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
 Blog : http://mrmalakasilva.blogspot.com/

 WSO2, Inc.
 lean . 

Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Lakmal Warusawithana
On Tue, Oct 11, 2016 at 10:22 AM, Lakmal Warusawithana 
wrote:

>
> On Tue, Oct 11, 2016 at 10:06 AM, Manjula Rathnayake 
> wrote:
>
>> Hi Sanjeewa,
>>
>> Are you suggesting an API manager deployment pattern using containers?
>> Container per tenant and per gateway, key manager etc?
>>
>
> Yes. With C5 APIM we will have per tenant gateway, store, publisher,
> keymanger etc.
>
>
>>
>> thank you.
>>
>> On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:
>>
>>> Hi Sanjeewa,
>>>
>>> My understanding is gateway pool is not tenant specific and will not be
>>> returned but rather terminated?
>>>
>>> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
>>> wrote:
>>>
 Hi All,
 Starting this mail thread to continue discussion on "speedup instance
 activate time when we move ahead with container based deployments". As of
 now all of us are working on speedup server start time and deploy instances
 on demand with the help of load balancer. Please note that this is not
 alternative/replacement to effort on starting server faster(2 secs or
 less). This is about make request serving more faster even with small
 server startup time.

 When we do container based deployment standard approach we discussed so
 far was,

- At the first request check the tenant and service from URL and do
lookup for running instances.
- If matching instance available route traffic to that.
- Else spawn new instance using template(or image).  When we spawn
this new instance we need to let it know what is the current tenant and
data sources, configurations it should use.
- Then route requests to new node.
- After some idle time this instance may terminate.

 *Suggestion*
 If we maintain hot pool(started and ready to serve requests) of servers
 for each server type(API Gateway, Identity Server etc) then we can cutoff
 server startup time + IaaS level spawn time from above process. Then when
 requests comes to wso2.com tenants API Gateway we can pick instance
 from gateway instance pool and set wso2.com tenant context and data
 source using service call(assuming setting context and configurations is
 much faster).

 *Implementation*
 For this we need to implement some plug-in to instance spawn process.
 Then instead of spawning new instance it will pick one instance from
 the pool and configure it to behave as specific tenant.
 For this each instance running in pool can open up port, so load
 balancer or scaling component can call it and tell what is the tenant and
 configurations.
 Once it configured server close that configuration port and start
 traffic serving.
 After some idle time this instance may terminate.

 This approach will help us if we met following condition.
 (Instance loading time + Server startup time + Server Lookup) *>*
 (Server Lookup + Loading configuration and tenant of running server from
 external call)

 Any thoughts on this?

>>>
> My concern is practical issues with adding these intelligent into current
> Load Balancers ( not all loadbalancers support extendibility ). And people
> can used deferent loadbalancer with their preferences.
>

If we are doing this, we should do it at the network load balancer level (in
k8s/docker-swarm/dc-os). Ideally, if we do this correctly, we can do
vertical scaling as well. Then there is no need to worry about sessions etc.
Other third-party load balancers are anyway fronted by these network load
balancers.


>
>
 Thanks,
 sanjeewa.
 --

 *Sanjeewa Malalgoda*
 WSO2 Inc.
 Mobile : +94713068779

 blog
 :http://sanjeewamalalgoda.blogspot.com/
 



 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


>>>
>>>
>>> --
>>>
>>> Best Regards,
>>>
>>> Malaka Silva
>>> Senior Technical Lead
>>> M: +94 777 219 791
>>> Tel : 94 11 214 5345
>>> Fax :94 11 2145300
>>> Skype : malaka.sampath.silva
>>> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
>>> Blog : http://mrmalakasilva.blogspot.com/
>>>
>>> WSO2, Inc.
>>> lean . enterprise . middleware
>>> https://wso2.com/signature
>>> http://www.wso2.com/about/team/malaka-silva/
>>> 
>>> https://store.wso2.com/store/
>>>
>>> Don't make Trees rare, we should keep them with care
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Manjula Rathnayaka
>> Technical Lead
>> WSO2, Inc.
>> Mobile:+94 77 743 1987
>>
>> 

Re: [Architecture] How can we improve our profiles story?

2016-10-10 Thread Muhammed Shariq
The issue with packing all config files according to a particular profile
is that we'll end up packing many duplicate files, which might not be
optimal. One way to get around this problem would be to use the
ConfigResolver / deployment.properties mechanism introduced in C5 to
override only the required configs; this way we would have a
deployment.properties file per profile. We'll need to make some changes in
the ConfigResolver to pick the profile-specific property file.
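
To make that concrete, a profile-specific override file could look roughly
like the sketch below. The file name, section headers, and keys are only
assumptions for illustration; the exact syntax would follow whatever the
ConfigResolver ends up supporting.

    # deployment.gateway.properties - hypothetical overrides for a "gateway" profile.
    # Section headers name the config file being overridden; keys and values are illustrative.
    [carbon.yml]
    ports.offset=2

    [axis2.xml]
    transports.http.port=8290
    transports.https.port=8253

The tool would then only have to copy the matching properties file for the
selected profile instead of duplicating whole config files.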

Also, for the OSGi repository we could use the bundles.info file as you
suggested, but I was wondering whether it's possible to package the jars
required for a particular profile in subdirectories. So inside the plugins/
directory we'll have a commons/ dir for the jars common to all the
profiles, and subdirectories to hold the profile-specific jars. If I'm not
mistaken, we'll have to modify the carbon-p2-plugin to do this.
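
As a rough illustration of that layout (the directory names and the bundle
entry below are made up; the bundles.info line just follows the usual Equinox
simpleconfigurator format of symbolic-name,version,location,start-level,auto-start):

    plugins/
      commons/          <- bundles shared by every profile
        org.wso2.carbon.core_5.2.0.jar
      gateway/          <- bundles only the gateway profile needs
      key-manager/      <- bundles only the key manager profile needs

    # Example line in a profile-specific bundles.info pointing into those folders:
    org.wso2.carbon.core,5.2.0,../plugins/commons/org.wso2.carbon.core_5.2.0.jar,4,true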

WDYT?


On Mon, Oct 10, 2016 at 2:42 PM, Kishanthan Thangarajah  wrote:

> I like the idea of providing a descriptor about a profile (describes what
> artifacts that should be included) and use a tool or a script to create the
> profile specific runtime pack. But we also need to consider what Sameera
> mentioned. I see the issue can come only with configuration repo because,
> different profiles have different configurations (eg axis2.xml). The base
> distribution should pack all the config files according to different
> profile and from the descriptor, we can point to the correct config
> file(s). Other repositories (artifacts and osgi) do not have such issue and
> for osgi, we could use the bundles.info file as profile specific at
> descriptor, which will then read by the tool here.
>
> WDYT?
>
> Thanks,
> Kishanthan.
>
> On Fri, Oct 7, 2016 at 7:50 AM, Muhammed Shariq  wrote:
>
>> Hi.
>>
>> I had a chat with Azeez and Lakmal to discuss some ideas on improving the
>> profile support by taking the Cloud / container architecture into
>> consideration.
>>
>> We discussed an approach similar to Option-2 in the previous mail where
>> we package all artifacts into one distribution and provide a tool to create
>> the profile specific bear minimum pack. The tool should provide the
>> following functionality;
>>
>> 1. List the available profiles / runtimes for a given distro
>> 2. Once the user selects a profile, the tool should create the minimum
>> pack
>> - The pack should provide the meta info needed to create the pack, we
>> could use a yaml config for example;
>>type: deployment
>> webapp:
>>   - admin.war
>>   - axc.war
>> jaggeryapps:
>>- storex
>>- abcdx
>> services:
>>- foo.aar
>> - Similarly we'll need some meta descriptors for the jars and configs
>> as well, we could either use the bundles.info to parse the required jars
>> or look for a better option. We might also have to define a way to specify
>> configs in a profile specific manner so we can simplify the process of
>> creating the bear minimum runtime.
>>
>> 3. Provide the option create docker-compose yaml that would take the
>> created archives and deploy it in containers. Ideally we should be able to
>> provide a fully functional docker based distributed deployment (with host
>> entries, databases etc) OOTB.
>>
>> If we are to go with this approach, we need to rethink our packaging
>> stricture to be able to create the profile specific pack. With this
>> approach, I feel we are moving away from P2 based profiles, so maybe we can
>> refer to the minimum packs as a "runtime" of product.
>>
>> Any thoughts, suggestions?
>>
>>
>> On Wed, Oct 5, 2016 at 11:16 AM, Muhammed Shariq  wrote:
>>
>>> Hi,
>>>
>>> If we are to reduce the pack size to bear minimum and pack only the
>>> essential artifacts, we can use one of the following approaches;
>>>
>>> 1. Build a bear minimum distribution (with only the required jars,
>>> config and artifacts) at build time (maven build)
>>> 2. Pack all the artifacts into one distribution (like we do now) and
>>> provide a script / tool to create the bear minimum pack at runtime.
>>>
>>> If we go ahead with Option-1, then we will be creating multiple
>>> distributions for a single product so a product like API-M will have many
>>> different distributions. This I feel will make the simple case too
>>> complicated, specially for a user trying to get started.
>>>
>>> With Option-2, we can still have the default profile as it is for the
>>> simplest case, but provide users the ability to create profile specific
>>> distributions for larger deployments. Users can then use these profile
>>> specific distributions to create their images.
>>>
>>> In both cases, I feel we are moving away from using profiles as we used
>>> to use them since we are creating a pack with only the required jars and
>>> artifacts.
>>>
>>> Considering these factors, should we look to creating only the container
>>> friendly bear minimum distribution (Option-1) or provide the ability to a
>>> 

Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Lakmal Warusawithana
On Tue, Oct 11, 2016 at 10:06 AM, Manjula Rathnayake 
wrote:

> Hi Sanjeewa,
>
> Are you suggesting an API manager deployment pattern using containers?
> Container per tenant and per gateway, key manager etc?
>

Yes. With C5 APIM we will have a per-tenant gateway, store, publisher,
key manager, etc.


>
> thank you.
>
> On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:
>
>> Hi Sanjeewa,
>>
>> My understanding is gateway pool is not tenant specific and will not be
>> returned but rather terminated?
>>
>> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
>> wrote:
>>
>>> Hi All,
>>> Starting this mail thread to continue discussion on "speedup instance
>>> activate time when we move ahead with container based deployments". As of
>>> now all of us are working on speedup server start time and deploy instances
>>> on demand with the help of load balancer. Please note that this is not
>>> alternative/replacement to effort on starting server faster(2 secs or
>>> less). This is about make request serving more faster even with small
>>> server startup time.
>>>
>>> When we do container based deployment standard approach we discussed so
>>> far was,
>>>
>>>- At the first request check the tenant and service from URL and do
>>>lookup for running instances.
>>>- If matching instance available route traffic to that.
>>>- Else spawn new instance using template(or image).  When we spawn
>>>this new instance we need to let it know what is the current tenant and
>>>data sources, configurations it should use.
>>>- Then route requests to new node.
>>>- After some idle time this instance may terminate.
>>>
>>> *Suggestion*
>>> If we maintain hot pool(started and ready to serve requests) of servers
>>> for each server type(API Gateway, Identity Server etc) then we can cutoff
>>> server startup time + IaaS level spawn time from above process. Then when
>>> requests comes to wso2.com tenants API Gateway we can pick instance
>>> from gateway instance pool and set wso2.com tenant context and data
>>> source using service call(assuming setting context and configurations is
>>> much faster).
>>>
>>> *Implementation*
>>> For this we need to implement some plug-in to instance spawn process.
>>> Then instead of spawning new instance it will pick one instance from the
>>> pool and configure it to behave as specific tenant.
>>> For this each instance running in pool can open up port, so load
>>> balancer or scaling component can call it and tell what is the tenant and
>>> configurations.
>>> Once it configured server close that configuration port and start
>>> traffic serving.
>>> After some idle time this instance may terminate.
>>>
>>> This approach will help us if we met following condition.
>>> (Instance loading time + Server startup time + Server Lookup) *>*
>>> (Server Lookup + Loading configuration and tenant of running server from
>>> external call)
>>>
>>> Any thoughts on this?
>>>
>>
My concern is the practical issues with adding this intelligence into
current load balancers (not all load balancers support extensibility). And
people can use different load balancers based on their preferences.


>>> Thanks,
>>> sanjeewa.
>>> --
>>>
>>> *Sanjeewa Malalgoda*
>>> WSO2 Inc.
>>> Mobile : +94713068779
>>>
>>> blog
>>> :http://sanjeewamalalgoda.blogspot.com/
>>> 
>>>
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>>
>> Best Regards,
>>
>> Malaka Silva
>> Senior Technical Lead
>> M: +94 777 219 791
>> Tel : 94 11 214 5345
>> Fax :94 11 2145300
>> Skype : malaka.sampath.silva
>> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
>> Blog : http://mrmalakasilva.blogspot.com/
>>
>> WSO2, Inc.
>> lean . enterprise . middleware
>> https://wso2.com/signature
>> http://www.wso2.com/about/team/malaka-silva/
>> 
>> https://store.wso2.com/store/
>>
>> Don't make Trees rare, we should keep them with care
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Manjula Rathnayaka
> Technical Lead
> WSO2, Inc.
> Mobile:+94 77 743 1987
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Lakmal Warusawithana
Director - Cloud Architecture; WSO2 Inc.
Mobile : +94714289692
Blog : http://lakmalsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Manjula Rathnayake
Hi Sanjeewa,

Are you suggesting an API manager deployment pattern using containers?
Container per tenant and per gateway, key manager etc?

thank you.

On Mon, Oct 10, 2016 at 9:06 PM, Malaka Silva  wrote:

> Hi Sanjeewa,
>
> My understanding is gateway pool is not tenant specific and will not be
> returned but rather terminated?
>
> On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
> wrote:
>
>> Hi All,
>> Starting this mail thread to continue discussion on "speedup instance
>> activate time when we move ahead with container based deployments". As of
>> now all of us are working on speedup server start time and deploy instances
>> on demand with the help of load balancer. Please note that this is not
>> alternative/replacement to effort on starting server faster(2 secs or
>> less). This is about make request serving more faster even with small
>> server startup time.
>>
>> When we do container based deployment standard approach we discussed so
>> far was,
>>
>>- At the first request check the tenant and service from URL and do
>>lookup for running instances.
>>- If matching instance available route traffic to that.
>>- Else spawn new instance using template(or image).  When we spawn
>>this new instance we need to let it know what is the current tenant and
>>data sources, configurations it should use.
>>- Then route requests to new node.
>>- After some idle time this instance may terminate.
>>
>> *Suggestion*
>> If we maintain hot pool(started and ready to serve requests) of servers
>> for each server type(API Gateway, Identity Server etc) then we can cutoff
>> server startup time + IaaS level spawn time from above process. Then when
>> requests comes to wso2.com tenants API Gateway we can pick instance from
>> gateway instance pool and set wso2.com tenant context and data source
>> using service call(assuming setting context and configurations is much
>> faster).
>>
>> *Implementation*
>> For this we need to implement some plug-in to instance spawn process.
>> Then instead of spawning new instance it will pick one instance from the
>> pool and configure it to behave as specific tenant.
>> For this each instance running in pool can open up port, so load balancer
>> or scaling component can call it and tell what is the tenant and
>> configurations.
>> Once it configured server close that configuration port and start traffic
>> serving.
>> After some idle time this instance may terminate.
>>
>> This approach will help us if we met following condition.
>> (Instance loading time + Server startup time + Server Lookup) *>*
>> (Server Lookup + Loading configuration and tenant of running server from
>> external call)
>>
>> Any thoughts on this?
>>
>> Thanks,
>> sanjeewa.
>> --
>>
>> *Sanjeewa Malalgoda*
>> WSO2 Inc.
>> Mobile : +94713068779
>>
>> blog
>> :http://sanjeewamalalgoda.blogspot.com/
>> 
>>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> Best Regards,
>
> Malaka Silva
> Senior Technical Lead
> M: +94 777 219 791
> Tel : 94 11 214 5345
> Fax :94 11 2145300
> Skype : malaka.sampath.silva
> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
> Blog : http://mrmalakasilva.blogspot.com/
>
> WSO2, Inc.
> lean . enterprise . middleware
> https://wso2.com/signature
> http://www.wso2.com/about/team/malaka-silva/
> 
> https://store.wso2.com/store/
>
> Don't make Trees rare, we should keep them with care
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Manjula Rathnayaka
Technical Lead
WSO2, Inc.
Mobile:+94 77 743 1987
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Feature requirements on IS to be the sole Key Manager of API Manager

2016-10-10 Thread Abimaran Kugathasan
Hi Nuwan,

I need a few clarifications on this.

On Mon, Oct 10, 2016 at 6:18 PM, Nuwan Dias  wrote:

> Hi,
>
> With the current efforts on moving to C5 based architecture, API Manager
> plans to rely on standalone IS (without installing features) so that it can
> operate as the Key Manager for the API Gateway. In order to achieve this,
> there are a few feature gaps in IS we have identified earlier that need to
> be filled in. Please see the list below.
>
> 1. A Dynamic Client Registration Endpoint
>
> When users create Applications and Keys on the API Store, we need to call
> an Endpoint on IS to register the Application. Once an Application is
> registered, API Manager also requires an endpoint to retrieve the
> Application's information by querying using the Application name.
>

Does this mean API Manager will no longer keep any data related to the
OAuth application? Will it keep only the OAuth application ID/name against
the API subscription, and will API Manager call IS for all OAuth
application related queries?

What will be the workaround for any third-party Key Managers which don't
have the above capabilities (storing OAuth application related information
themselves)? Also, with this effort, do we no longer have the "API Manager
as Key Manager" scenario, i.e. can only Identity Server act as the Key
Manager?


>
> 2. A Resource Registration Endpoint
>
> When defining scopes and associating Resources to scopes, it is required
> to register these scopes on IS. Scopes should also have a role (or similar)
> binding so that we can perform RBAC (at a minimal) for scopes. It is ideal
> to make this an extensible framework so that others could associate thing
> like permissions to scope as well.
>
> 3. A Resource Validation Endpoint against scopes
>
> When the Gateway grants access on a particular token to a resource, it
> needs to check if the given token bears the necessary scope to access that
> resource.
>
> At the moment we have identified the above 3 as mandatory features to be
> supported by IS if the said integration is to be feasible. We would be
> grateful if these could be taken into consideration when IS is being built
> on C5.
>
> Thanks,
> NuwanD.
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Thanks
Abimaran Kugathasan
Senior Software Engineer - API Technologies

Email : abima...@wso2.com
Mobile : +94 773922820


  
  
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Feature requirements on IS to be the sole Key Manager of API Manager

2016-10-10 Thread Johann Nallathamby
+1. It is important that we support this in the next major IS release from
day 1. In fact, most of these APIs are part of the UMA standard. Farasath
did the UMA backend implementation as a GSoC project. You can find more
details on the implementation at [1].

[1] [GSoC] User-Managed Access (UMA) Profile for OAuth2

On Mon, Oct 10, 2016 at 6:18 PM, Nuwan Dias  wrote:

> Hi,
>
> With the current efforts on moving to C5 based architecture, API Manager
> plans to rely on standalone IS (without installing features) so that it can
> operate as the Key Manager for the API Gateway. In order to achieve this,
> there are a few feature gaps in IS we have identified earlier that need to
> be filled in. Please see the list below.
>
> 1. A Dynamic Client Registration Endpoint
>
> When users create Applications and Keys on the API Store, we need to call
> an Endpoint on IS to register the Application. Once an Application is
> registered, API Manager also requires an endpoint to retrieve the
> Application's information by querying using the Application name.
>
> 2. A Resource Registration Endpoint
>
> When defining scopes and associating Resources to scopes, it is required
> to register these scopes on IS. Scopes should also have a role (or similar)
> binding so that we can perform RBAC (at a minimal) for scopes. It is ideal
> to make this an extensible framework so that others could associate thing
> like permissions to scope as well.
>
> 3. A Resource Validation Endpoint against scopes
>
> When the Gateway grants access on a particular token to a resource, it
> needs to check if the given token bears the necessary scope to access that
> resource.
>
> At the moment we have identified the above 3 as mandatory features to be
> supported by IS if the said integration is to be feasible. We would be
> grateful if these could be taken into consideration when IS is being built
> on C5.
>
> Thanks,
> NuwanD.
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 
Thanks & Regards,

*Johann Dilantha Nallathamby*
Technical Lead & Product Lead of WSO2 Identity Server
Governance Technologies Team
WSO2, Inc.
lean.enterprise.middleware

Mobile - *+9476950*
Blog - *http://nallaa.wordpress.com *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Malaka Silva
Hi Sanjeewa,

My understanding is that the gateway pool is not tenant specific, and an
instance will not be returned to the pool but rather terminated?

On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
wrote:

> Hi All,
> Starting this mail thread to continue discussion on "speedup instance
> activate time when we move ahead with container based deployments". As of
> now all of us are working on speedup server start time and deploy instances
> on demand with the help of load balancer. Please note that this is not
> alternative/replacement to effort on starting server faster(2 secs or
> less). This is about make request serving more faster even with small
> server startup time.
>
> When we do container based deployment standard approach we discussed so
> far was,
>
>- At the first request check the tenant and service from URL and do
>lookup for running instances.
>- If matching instance available route traffic to that.
>- Else spawn new instance using template(or image).  When we spawn
>this new instance we need to let it know what is the current tenant and
>data sources, configurations it should use.
>- Then route requests to new node.
>- After some idle time this instance may terminate.
>
> *Suggestion*
> If we maintain hot pool(started and ready to serve requests) of servers
> for each server type(API Gateway, Identity Server etc) then we can cutoff
> server startup time + IaaS level spawn time from above process. Then when
> requests comes to wso2.com tenants API Gateway we can pick instance from
> gateway instance pool and set wso2.com tenant context and data source
> using service call(assuming setting context and configurations is much
> faster).
>
> *Implementation*
> For this we need to implement some plug-in to instance spawn process.
> Then instead of spawning new instance it will pick one instance from the
> pool and configure it to behave as specific tenant.
> For this each instance running in pool can open up port, so load balancer
> or scaling component can call it and tell what is the tenant and
> configurations.
> Once it configured server close that configuration port and start traffic
> serving.
> After some idle time this instance may terminate.
>
> This approach will help us if we met following condition.
> (Instance loading time + Server startup time + Server Lookup) *>* (Server
> Lookup + Loading configuration and tenant of running server from external
> call)
>
> Any thoughts on this?
>
> Thanks,
> sanjeewa.
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> blog :http://sanjeewamalalgoda.
> blogspot.com/ 
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Best Regards,

Malaka Silva
Senior Technical Lead
M: +94 777 219 791
Tel : 94 11 214 5345
Fax :94 11 2145300
Skype : malaka.sampath.silva
LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
Blog : http://mrmalakasilva.blogspot.com/

WSO2, Inc.
lean . enterprise . middleware
https://wso2.com/signature
http://www.wso2.com/about/team/malaka-silva/

https://store.wso2.com/store/

Don't make Trees rare, we should keep them with care
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-10 Thread Sanjeewa Malalgoda
Hi All,
Starting this mail thread to continue the discussion on "speeding up
instance activation time when we move ahead with container based
deployments". As of now we are all working on speeding up server start time
and deploying instances on demand with the help of the load balancer.
Please note that this is not an alternative/replacement to the effort on
starting the server faster (2 secs or less). This is about making request
serving faster even with a small server startup time.

With container based deployments, the standard approach we discussed so far
was:

   - At the first request, check the tenant and service from the URL and do
   a lookup for running instances.
   - If a matching instance is available, route traffic to it.
   - Else, spawn a new instance using a template (or image). When we spawn
   this new instance we need to let it know the current tenant and the data
   sources and configurations it should use.
   - Then route requests to the new node.
   - After some idle time this instance may terminate.

*Suggestion*
If we maintain a hot pool (started and ready to serve requests) of servers
for each server type (API Gateway, Identity Server, etc.) then we can cut
the server startup time + IaaS-level spawn time out of the above process.
Then, when a request comes to the wso2.com tenant's API Gateway, we can
pick an instance from the gateway instance pool and set the wso2.com tenant
context and data source using a service call (assuming setting the context
and configurations is much faster).

*Implementation*
For this we need to implement a plug-in to the instance spawn process.
Instead of spawning a new instance, it will pick an instance from the pool
and configure it to behave as a specific tenant.
For this, each instance running in the pool can open up a configuration
port, so the load balancer or scaling component can call it and tell it
which tenant and configurations to use.
Once configured, the server closes that configuration port and starts
serving traffic.
After some idle time this instance may terminate.
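
A very rough sketch of that configuration step is below. The
/internal/configure endpoint, its payload, the port, and the class itself
are all hypothetical; nothing like this exists yet.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    /**
     * Sketch only: pushes the tenant context and data source configuration to a
     * warm, pooled instance over its (hypothetical) configuration port.
     */
    public class PooledInstanceConfigurator {

        public void assignTenant(String instanceHost, int configPort,
                                 String tenantDomain, String dataSourceJson) throws Exception {
            URL configUrl = new URL("http://" + instanceHost + ":" + configPort + "/internal/configure");
            HttpURLConnection conn = (HttpURLConnection) configUrl.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");

            // Tell the pooled instance which tenant and data sources it should load.
            String payload = "{\"tenant\":\"" + tenantDomain + "\",\"dataSources\":" + dataSourceJson + "}";
            try (OutputStream out = conn.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }

            if (conn.getResponseCode() != 200) {
                throw new IllegalStateException("Pooled instance refused the tenant configuration");
            }
            // On success the instance is expected to close this configuration
            // port and start serving traffic for the assigned tenant.
        }
    }

The cost of this call is the "loading configuration and tenant of a running
server from an external call" term in the condition below.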

This approach will help us if the following condition is met:
(Instance loading time + Server startup time + Server lookup) *>* (Server
lookup + Loading the configuration and tenant of a running server via an
external call)

Any thoughts on this?

Thanks,
sanjeewa.
-- 

*Sanjeewa Malalgoda*
WSO2 Inc.
Mobile : +94713068779

blog : http://sanjeewamalalgoda.blogspot.com/

___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Feature requirements on IS to be the sole Key Manager of API Manager

2016-10-10 Thread Nuwan Dias
Hi,

With the current effort on moving to a C5 based architecture, API Manager
plans to rely on a standalone IS (without installing features on it) so
that it can operate as the Key Manager for the API Gateway. In order to
achieve this, there are a few feature gaps in IS, identified earlier, that
need to be filled in. Please see the list below.

1. A Dynamic Client Registration Endpoint

When users create Applications and Keys on the API Store, we need to call
an endpoint on IS to register the Application. Once an Application is
registered, API Manager also requires an endpoint to retrieve the
Application's information by querying by the Application name.
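
For illustration, a dynamic client registration request in the style of RFC
7591 might look like the sketch below; the endpoint path and the field
values are assumptions, not an agreed contract.

    POST /identity/register HTTP/1.1
    Content-Type: application/json

    {
      "client_name": "PizzaShackApp",
      "redirect_uris": ["https://localhost/callback"],
      "grant_types": ["password", "client_credentials"]
    }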

2. A Resource Registration Endpoint

When defining scopes and associating Resources to scopes, it is required to
register these scopes on IS. Scopes should also have a role (or similar)
binding so that we can perform RBAC (at a minimum) for scopes. It is ideal
to make this an extensible framework so that others could associate things
like permissions with scopes as well.
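
As a purely illustrative sketch (the endpoint and payload shape below are
assumptions, loosely modelled on UMA-style resource set registration), such
a registration call could carry the scope-to-role bindings like this:

    POST /identity/resources HTTP/1.1
    Content-Type: application/json

    {
      "resource": "/menu/*",
      "scopes": [
        { "name": "menu_view",  "roles": ["Internal/subscriber"] },
        { "name": "menu_admin", "roles": ["admin"] }
      ]
    }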

3. A Resource Validation Endpoint against scopes

When the Gateway grants a particular token access to a resource, it needs
to check whether the given token bears the necessary scope to access that
resource.

At the moment we have identified the above 3 as mandatory features to be
supported by IS if the said integration is to be feasible. We would be
grateful if these could be taken into consideration when IS is being built
on C5.

Thanks,
NuwanD.

-- 
Nuwan Dias

Software Architect - WSO2, Inc. http://wso2.com
email : nuw...@wso2.com
Phone : +94 777 775 729
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] Life cycle Management

2016-10-10 Thread Prasanna Dangalla
Hi Rajith,

On Mon, Oct 10, 2016 at 3:30 PM, Rajith Roshan  wrote:

> Hi,
>>
> please find the images below
>
>> On Mon, Oct 10, 2016 at 3:16 PM, Rajith Roshan  wrote:
>>
>>> Hi all,
>>>
>>> Life cycle is an integral part in any product. Each SOA governance
>>> related artifacts can have their own life cycle. Capabilities provided in
>>> order to manage life cycles not only allow you to properly organize your
>>> assets but also provides many extensibilities (For ex through custom
>>> executors)
>>>
>>> RequirementCurrent API life cycle of API manager completely depends  on
>>> the life cycle implementation provided by the registry. Since we are moving
>>> away from the registry concept we need a completely new life cycle
>>> management framework which cater for life cycle management within API
>>> Manager and should be able to ship with other products which require life
>>> cycle management.
>>>
>>> Proposed SolutionThe basic idea is to completely decouple the life
>>> cycle framework from the system to which its provide life cycle
>>> capabilities. I.e the system which uses life cycle will not change the
>>> behavior of the life cycle framework and only vice versa will be
>>> applicable. The framework should exist independent of the asset type to
>>> which it provides life cycle capability
>>>
>>>
>>> ​
>>>
>>>
>>> The mapping should be maintained by asset type in order to associate
>>> with life cycle data. To be more specific in database schema level each
>>> asset type should update their tables (add extra columns to maintain
>>> mapping with life cycle data) in order to map  life cycle data.
>>>
>>> The external systems which uses life cycle service should connect with
>>> the service in order to have a unique life cycle id which is generated
>>> by the service. This id should be stored in the external system in order to
>>> maintain the mapping (Each asset should have its own life cycle id). On the
>>> other hand life cycle framework will also store this id in order to provide
>>> all the life cycle related operations for a particular asset.
>>>
>>>
>>> ​
>>>
>>> Basically, we will be providing a class (ManagedLifecycle) with all the
>>> required basic operations. Any asset which requires life cycle management
>>> can extend this class.
>>> Further this can be extended to support features like check list items
>>> as well
>>> Features supported
>>>
>>>-
>>>
>>>Custom Inputs - User should  provide with the capability in order to
>>>save custom values per each state, which can be passed to executors .
>>>- Executors - Which are custom classes executed during life cycle
>>>state change operation
>>>
>>>
>>> Please provide you valuable inputs.
>>>
>>> Thanks!
>>> Rajith
>>>
>>> --
>>> Rajith Roshan
>>> Software Engineer, WSO2 Inc.
>>> Mobile: +94-72-642-8350 <%2B94-71-554-8430>
>>>
>>
>
>
> --
> Rajith Roshan
> Software Engineer, WSO2 Inc.
> Mobile: +94-72-642-8350 <%2B94-71-554-8430>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>
Since this proposed feature runs separately, are we keeping the DB tables
in a separate database? If so, we can separate them from the products and
bind them to the feature. IMHO that would be an advantage.

Regards,

*Prasanna Dangalla*
Senior Software Engineer, WSO2, Inc.; http://wso2.com/
lean.enterprise.middleware


*cell: +94 718 11 27 51*
*twitter: @prasa77*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] Life cycle Management

2016-10-10 Thread Rajith Roshan
>
> Hi,
>
please find the images below

> On Mon, Oct 10, 2016 at 3:16 PM, Rajith Roshan  wrote:
>
>> Hi all,
>>
>> Life cycle is an integral part in any product. Each SOA governance
>> related artifacts can have their own life cycle. Capabilities provided in
>> order to manage life cycles not only allow you to properly organize your
>> assets but also provides many extensibilities (For ex through custom
>> executors)
>>
>> RequirementCurrent API life cycle of API manager completely depends  on
>> the life cycle implementation provided by the registry. Since we are moving
>> away from the registry concept we need a completely new life cycle
>> management framework which cater for life cycle management within API
>> Manager and should be able to ship with other products which require life
>> cycle management.
>>
>> Proposed SolutionThe basic idea is to completely decouple the life cycle
>> framework from the system to which its provide life cycle capabilities. I.e
>> the system which uses life cycle will not change the behavior of the life
>> cycle framework and only vice versa will be applicable. The framework
>> should exist independent of the asset type to which it provides life cycle
>> capability
>>
>>
>> ​
>>
>>
>> The mapping should be maintained by asset type in order to associate with
>> life cycle data. To be more specific in database schema level each asset
>> type should update their tables (add extra columns to maintain mapping with
>> life cycle data) in order to map  life cycle data.
>>
>> The external systems which uses life cycle service should connect with
>> the service in order to have a unique life cycle id which is generated
>> by the service. This id should be stored in the external system in order to
>> maintain the mapping (Each asset should have its own life cycle id). On the
>> other hand life cycle framework will also store this id in order to provide
>> all the life cycle related operations for a particular asset.
>>
>>
>> ​
>>
>> Basically, we will be providing a class (ManagedLifecycle) with all the
>> required basic operations. Any asset which requires life cycle management
>> can extend this class.
>> Further this can be extended to support features like check list items as
>> well
>> Features supported
>>
>>-
>>
>>Custom Inputs - User should  provide with the capability in order to
>>save custom values per each state, which can be passed to executors .
>>- Executors - Which are custom classes executed during life cycle
>>state change operation
>>
>>
>> Please provide you valuable inputs.
>>
>> Thanks!
>> Rajith
>>
>> --
>> Rajith Roshan
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-72-642-8350 <%2B94-71-554-8430>
>>
>


-- 
Rajith Roshan
Software Engineer, WSO2 Inc.
Mobile: +94-72-642-8350 <%2B94-71-554-8430>
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] [APIM] Life cycle Management

2016-10-10 Thread Rajith Roshan
Hi all,

Life cycle management is an integral part of any product. Each SOA
governance related artifact can have its own life cycle. The capabilities
provided to manage life cycles not only allow you to properly organize your
assets but also provide many extension points (for example, through custom
executors).

Requirement
The current API life cycle of API Manager depends completely on the life
cycle implementation provided by the registry. Since we are moving away
from the registry concept, we need a completely new life cycle management
framework which caters for life cycle management within API Manager and
which can also be shipped with other products that require life cycle
management.

Proposed Solution
The basic idea is to completely decouple the life cycle framework from the
system to which it provides life cycle capabilities, i.e. the system which
uses the life cycle will not change the behavior of the life cycle
framework; only the reverse will apply. The framework should exist
independent of the asset type to which it provides life cycle capability.




The mapping should be maintained per asset type in order to associate each
asset with its life cycle data. To be more specific, at the database schema
level each asset type should update its tables (add extra columns) in order
to maintain the mapping with the life cycle data.

The external systems which use the life cycle service should connect to the
service in order to obtain a unique life cycle id, which is generated by
the service. This id should be stored in the external system in order to
maintain the mapping (each asset should have its own life cycle id). The
life cycle framework will also store this id in order to provide all the
life cycle related operations for a particular asset.



Basically, we will be providing a class (ManagedLifecycle) with all the
required basic operations. Any asset which requires life cycle management
can extend this class.
Further, this can be extended to support features like checklist items as
well.
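
A minimal sketch of what such a base class could look like is below; the
method names and the small LifecycleService contract are assumptions for
illustration, not the actual API.

    /**
     * Illustrative sketch only; the real ManagedLifecycle API is not defined in
     * this thread.
     */
    public abstract class ManagedLifecycle {

        // Unique life cycle id generated by the life cycle service and also
        // stored by the external system (e.g. in an extra column of the
        // asset's own table) to maintain the mapping.
        private String lifecycleId;
        private String lifecycleState;

        /** Associates this asset with a named life cycle and keeps the generated id. */
        public void attachLifecycle(String lifecycleName, LifecycleService service) {
            this.lifecycleId = service.createLifecycleEntry(lifecycleName);
            this.lifecycleState = service.getCurrentState(lifecycleId);
        }

        /** Moves the asset to a new state; custom executors would run as part of this call. */
        public void executeLifecycleEvent(String targetState, LifecycleService service) {
            service.changeState(lifecycleId, targetState);
            this.lifecycleState = targetState;
        }

        public String getLifecycleId() {
            return lifecycleId;
        }

        public String getLifecycleState() {
            return lifecycleState;
        }

        /** Minimal service contract assumed for this sketch. */
        public interface LifecycleService {
            String createLifecycleEntry(String lifecycleName);
            String getCurrentState(String lifecycleId);
            void changeState(String lifecycleId, String targetState);
        }
    }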
Features supported

   - Custom inputs - the user should be provided with the capability to
   save custom values for each state, which can be passed to executors.
   - Executors - custom classes that are executed during a life cycle state
   change operation.


Please provide your valuable input.

Thanks!
Rajith

-- 
Rajith Roshan
Software Engineer, WSO2 Inc.
Mobile: +94-72-642-8350
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] How can we improve our profiles story?

2016-10-10 Thread Kishanthan Thangarajah
I like the idea of providing a descriptor about a profile (describing what
artifacts should be included) and using a tool or a script to create the
profile-specific runtime pack. But we also need to consider what Sameera
mentioned. I see the issue arising only with the configuration repo,
because different profiles have different configurations (e.g. axis2.xml).
The base distribution should pack all the config files according to the
different profiles, and from the descriptor we can point to the correct
config file(s). The other repositories (artifacts and osgi) do not have
such an issue, and for osgi we could reference a profile-specific
bundles.info file in the descriptor, which will then be read by the tool.
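
For illustration only, the descriptor could point at the profile-specific
config files and bundles.info roughly like this (the keys and paths below
are assumptions, not an agreed format):

    profile: gateway
    configs:
      - conf/gateway/axis2.xml
      - conf/gateway/deployment.properties
    osgi:
      bundlesInfo: osgi/profiles/gateway/bundles.info
    artifacts:
      webapps:
        - admin.war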

WDYT?

Thanks,
Kishanthan.

On Fri, Oct 7, 2016 at 7:50 AM, Muhammed Shariq  wrote:

> Hi.
>
> I had a chat with Azeez and Lakmal to discuss some ideas on improving the
> profile support by taking the Cloud / container architecture into
> consideration.
>
> We discussed an approach similar to Option-2 in the previous mail where we
> package all artifacts into one distribution and provide a tool to create
> the profile specific bear minimum pack. The tool should provide the
> following functionality;
>
> 1. List the available profiles / runtimes for a given distro
> 2. Once the user selects a profile, the tool should create the minimum pack
> - The pack should provide the meta info needed to create the pack, we
> could use a yaml config for example;
>type: deployment
> webapp:
>   - admin.war
>   - axc.war
> jaggeryapps:
>- storex
>- abcdx
> services:
>- foo.aar
> - Similarly we'll need some meta descriptors for the jars and configs
> as well, we could either use the bundles.info to parse the required jars
> or look for a better option. We might also have to define a way to specify
> configs in a profile specific manner so we can simplify the process of
> creating the bear minimum runtime.
>
> 3. Provide the option create docker-compose yaml that would take the
> created archives and deploy it in containers. Ideally we should be able to
> provide a fully functional docker based distributed deployment (with host
> entries, databases etc) OOTB.
>
> If we are to go with this approach, we need to rethink our packaging
> stricture to be able to create the profile specific pack. With this
> approach, I feel we are moving away from P2 based profiles, so maybe we can
> refer to the minimum packs as a "runtime" of product.
>
> Any thoughts, suggestions?
>
>
> On Wed, Oct 5, 2016 at 11:16 AM, Muhammed Shariq  wrote:
>
>> Hi,
>>
>> If we are to reduce the pack size to bear minimum and pack only the
>> essential artifacts, we can use one of the following approaches;
>>
>> 1. Build a bear minimum distribution (with only the required jars, config
>> and artifacts) at build time (maven build)
>> 2. Pack all the artifacts into one distribution (like we do now) and
>> provide a script / tool to create the bear minimum pack at runtime.
>>
>> If we go ahead with Option-1, then we will be creating multiple
>> distributions for a single product so a product like API-M will have many
>> different distributions. This I feel will make the simple case too
>> complicated, specially for a user trying to get started.
>>
>> With Option-2, we can still have the default profile as it is for the
>> simplest case, but provide users the ability to create profile specific
>> distributions for larger deployments. Users can then use these profile
>> specific distributions to create their images.
>>
>> In both cases, I feel we are moving away from using profiles as we used
>> to use them since we are creating a pack with only the required jars and
>> artifacts.
>>
>> Considering these factors, should we look to creating only the container
>> friendly bear minimum distribution (Option-1) or provide the ability to a
>> create profile specific pack from the default distribution?
>>
>>
>> On Wed, Oct 5, 2016 at 9:03 AM, Lakmal Warusawithana 
>> wrote:
>>
>>>
>>>
>>> On Tue, Oct 4, 2016 at 5:39 PM, Afkham Azeez  wrote:
>>>
 At runtime, there should be profile specific packs shouldn't have
 anything extra other than the bear minimum. This is to make it container
 friendly as well.

>>>
>>> Yes, reducing image size is critical to support container native
>>> architecture.
>>>
>>>

 On Tue, Oct 4, 2016 at 5:27 PM, Muhammed Shariq 
 wrote:

> Hi folks,
>
> I had a chat with Sameera and Jayanga on how we can improve support
> for managing configurations for a particular profile.
>
> We were discussing the possibility of extending the ConfigResolver [1]
> concept to manage profile specific configurations. With ConfigResolver, we
> have the ability to override certain configurations using the
> deployment.properties file, this way we only need modify a