Re: [Architecture] [APIM] Support to Import/Export Throttling Policies Using the API Controller

2020-11-03 Thread Uvindra Dias Jayasinha
Thanks for your feedback, Harsha. Yes, the points you have made are valid.

The current feedback from the team is that, since an admin user is
responsible for defining and updating throttle policies, supporting this
via apictl is not a priority. If we get more requests for supporting this
via apictl, we can take it into consideration.

On Tue, 3 Nov 2020 at 16:40, Harsha Kumara  wrote:

> I do agree that importing policies would be a one-time task, but that's
> not always true in general scenarios. Users should be able to update
> policies, and providing this support isn't hard. There can be situations
> that require increasing the limits or even modifying the visibility
> settings of policies. If the API flow is fully automated, update
> functionality becomes more useful. If the user modifies existing tiers,
> not providing update features may become problematic.
>
> Ideally you should think about minimizing the use of the UI or API if the
> user wants a fully automated API deployment flow.
>
> Bulk update is a good-to-have feature.
>
> Thanks,
> Harsha
>
> On Tue, Nov 3, 2020 at 9:59 PM Wasura Wattearachchi 
> wrote:
>
>> Hi,
>>
>> On Tue, Nov 3, 2020 at 4:08 PM Harsha Kumara  wrote:
>>
>>> What are the plans for updating the policies?
>>>
>>> It's required to revisit this feature from the perspective of CI/CD
>>> pipeline building. AFAIK APIs can now transition smoothly between
>>> environments. How this capability should work with a CI/CD pipeline
>>> should also be investigated. In a CI/CD pipeline, I think it is possible
>>> to change the throttle policies via placeholders.
>>>
>> Yes, what you have stated is correct. The throttling policies can be
>> changed via placeholders. But there are some limitations, as explained in
>> the section *Limitations in the Current Behaviour*. So our main aim is to
>> address those, which will ultimately smooth the support for CI/CD.
>>
>>> Any plans for bulk import and export of the policies? I think this
>>> option would be more usable from the perspective of throttling policies.
>>>
>> As explained before, since importing/exporting throttling policies is a
>> one-time procedure per environment, I don't think providing bulk
>> import/export functionality for policies will have a significant impact.
>> @Uvindra Dias Jayasinha  can provide more insight into this.
>>
>>> Another concern is that policies in lower environments can differ from
>>> those in the upper environments. Think more about how this should be
>>> addressed while APIs are moving across the environments.
>>>
>> Yes, different environments can have different policies. So when
>> importing an API, by changing the api.yaml file or the swagger.yaml file
>> to reference the corresponding throttling policy(ies), users can adjust
>> the policies according to the environment.
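>>
>> For example, a minimal hand-edited api.yaml sketch (the field name and
>> policy values here are illustrative and may differ between APIM
>> versions):
>>
>> # api.yaml fragment as exported from the dev environment
>> apiThrottlingPolicy: Bronze
>>
>> # the same fragment edited before importing to production
>> apiThrottlingPolicy: Gold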
>>
>> Thanks,
>> Wasura
>>
>>
>>> Thanks,
>>> Harsha
>>>
>>> On Tue, Nov 3, 2020 at 4:08 PM Wasura Wattearachchi 
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> As per the discussion, we will be addressing only the shortcomings
>>>> mentioned in the section *Limitations in the Current Behaviour*, to
>>>> make the behaviour more consistent and user-friendly. Since
>>>> importing/exporting throttling policies is a one-time procedure per
>>>> environment, we decided not to support full export/import functionality
>>>> with new commands.
>>>>
>>>> Thanks,
>>>> Wasura
>>>>
>>>> On Mon, Nov 2, 2020 at 5:43 PM Uvindra Dias Jayasinha 
>>>> wrote:
>>>>
>>>>> +Amila De Silva 
>>>>>
>>>>> On Mon, 2 Nov 2020 at 17:42, Uvindra Dias Jayasinha 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, 2 Nov 2020 at 17:36, Nuwan Dias  wrote:
>>>>>>
>>>>>>> Is the purpose of this feature to move throttling policies across
>>>>>>> environments or across product versions?
>>>>>>>
>>>>>>> Across environments
>>>>>>
>>>>>> On Mon, Nov 2, 2020 at 5:02 PM Wasura Wattearachchi 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> This is regarding the feature mentioned in [1] which requests to
>>>>>>>> support importing and exporting throttlin

Re: [Architecture] [APIM] Support to Import/Export Throttling Policies Using the API Controller

2020-11-02 Thread Uvindra Dias Jayasinha
+Amila De Silva 

On Mon, 2 Nov 2020 at 17:42, Uvindra Dias Jayasinha 
wrote:

>
>
> On Mon, 2 Nov 2020 at 17:36, Nuwan Dias  wrote:
>
>> Is the purpose of this feature to move throttling policies across
>> environments or across product versions?
>>
>> Across environments
>
> On Mon, Nov 2, 2020 at 5:02 PM Wasura Wattearachchi 
>> wrote:
>>
>>> Hi all,
>>>
>>> This is regarding the feature mentioned in [1] which requests to support
>>> importing and exporting throttling policies using the API Controller.
>>> Before discussing this new feature, let’s look at the current/existing
>>> behaviour of importing/exporting throttling policies and identify the
>>> limitations.
>>>
>>> Limitations in the Current Behaviour
>>>
>>> We have three (3) types of throttling policies.
>>>
>>> 1. Advanced Throttling Policies
>>>
>>> These can be added for an API or for a particular resource of an API.
>>>
>>>    A. When we export an API using the API Controller, if the Advanced
>>>    Throttling Policy was added to the whole API, then the throttling
>>>    policy name will be included in the api.yaml file.
>>>    B. When we export an API using the API Controller, if the Advanced
>>>    Throttling Policy was added to a particular resource, then the
>>>    throttling policy name will be included in the swagger.yaml file under
>>>    the particular resource name.
>>>    C. When the user imports an API which was exported as mentioned above,
>>>    the Advanced Throttling Policies will be assigned to the API or to the
>>>    resource as expected, if the policy currently exists in the API
>>>    Manager instance.
>>>    D. But if the user imports an API with an Advanced Throttling Policy
>>>    which is not currently available in the API Manager, the import will
>>>    fail.
>>>
>>> (This behaviour is ideal for handling all the types of throttling
>>> policies, which will be discussed next.)
>>>
>>>
>>> 2. Application-level Throttling Tiers/Policies
>>>
>>> These policies can be added for Applications.
>>>
>>>    A. When we export an Application with an Application-level Throttling
>>>    Policy using the API Controller, the throttling policy name will be
>>>    included in the <application-name>.json file inside the exported
>>>    directory.
>>>    B. When the user imports an Application which was exported as
>>>    mentioned above, the Application-level Throttling Tier/Policy will be
>>>    assigned to the Application as expected, if the policy currently
>>>    exists in the API Manager instance.
>>>    C. But currently, even if the Application-level Throttling Tier/Policy
>>>    is not in the API Manager instance, the application will still be
>>>    imported to that instance, which is wrong. Further, if a user logs in
>>>    to the Devportal and checks the imported application, it will display
>>>    an error as well. (The API Manager log will state that the particular
>>>    Application-level Throttling Tier/Policy is nowhere to be found.)
>>>
>>> The behaviour of point C above should be changed and handled as stated in
>>> point D under Advanced Throttling Policies.
>>>
>>>
>>> 3. Subscription-level Throttling Tiers/Policies
>>>
>>> These policies can be added for APIs.
>>>
>>>    A. When we export an API with a Subscription-level Throttling
>>>    Tier/Policy using the API Controller, the throttling policy details
>>>    will be included in the api.yaml file inside the exported directory as
>>>    an array (refer to the example below).
>>>
>>> availableTiers:
>>>  - name: MyPolicy
>>>    displayName: MyPolicy
>>>    description: Testing
>>>    tierAttributes: {}
>>>    requestsPerMin: 1
>>>    requestCount: 1
>>>    unitTime: 1
>>>    timeUnit: min
>>>    tierPlan

Re: [Architecture] [APIM] Support to Import/Export Throttling Policies Using the API Controller

2020-11-02 Thread Uvindra Dias Jayasinha
On Mon, 2 Nov 2020 at 17:36, Nuwan Dias  wrote:

> Is the purpose of this feature to move throttling policies across
> environments or across product versions?
>
> Across environments

On Mon, Nov 2, 2020 at 5:02 PM Wasura Wattearachchi  wrote:
>
>> Hi all,
>>
>> This is regarding the feature mentioned in [1] which requests to support
>> importing and exporting throttling policies using the API Controller.
>> Before discussing this new feature, let’s look at the current/existing
>> behaviour of importing/exporting throttling policies and identify the
>> limitations.
>>
>> Limitations in the Current Behaviour
>>
>> We have three (3) types of throttling policies.
>>
>> 1. Advanced Throttling Policies
>>
>> These can be added for an API or for a particular resource of an API.
>>
>>    A. When we export an API using the API Controller, if the Advanced
>>    Throttling Policy was added to the whole API, then the throttling
>>    policy name will be included in the api.yaml file.
>>    B. When we export an API using the API Controller, if the Advanced
>>    Throttling Policy was added to a particular resource, then the
>>    throttling policy name will be included in the swagger.yaml file under
>>    the particular resource name.
>>    C. When the user imports an API which was exported as mentioned above,
>>    the Advanced Throttling Policies will be assigned to the API or to the
>>    resource as expected, if the policy currently exists in the API Manager
>>    instance.
>>    D. But if the user imports an API with an Advanced Throttling Policy
>>    which is not currently available in the API Manager, the import will
>>    fail.
>>
>> (This behaviour is ideal for handling all the types of throttling
>> policies, which will be discussed next.)
>>
>>
>> 2. Application-level Throttling Tiers/Policies
>>
>> These policies can be added for Applications.
>>
>>    A. When we export an Application with an Application-level Throttling
>>    Policy using the API Controller, the throttling policy name will be
>>    included in the <application-name>.json file inside the exported
>>    directory.
>>    B. When the user imports an Application which was exported as mentioned
>>    above, the Application-level Throttling Tier/Policy will be assigned to
>>    the Application as expected, if the policy currently exists in the API
>>    Manager instance.
>>    C. But currently, even if the Application-level Throttling Tier/Policy
>>    is not in the API Manager instance, the application will still be
>>    imported to that instance, which is wrong. Further, if a user logs in
>>    to the Devportal and checks the imported application, it will display
>>    an error as well. (The API Manager log will state that the particular
>>    Application-level Throttling Tier/Policy is nowhere to be found.)
>>
>> The behaviour of point C above should be changed and handled as stated in
>> point D under Advanced Throttling Policies.
>>
>>
>> 3. Subscription-level Throttling Tiers/Policies
>>
>> These policies can be added for APIs.
>>
>>    A. When we export an API with a Subscription-level Throttling
>>    Tier/Policy using the API Controller, the throttling policy details
>>    will be included in the api.yaml file inside the exported directory as
>>    an array (refer to the example below).
>>
>>
>> availableTiers:
>>  - name: MyPolicy
>>    displayName: MyPolicy
>>    description: Testing
>>    tierAttributes: {}
>>    requestsPerMin: 1
>>    requestCount: 1
>>    unitTime: 1
>>    timeUnit: min
>>    tierPlan: FREE
>>    stopOnQuotaReached: true
>>  - name: Silver
>>    displayName: Silver
>>    description: Allows 2000 requests per minute
>>    requestsPerMin: 2000
>>    requestCount: 2000
>>    unitTime: 1
>>    timeUnit: min
>>    tierPlan: FREE
>>    stopOnQuotaReached: true
>>
>> This array should contain only the names of the Subscription-level
>> Throttling Tiers/Policies. (In other words, this should be similar to
>> what happens for Advanced Throttling Policies and Application-level
>> Throttling Tiers/Policies, where in those scenarios only the names are
>> exported.)
>>
>> This should be fixed.
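>>
>> For illustration, a sketch of the names-only form this could take (the
>> exact serialization is still to be decided):
>>
>> availableTiers:
>>  - MyPolicy
>>  - Silver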
>>
>>    B. When the user imports an API which was exported as mentioned above,
>>    the Subscription-level Throttling Tiers/Policies will be assigned to
>>    the API as expected, if the policy exists in the API Manager instance.
>>    This behaviour is expected.
>>    C. But if the user imports an API with a Subscription-level Throttling
>>    Tier/Policy which is not currently available in the API Manager, the
>>    import will fail. This behaviour is expected as well.
>>
>>
>> New Requirements and Features to Eliminate the Limitations
>>
>> As discussed above, the user should be able to import an API with a
>> particular throttling policy only if 

Re: [Architecture] Automatically updating API Products when underlying APIs are updated

2020-09-16 Thread Uvindra Dias Jayasinha
Hi Chamila

On Wed, 16 Sep 2020 at 09:15, Chamila Adhikarinayake 
wrote:

> Hi Uvindra,
> I feel like we should introduce a versioning capability to API Products
> and deploy a new version of the product if the underlying API is changed.
> If we automatically change the API Product's resources, existing users
> might get affected. I think this is what we do when we need to modify an
> existing API in a production environment: we create a new API version and
> roll out the new changes. So, it is not a good practice to change APIs
> once they are published (introduce a new version instead of changing the
> API). In that case, the existing API Product should not be changed as
> well :)
>

Having a separate UI tab that allows Publishers to view API Products that
are eligible to be updated, and letting them decide when the API Product
will be republished, should address this concern. The changes to the API
Product will not be pushed blindly. Sometimes users want to update
published artifacts deliberately. As you mentioned, versioning for API
Products is an important aspect that we want to address in the future as
well.

>
>
> On Tue, Sep 15, 2020 at 9:44 PM Saranki Magenthirarajah 
> wrote:
>
>> Thank you for the details Uvindra.
>>
>> Regards,
>>
>> On Tue, Sep 15, 2020 at 12:40 PM Uvindra Dias Jayasinha 
>> wrote:
>>
>>> Thanks for your feedback Saranki.
>>>
>>> On Mon, 14 Sep 2020 at 18:53, Saranki Magenthirarajah 
>>> wrote:
>>>
>>>> Hi Uvindra,
>>>>
>>>> +1 for this idea on auto updating the API products with underlying API
>>>> changes.
>>>>
>>>> This requirement has been raised by several of our customers in the
>>>> past as they find it difficult to update the relevant API products via the
>>>> API definition every time the underlying API gets updated.
>>>> Some additional improvements I would propose in terms of API Product in
>>>> APIM-3.X.X are as follows.
>>>>
>>>>    1. The edit option in the API Definition for an API Product gets
>>>>    displayed only after refreshing the page, which hurts the user
>>>>    experience.
>>>>    2. When the role assigned to a scope is deleted or renamed, it throws
>>>>    an error in both the underlying API(s) and the API Product. Unless we
>>>>    remove it via the API/API Product definition or create the same role
>>>>    again, the issue persists. If we can address this as well in the
>>>>    product, it will add more value to this feature.
>>>>
>>> The ability to handle changes that take place to roles that are assigned
>>> to scopes is a limitation that we currently have, unfortunately. Fixing
>>> this will require migration of existing data, so it is a bit outside the
>>> scope of what we are trying to address here.
>>>
>>>>
>>>>    1. In the suggested API Publisher UI, will there be options to
>>>>    update all the dependent API Products at once, as well as
>>>>    individually? Some customers might prefer the auto-update to happen
>>>>    without their confirmation each and every time (a one-time
>>>>    confirmation, maybe via a configuration in deployment.toml applied
>>>>    globally), as there could be a large number of Products, which would
>>>>    be time-consuming to select individually for updates.
>>>>
>>> Yes, this is a good point; we can have a select-all option to allow all
>>> products to be updated in one go.
>>>
>>> Regards,
>>>>
>>>> On Mon, Sep 14, 2020 at 5:42 PM Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>
>>>>> As part of an effort in improving the API Products feature we are
>>>>> considering automatically updating an API Product when any of its
>>>>> underlying APIs are updated.
>>>>>
>>>>> Currently for example, if there is a change in an API Resources scope,
>>>>> those changes will not be reflected in an API Product that uses those
>>>>> resources unless the API Product is explicitly saved. Due to the fact that
>>>>> API Products are created and managed explicitly by API Publishers and APIs
>>>>> are created solely by API Developers the following scenario needs to be
>>>>> taken into consideration when propagating updates.
>>>>>

Re: [Architecture] Automatically updating API Products when underlying APIs are updated

2020-09-15 Thread Uvindra Dias Jayasinha
Thanks for your feedback Saranki.

On Mon, 14 Sep 2020 at 18:53, Saranki Magenthirarajah 
wrote:

> Hi Uvindra,
>
> +1 for this idea on auto updating the API products with underlying API
> changes.
>
> This requirement has been raised by several of our customers in the past
> as they find it difficult to update the relevant API products via the API
> definition every time the underlying API gets updated.
> Some additional improvements I would propose in terms of API Product in
> APIM-3.X.X are as follows.
>
>    1. The edit option in the API Definition for an API Product gets
>    displayed only after refreshing the page, which hurts the user
>    experience.
>    2. When the role assigned to a scope is deleted or renamed, it throws an
>    error in both the underlying API(s) and the API Product. Unless we
>    remove it via the API/API Product definition or create the same role
>    again, the issue persists. If we can address this as well in the
>    product, it will add more value to this feature.
>
The ability to handle changes that take place to roles that are assigned
to scopes is a limitation that we currently have, unfortunately. Fixing this
will require migration of existing data, so it is a bit outside the scope of
what we are trying to address here.

>
>    1. In the suggested API Publisher UI, will there be options to update
>    all the dependent API Products at once, as well as individually? Some
>    customers might prefer the auto-update to happen without their
>    confirmation each and every time (a one-time confirmation, maybe via a
>    configuration in deployment.toml applied globally), as there could be a
>    large number of Products, which would be time-consuming to select
>    individually for updates.
>
Yes, this is a good point; we can have a select-all option to allow all
products to be updated in one go.
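For the global deployment.toml switch suggested above, here is a purely
hypothetical sketch of what such a configuration could look like (these
keys do not exist today; they are only meant to make the idea concrete):

# deployment.toml (hypothetical keys)
[apim.api_product]
auto_update_on_api_change = true

The real key names, if we go this way, would need a separate design
discussion.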

Regards,
>
> On Mon, Sep 14, 2020 at 5:42 PM Uvindra Dias Jayasinha 
> wrote:
>
>> As part of an effort in improving the API Products feature we are
>> considering automatically updating an API Product when any of its
>> underlying APIs are updated.
>>
>> Currently, for example, if there is a change in an API Resource's scope,
>> those changes will not be reflected in an API Product that uses those
>> resources unless the API Product is explicitly saved. Because API Products
>> are created and managed explicitly by API Publishers, and APIs are created
>> solely by API Developers, the following scenario needs to be taken into
>> consideration when propagating updates.
>>
>> By default an API Product will be in the published state. If an API
>> Developer changes the scope of an API resource that is used by the API
>> Product, leading to the API Product being updated, then:
>>
>> *1. Who has updated the API Product?* - Technically the API Developer
>> has updated the API Product since the change to the underlying API Resource
>> was initiated by them. But this seems to break the permission model because
>> it allows an API Developer to modify an API Product indirectly.
>>
>> *2. To what gateway environment should the API Product be republished?* -
>> Determining the gateway environment the API Product is published to is the
>> responsibility of the API Publisher. Currently the ability to choose this
>> is hidden, though it will be available when Life Cycles are supported for
>> API Products. Any updates that occur to an API Product due to an API
>> resource change will not apply until the API Product is republished. But
>> without an API Publisher's intervention we are unable to determine which
>> gateway environments to publish to. Publishing to all available
>> environments by default might go against what the API Publisher expects.
>>
>> *Possible solution*
>> Managing API Product updates due to changes in an API resource is a valid
>> requirement. But in order to solve it without crossing the boundaries
>> established for API Developers and API Publishers a new window in the API
>> Publisher UI could be introduced to show API Products that have dependent
>> API updates. This would allow API Publishers to identify API Products that
>> need to be republished and enable them to initiate the process.
>>
>> Please give your feedback on the above.
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>
>
> --
> Saranki Magenthirarajah | Software Engineer | WSO2 Inc.
> (m) +94

[Architecture] Automatically updating API Products when underlying APIs are updated

2020-09-14 Thread Uvindra Dias Jayasinha
As part of an effort in improving the API Products feature we are
considering automatically updating an API Product when any of its
underlying APIs are updated.

Currently, for example, if there is a change in an API Resource's scope,
those changes will not be reflected in an API Product that uses those
resources unless the API Product is explicitly saved. Because API Products
are created and managed explicitly by API Publishers, and APIs are created
solely by API Developers, the following scenario needs to be taken into
consideration when propagating updates.

By default an API Product will be in the published state. If an API
Developer changes the scope of an API resource that is used by the API
Product, leading to the API Product being updated, then:

*1. Who has updated the API Product?* - Technically the API Developer has
updated the API Product since the change to the underlying API Resource was
initiated by them. But this seems to break the permission model because it
allows an API Developer to modify an API Product indirectly.

*2. To what gateway environment should the API Product be republished?* -
Determining the gateway environment the API Product is published to is the
responsibility of the API Publisher. Currently the ability to choose this is
hidden, though it will be available when Life Cycles are supported for API
Products. Any updates that occur to an API Product due to an API resource
change will not apply until the API Product is republished. But without an
API Publisher's intervention we are unable to determine which gateway
environments to publish to. Publishing to all available environments by
default might go against what the API Publisher expects.

*Possible solution*
Managing API Product updates due to changes in an API resource is a valid
requirement. But in order to solve it without crossing the boundaries
established for API Developers and API Publishers a new window in the API
Publisher UI could be introduced to show API Products that have dependent
API updates. This would allow API Publishers to identify API Products that
need to be republished and enable them to initiate the process.

Please give your feedback on the above.


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] [Vote] Release of WSO2 API Manager Tooling v3.2.0 RC2

2020-08-26 Thread Uvindra Dias Jayasinha
Tested the following,

   - Exporting/importing APIs, API Products, Apps.
   - Environment specific parameters
   - Dynamic data scenarios

*[+] Stable - Go ahead and release*

On Tue, 25 Aug 2020 at 15:45, Naduni Pamudika  wrote:

> Hi All,
>
> Tested the following with API Manager Tooling 3.2.0 RC2 with API Manager
> 3.2.0 RC6 in the Linux environment.
>
>- Export/Import APIs, Applications and API Products from/to different
>APIM environments with latest APICTL.
>- Generate keys with APICTL.
>- Delete APIs, Applications (with keys, subscriptions) and API
>Products in APIM via APICTL.
>- Change the lifecycle status of an API deployed in APIM via APICTL.
>
> No blockers found.
> +1 to proceed with the release.
>
> On Tue, Aug 25, 2020 at 10:16 AM Lakshitha Gunasekara 
> wrote:
>
>> Hi All,
>>
>> Tested the following in APICTL 3.2.0-rc2 with WSO2 APIM 3.2.0-rc6.
>>
>>
>>1. Setting up environments (add/remove/listing)
>>2. API import with vcs deploy.
>>3. API/APP listing with limit parameters and searching with the query
>>parameters.
>>    4. Formatting the listing outputs to different formats.
>>5. Working with CA-signed certificates directly imported from the OS.
>>
>> No blockers found. +1 to proceed with the release.
>>
>> Thanks,
>> Lakshitha
>>
>>
>> On Tue, Aug 25, 2020 at 9:49 AM Shehani Rathnayake 
>> wrote:
>>
>>> Hi All,
>>>
>>> Tested the following in APICTL 3.2.0 - RC2, APIM 3.2.0 RC5.
>>>
>>> 1. Setting up/ listing and removing environments.
>>> 2. Importing and Exporting APIs
>>> 3. Install/ Uninstall api-operator
>>> 4. Kubernetes mode.
>>> 5. Add/Update/Delete APIs in Kubernetes cluster.
>>>
>>> No blockers found.
>>>
>>> *[+] Stable - Go ahead and release.*
>>>
>>> Thanks.
>>>
>>> On Tue, Aug 25, 2020 at 9:39 AM Hiranya Abeyrathne 
>>> wrote:
>>>
 Hi all,

 Tested the following in APICTL 3.2.0 - RC2.


 1. Setting up environments (add/remove/listing)
 2. API/APP import/export and updating
 3. API listing and searching
 4. API Product support basic functionalities
 5. Configuring environment specific params
 6. Lifecycle update of deployed API

 No blockers found.

 [+ Stable] +1 to proceed with the release.

 Thanks!
 Hiranya Abeyrathne
 Software Engineer,

 *WSO2, Inc. *

 lean. enterprise. middleware
 Mob: +94 70210 8657
 LinkedIn: https://www.linkedin.com/in/hiranya-kavishani/

 


 On Sat, Aug 22, 2020 at 12:55 PM Praminda Jayawardana <
 prami...@wso2.com> wrote:

> Tested pushing Microgateway APIs to API Manager using apictl.
> Tested compatibility of x-wso2 extensions.
>
> Didn't find any blockers.
>
> *[+] Stable - Go ahead and release.*
>
> On Mon, Aug 17, 2020 at 7:53 PM Wasura Wattearachchi 
> wrote:
>
>> Hi all,
>>
>> I tested the following scenarios with APIM 3.2.0 RC5.
>>
>>- Support for API Products from API Controller. (Tested with the
>>following users)
>>- Super tenant and tenant users with *Internal/devops* role
>>   (which was newly introduced in APIM 3.2.0)
>>   - Super tenant and tenant users with *admin* role
>>   - Super tenant and tenant users who do not have full privileges
>>   - Test scenarios basically include the following.
>>  - Import an API Product
>>  - Export an API Product
>>  - Generate Keys for an API Product
>>  - Delete an API Product
>>  - List API Products
>>   - Support for Different Endpoint Types from API Controller.
>>   - HTTP/REST (including load balancing and failover policies)
>>   - HTTP/SOAP (including load balancing and failover policies)
>>   - Dynamic
>>   - AWS Lambda
>>- GIT Integration with API Controller. (Tested the primary use
>>cases associated with the following commands.)
>>- apictl vcs init command
>>   - apictl vcs status command
>>   - apictl vcs deploy command (with APIs, API Products and
>>   Applications)
>>   - apictl set --vcs-deletion-enabled=true command and related
>>   use cases
>>   - apictl set --vcs-config-path /home/foo/vcs-config.yaml
>>   command and related use cases
>>   - Configuring Environment Specific Parameters
>>- List Environments, APIs and Applications
>>- Generate Keys for an API
>>- Initialize an API (including the dev first approach)
>>
>> Testing environment - Ubuntu 20.04 LTS, JDK 1.8.0_251
>>
>> No blockers found.
>>
>> *[+] Stable - Go ahead and release.*
>>
>> Thanks,
>> Wasura
>>
>> On Mon, Aug 17, 2020 at 10:56 AM Chamindu Udakara 
>> wrote:
>>
>>> Hi all,
>>>
>>> I have tested the following in APICTL 3.2.0 - RC2 with 

Re: [Architecture] Harvesting API's from AWS API Gateway to the WSO2 developers portal

2020-08-07 Thread Uvindra Dias Jayasinha
Hi Akshitha,

Based on what you have said so far, it seems this can easily be achieved
by chaining these 5 commands together in a script. Could you give some
details about the kind of automation you are trying to introduce with your
project?
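
For example, a rough sketch of such a script (the API ID, stage name,
project name and environment are placeholders to substitute; error handling
omitted):

#!/bin/bash
# 1. Discover the REST APIs (and their IDs) in the AWS account
aws apigateway get-rest-apis

# 2. List the stages of the chosen API
aws apigateway get-stages --rest-api-id <rest-api-id>

# 3. Export the swagger definition of one stage
aws apigateway get-export --rest-api-id <rest-api-id> \
  --stage-name <stage-name> --export-type swagger swagger.json

# 4. Initialize an apictl project from the swagger definition
./apictl init <project-name> --oas swagger.json

# 5. Import the project into the target WSO2 environment
./apictl import-api -f <project-name> -e <environment> -k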

On Thu, 6 Aug 2020 at 13:40, Akshitha Dalpethado (Intern) 
wrote:

> Basically, what we are trying to achieve through this project is to give
> our WSO2 APIM users the ability to extract APIs created on the AWS API
> Gateway from a specific account into the WSO2 Developer Portal, by
> extracting the swagger definition of an API created in AWS using the AWS
> Command Line Interface (AWS CLI).
>
> First we will request the APIs a user has created in the AWS APIG, which
> will return the APIs created in the AWS APIG (from a specific account) and
> their IDs. We will also request the API stage names, which are likewise
> required to extract the swagger definition of an API from the AWS APIG
> using the AWS CLI.
>
>
> *Command 01: aws apigateway get-rest-apis*
> *Command 02: aws apigateway get-stages --rest-api-id <rest-api-id>*
> *Command 03: aws apigateway get-export --rest-api-id <rest-api-id>
> --stage-name <stage-name> --export-type swagger /path/file_name.json*
>
> Then we initialize an API project using WSO2 API CTL using the swagger
> definition we extracted from the AWS APIG.
> *Command: ./apictl init <project-name> --oas path/file_name.json*
>
> Then we can import the initialized API project to the WSO2 Developers
> portal as an API by using the following API CTL command.
> *Command: ./apictl import-api -f <project-name> -e <environment> -k*
>
> This is how it can be done manually, but our goal is to automate this
> process and allow users to extract APIs they have created in the AWS APIG
> into the WSO2 Developer Portal.
>
>
>
>
>
> --
> *Akshitha Dalpethado* | Intern | WSO2 Inc.
>
> (m) :0770284542 | Email : akshi...@wso2.com
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] Improving the functionality to delete applications using the API Controller by considering the -o (--owner) flag

2020-07-06 Thread Uvindra Dias Jayasinha
I don't think this is the purpose of the owner flag. Even APIM does not
support users in the same tenant deleting Apps created by another user. I
think the reason for the owner flag is that different users can create Apps
having the same name.

For example, all users have a DefaultApplication created for them when they
log in to the store. So you can have UserA and UserB both having their own
DefaultApplication.

So if you try to delete the DefaultApplication from apictl, you need to
specify the owner correctly so that APIM knows which instance of the
DefaultApplication it needs to delete.
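
For example (using the flags listed in the quoted mail below), these two
commands would target two different applications even though the names are
identical:

apictl delete app -n DefaultApplication -o UserA -e dev
apictl delete app -n DefaultApplication -o UserB -e dev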

On Mon, 6 Jul 2020 at 16:40, Wasura Wattearachchi  wrote:

> Hi all,
>
> Currently, the API Controller provides “apictl delete app” command which
> consists of the below flags [1].
>
> Flags:
>
>   -e, --environment string   Environment from which the Application should
> be deleted
>
>   -h, --help   help for app
>
>   -n, --name stringName of the Application to be deleted
>
>   -o, --owner string   Owner of the Application to be deleted
>
> In this mail, we will focus on the functionality of the -o (--owner)
> flag. The expected functionality of this flag is to allow a user (assume
> User A) to delete an application created by another user (assume User B)
> who is in the same tenant. But the current REST APIs do not provide
> adequate support for this functionality [2].
>
> Deleting an application consists of two (2) main steps, and for those two
> (2) steps, two (2) REST API resources are currently being used, which have
> some drawbacks when it comes to fulfilling the functionality expected from
> the -o (--owner) flag.
>
>
> Step 1: Retrieve the applicationId based on the application name
> (-n/--name flag) and the owner’s name (-o/--owner flag).
>
> REST API: Store v1 GET /applications
>
> Drawback:
>
> This resource only provides the facility to retrieve an application by
> querying on the application name. Support for querying by the owner’s name
> is not provided here. We need the ability to query by both the application
> name and the owner’s name, but searching by anyone else’s name is not
> suitable in the Store REST API. This proves that we need another REST API
> resource with the expected functionality, which can be defined in Admin v1.
>
> Solution 1
>
> There is an existing resource in Admin v1, GET /applications, which has
> the ability to “retrieve a list of all applications of a certain
> subscriber (but not the owner)”. The name of the subscriber can be passed
> to this as a parameter specified by “user=”. We can enhance this further
> by providing the ability to pass the owner’s name as “owner=”, a new
> optional parameter. WDYT?
>
> Solution 2
>
> Define a new REST API resource in Admin v1 without changing any existing
> resources as mentioned in Solution 1. WDYT?
>
> Step 2: Delete the application specified by the applicationId.
>
> REST API: Store v1 DELETE /applications/{applicationid}
>
> Drawback:
>
> This resource does not allow us to delete applications that belong to
> other users. It provides an output as
>
> {"code":403,"message":"Forbidden","description":"You don't have permission
> to access the application with Id ","moreInfo":"","error":[]}
>
> when we try to delete anyone else’s application.
>
> Solution
>
> Define a new REST API resource that allows deleting applications belonging
> to other users who are in the same tenant. WDYT?
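>
> For illustration, the two steps would map onto calls like these (a sketch
> only; the owner parameter in step 1 is the proposed addition and does not
> exist yet, and the exact paths/parameters may vary by APIM version):
>
> # Step 1: resolve the applicationId (proposed owner parameter)
> curl -k -H "Authorization: Bearer $TOKEN" \
>   "https://localhost:9443/api/am/admin/v1/applications?owner=UserB"
>
> # Step 2: delete the application by its id (existing Store v1 resource)
> curl -k -X DELETE -H "Authorization: Bearer $TOKEN" \
>   "https://localhost:9443/api/am/store/v1/applications/<applicationId>"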
>
>
>
> It would be much appreciated if you could share your thoughts when
> deciding on the solutions to the above two (2) steps. Please feel free to
> suggest any new/additional solutions if you have any.
>
> [1]
> https://apim.docs.wso2.com/en/next/learn/api-controller/getting-started-with-wso2-api-controller/#delete-an-apiapi-productapplication-in-an-environment
>
> [2] https://github.com/wso2/product-apim-tooling/issues/335
>
> Thank you!
> --
> *Wasura Wattearachchi* | Software Engineer | WSO2 Inc.
> (m) +94775396038 | (e) was...@wso2.com | (b) Medium
> 
>
>
>

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] Support for API Products from API Controller

2020-05-19 Thread Uvindra Dias Jayasinha
Sorry Wasura, what you are talking about is not clear to me; the context is
a bit mixed up.

On Tue, 19 May 2020 at 22:19, Wasura Wattearachchi  wrote:

> Hi,
>
> Yes, we can use *--import-apis*. This is quite similar to the *--with-apis*
> flag that we thought of using at the beginning.
>
> But I have another doubt to clarify. If we are using the above flag to
> determine whether a particular user is allowed to import (by creating)
> dependent APIs, then we can again go with the old two (2) resources (one
> for import and the other for export), right?
>
> Also, we can have a scope named apim:api_product_import_export and assign
> it to both of the resources. When a user who has the above scope tries to
> import an API Product, if the --import-apis flag is true then that user
> will be allowed to import dependent APIs along with the API Product.
>
> I was wondering whether this satisfies our concern about the restriction
> we thought to enforce using scopes, because any user who has the
> apim:api_product_import_export scope can still import an API Product
> either with or without the dependent APIs, using the --import-apis flag.
> Does this accomplish our goal? WDYT?
>
> Thanks,
>
> Wasura
>
> On Tue, May 19, 2020 at 7:49 PM Uvindra Dias Jayasinha 
> wrote:
>
>>
>>
>> On Tue, 19 May 2020 at 19:04, Wasura Wattearachchi 
>> wrote:
>>
>>> Hi,
>>>
>>> On Tue, May 19, 2020 at 12:56 PM Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>>
>>>>
>>>> On Tue, 19 May 2020 at 11:31, Malintha Amarasinghe 
>>>> wrote:
>>>>
>>>>> Implementation wise, would it be a hit if we go with option 2?
>>>>> 1. The resources seem a bit duplicated.
>>>>> 2. It is difficult to find good names for the two resources as they
>>>>> are mostly the same.
>>>>>
>>>>
>>>> We could go with POST import/api-product and *POST
>>>> import/api-product-with-apis*, which would make them very clear and
>>>> different.
>>>>
>>>>
>>>>> 3. Need additional code at the client-side, to switch between APIs
>>>>> based on the scopes of the token.
>>>>>
>>>>
>>>> Had an offline chat with Malintha about this as well. I believe we will
>>>> need to decide which resource will be called based on the flag that is
>>>> provided in the call(--update-apis or --update-api-products). So the apictl
>>>> will not need to do any scope validation. Scope validation will occur at
>>>> API level as usual.
>>>>
>>>
>>> Here, when we are importing an API Product, shouldn't we decide whether
>>> the user is allowed to *create* APIs or not, instead of just update?
>>> For example,
>>>
>>>- If a user has the permission to create an API, then when importing
>>>the API Product, he/she should be allowed to import the dependent APIs as
>>>well.
>>>- If a user does not have permission to create APIs, he/she only can
>>>import the API Product.
>>>
>>> IMO, I do not think we can handle this with the --update-apis or
>>> --update-api-product flags, because even without doing any update to the
>>> API Product or the API, the question arises of whether we should allow
>>> the user to *create APIs*.
>>>
>>> Any thoughts on this?
>>>
>>
>> I get your point. If we consider the --update-apis and
>> --update-api-product flags as purely used for determining whether the
>> respective existing artifacts are going to be updated, then we need a 3rd
>> flag for the scenario you are talking about. How about *--import-apis*?
>> If this flag is present we will import both the dependent APIs and the
>> API Product. If not specified we will only import the API Product. So the
>> flag evaluation logic could be something like,
>>
>> if --import-apis == true {
>>    // Import the API Product and its dependent APIs
>> } else if --update-apis == true {
>>    // Update the dependent APIs *AND* the respective API Product
>> } else if --update-api-products == true {
>>    // Only update the respective API Product
>> } else {
>>    // Only import the API Product
>> }
>>
>> So the default scenario with no flags only imports the API Product. WDYT?
>>
>>
>>
>>
>>> Thanks,
>>> Wasura
>>>
>>>
>>>
>>>

Re: [Architecture] [APIM] Support for API Products from API Controller

2020-05-19 Thread Uvindra Dias Jayasinha
On Tue, 19 May 2020 at 19:04, Wasura Wattearachchi  wrote:

> Hi,
>
> On Tue, May 19, 2020 at 12:56 PM Uvindra Dias Jayasinha 
> wrote:
>
>>
>>
>> On Tue, 19 May 2020 at 11:31, Malintha Amarasinghe 
>> wrote:
>>
>>> Implementation wise, would it be a hit if we go with option 2?
>>> 1. The resources seem a bit duplicated.
>>> 2. It is difficult to find good names for the two resources as they are
>>> mostly the same.
>>>
>>
>> We could go with POST import/api-product and *POST
>> import/api-product-with-apis*, which would make them very clear and
>> different.
>>
>>
>>> 3. Need additional code at the client-side, to switch between APIs based
>>> on the scopes of the token.
>>>
>>
>> Had an offline chat with Malintha about this as well. I believe we will
>> need to decide which resource will be called based on the flag that is
>> provided in the call(--update-apis or --update-api-products). So the apictl
>> will not need to do any scope validation. Scope validation will occur at
>> API level as usual.
>>
>
> Here, when we are importing an API Product, shouldn't we decide whether
> the user is allowed to *create* APIs or not, instead of just update?
> For example,
>
>- If a user has the permission to create an API, then when importing
>the API Product, he/she should be allowed to import the dependent APIs as
>well.
>- If a user does not have permission to create APIs, he/she only can
>import the API Product.
>
> IMO, I do not think we can handle this with the --update-apis or
> --update-api-product flags, because even without doing any update to the
> API Product or the API, the question arises of whether we should allow the
> user to *create APIs*.
>
> Any thoughts on this?
>

I get your point. If we consider the --update-apis and --update-api-product
flags as purely used for determining whether the respective existing
artifacts are going to be updated, then we need a 3rd flag for the scenario
you are talking about. How about *--import-apis*? If this flag is present we
will import both the dependent APIs and the API Product. If not specified we
will only import the API Product. So the flag evaluation logic could be
something like,

if --import-apis == true {
   // Import the API Product and its dependent APIs
} else if --update-apis == true {
   // Update the dependent APIs *AND* the respective API Product
} else if --update-api-products == true {
   // Only update the respective API Product
} else {
   // Only import the API Product
}

So the default scenario with no flags only imports the API Product. WDYT?
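
For example, with the proposed flags the invocations could look like this
(a sketch only; the command name follows the planned apictl
import-api-product command, and none of these flags are shipped yet):

apictl import-api-product -f MyProduct.zip -e production --import-apis
apictl import-api-product -f MyProduct.zip -e production --update-apis
apictl import-api-product -f MyProduct.zip -e production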




> Thanks,
> Wasura
>
>
>
>
>>
>>> If we use the other way, we'd only need a small check and could do what
>>> we need with less effort, right?
>>> The way we check scopes deviates a bit from the other resources, but it
>>> might be a small sacrifice to keep things simple.
>>> Just my two cents.
>>>
>>
>> Malintha already mentioned we are manually checking scopes at the code
>> level to validate the API update scenario, to ensure that publisher users
>> cannot update creator-only fields of an API (example: endpoint). So it
>> seems that this might not be a big deal then.
>>
>>>
>>> Thanks!
>>>
>>>
>> Anyway, it seems there is mostly a semantic difference between the 2
>> solutions. Can we get some feedback from others as well regarding this?
>>
>>
>>> On Tue, May 19, 2020 at 11:09 AM Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>>
>>>>
>>>> On Mon, 18 May 2020 at 20:05, Wasura Wattearachchi 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> +1 for the suggestion.
>>>>>
>>>>> Please consider the below two (2) solutions where the first one is what 
>>>>> @Malintha
>>>>> Amarasinghe  suggested.
>>>>>
>>>>> *Solution 1*
>>>>>
>>>>>    - Add a new scope (apim:api_product_import_export) to the above
>>>>>      resources.
>>>>>    - If the user has the apim:api_product_import_export scope, he/she
>>>>>      can import/export the API Product. But if the user wants to import
>>>>>      the dependent APIs as well, then he/she should have the
>>>>>      *apim:api_import_export* scope as well.
>>>>>
>>>>>
>>>>>
>>>>

Re: [Architecture] [APIM] Support for API Products from API Controller

2020-05-19 Thread Uvindra Dias Jayasinha
On Tue, 19 May 2020 at 11:31, Malintha Amarasinghe 
wrote:

> Implementation wise, would it be a hit if we go with option 2?
> 1. The resources seem a bit duplicated.
> 2. It is difficult to find good names for the two resources as they are
> mostly the same.
>

We could go with POST import/api-product and *POST
import/api-product-with-apis*, which would make them very clear and
different.


> 3. Need additional code at the client-side, to switch between APIs based
> on the scopes of the token.
>

Had an offline chat with Malintha about this as well. I believe we will
need to decide which resource will be called based on the flag that is
provided in the call(--update-apis or --update-api-products). So the apictl
will not need to do any scope validation. Scope validation will occur at
API level as usual.


> If we use the other way, we'd only need a small check and could do what we
> need with less effort, right?
> The way we check scopes deviates a bit from the other resources, but it
> might be a small sacrifice to keep things simple.
> Just my two cents.
>

Malintha already mentioned we are manually checking scopes at the code level
to validate the API update scenario, to ensure that publisher users cannot
update creator-only fields of an API (example: endpoint). So it seems that
this might not be a big deal then.

>
> Thanks!
>
>
Anyway, it seems there is mostly a semantic difference between the 2
solutions. Can we get some feedback from others as well regarding this?


> On Tue, May 19, 2020 at 11:09 AM Uvindra Dias Jayasinha 
> wrote:
>
>>
>>
>> On Mon, 18 May 2020 at 20:05, Wasura Wattearachchi 
>> wrote:
>>
>>> Hi,
>>>
>>> +1 for the suggestion.
>>>
>>> Please consider the below two (2) solutions where the first one is what 
>>> @Malintha
>>> Amarasinghe  suggested.
>>>
>>> *Solution 1*
>>>
>>>    - Add a new scope (apim:api_product_import_export) to the above
>>>      resources.
>>>    - If the user has the apim:api_product_import_export scope, he/she can
>>>      import/export the API Product. But if the user wants to import the
>>>      dependent APIs as well, then he/she should have the
>>>      *apim:api_import_export* scope as well.
>>>
>>>
>>>
>>>    Problems:
>>>    - When importing an API Product, everything happens within the single
>>>      REST resource call POST import/api-product. If we want to check
>>>      whether the user has the apim:api_import_export scope before
>>>      importing dependent APIs, it has to be done at the implementation
>>>      level, which has not been done before.
>>>    - Also, checking whether a particular user has a particular scope is a
>>>      task to be done at the REST level, not at the implementation level.
>>>      Conceptually, what we are doing is not suitable here.
>>>
>>>
>>> *Solution 2*
>>>
>>>    - Create three (3) REST API resources:
>>>
>>>      Two (2) REST API resources for the import API Product task:
>>>      1. POST import/api-product
>>>         - This will require the apim:api_product_import_export scope.
>>>           Users who have this scope can use this resource to
>>>           import/update an API Product.
>>>         - The dependent APIs will not be allowed to be imported/updated
>>>           here.
>>>      2. POST import/api-product-apis
>>>         - This will require the apim:api_product_import_export and
>>>           apim:api_import_export scopes (we can create a new scope that
>>>           incorporates both). Users who have these scopes can use this
>>>           resource to import/update an API Product and also the dependent
>>>           APIs.
>>>
>>> *Note:*
>>>    - The API Controller will check the scopes of the user who executes
>>>      the command and will decide which resource is to be invoked.
>>>    - Both these resources will call the same functions internally, but
>>>      at the st

Re: [Architecture] [APIM] Support for API Products from API Controller

2020-05-18 Thread Uvindra Dias Jayasinha
On Mon, 18 May 2020 at 20:05, Wasura Wattearachchi  wrote:

> Hi,
>
> +1 for the suggestion.
>
> Please consider the below two (2) solutions where the first one is what 
> @Malintha
> Amarasinghe  suggested.
>
> *Solution 1*
>
>    - Add a new scope (apim:api_product_import_export) to the above
>      resources.
>    - If the user has the apim:api_product_import_export scope, he/she can
>      import/export the API Product. But if the user wants to import the
>      dependent APIs as well, then he/she should have the
>      *apim:api_import_export* scope as well.
>
>
>
>    Problems:
>    - When importing an API Product, everything happens within the single
>      REST resource call POST import/api-product. If we want to check
>      whether the user has the apim:api_import_export scope before
>      importing dependent APIs, it has to be done at the implementation
>      level, which has not been done before.
>    - Also, checking whether a particular user has a particular scope is a
>      task to be done at the REST level, not at the implementation level.
>      Conceptually, what we are doing is not suitable here.
>
>
> *Solution 2*
>
>    - Create three (3) REST API resources:
>
>      Two (2) REST API resources for the import API Product task:
>      1. POST import/api-product
>         - This will require the apim:api_product_import_export scope.
>           Users who have this scope can use this resource to import/update
>           an API Product.
>         - The dependent APIs will not be allowed to be imported/updated
>           here.
>      2. POST import/api-product-apis
>         - This will require the apim:api_product_import_export and
>           apim:api_import_export scopes (we can create a new scope that
>           incorporates both). Users who have these scopes can use this
>           resource to import/update an API Product and also the dependent
>           APIs.
>
> *Note:*
>    - The API Controller will check the scopes of the user who executes the
>      command and will decide which resource is to be invoked.
>    - Both these resources will call the same functions internally, but at
>      the start a flag (named isImportAPIsAllowed) will be passed. It will
>      be false in (1) and true in (2). Based on the value of that flag, we
>      can decide whether to allow importing dependent APIs or not.
>
>
>
>    - One (1) REST API resource for the export API Product task. (The user
>      should only have the apim:api_product_import_export scope for this.)
>
>
>
>-
>    Problems:
>    - Two (2) resources should be added for almost the same task
>      (importing).
>    - Two (2) new scopes are needed.
>    - We should come up with meaningful names for the resources and scopes.
>
> WDYT? Your feedback is much appreciated.
>

+1 for solution 2.

Even though there is an additional resource being added, I still see this as
good because we are able to maintain good REST design principles and keep
the consistency of the access control model we have in place. This way we
don't need to implement custom role validation logic internally to see if
the caller can import APIs. The actual REST API resources will only be a
thin layer as explained; the implementation logic will be reused by both,
with the relevant scenario specified via the isImportAPIsAllowed flag.


> Thanks,
>
> Wasura
>
> On Mon, May 18, 2020 at 12:03 PM Malintha Amarasinghe 
> wrote:
>
>> We are also allowing API Developers to use apictl, right? API creators
>> are not allowed to create API Products. So it may not be a good idea to
>> have the same api_import_export scope for both Product and API import?
>>
>> Thanks!
>>
>> On Mon, May 18, 2020 at 11:48 AM Wasura Wattearachchi 
>> wrote:
>>
>>> Hi,
>>>
>>> We currently do not have any specific scopes for products. We have used
>>> *apim:api_publish*, *apim:api_view *kind of scopes in API Products as
>>> well.
>>>
>>> Thanks,
>>> Wasura
>>>
>>> On Fri, May 15, 2020 at 8:53 PM Bhathiya Jayasekara 
>>> wrote:
>>>
 What are the product-related scopes we have now?

 Thanks,
 Bhathiya

 On Fri, May 15, 2020 at 8:24 PM Wasura Wattearachchi 
 wrote:

> Hi all,
>
> During the code review that was conducted today (15th May 2020), a
> question arose related to the scope used at the REST API level. Currently,
> the below REST APIs have been implemented to import and export API
> Products with the scope apim:api_import_export.
>
>
> During the import process, each of the dependent APIs will be imported
> when the */import/api-product* REST API is called. Please consider the
> below scenario, which might be a problem here.
>
> Scenario: 

Re: [Architecture] [APIM] Support for API Products from API Controller

2020-04-30 Thread Uvindra Dias Jayasinha
Ideally it would be great if we could make the commands more consistent,
but we need to take care when changing existing ones because we can end up
breaking customers' existing CI/CD flows.

The commands are now a binding API. If we change them in the future we need
to first deprecate the current ones.
On Thu, 30 Apr 2020 at 18:10, Harsha Kumara  wrote:

>
>
> On Thu, Apr 30, 2020 at 10:02 PM Wasura Wattearachchi 
> wrote:
>
>> Hi all,
>>
>> There are two (2) types of command signatures in the API Controller:
>> apictl [verb] [noun] [flags] and apictl [command] [flags]. Below are the
>> commands belonging to those two (2) categories.
>> commands belonging to those two (2) categories.
>>
>> 1) apictl [verb] [noun] [flags]
>>
>> Existing commands:
>>    - apictl list apis [flags]
>>    - apictl list apps [flags]
>>    - apictl login <environment> [flags]
>>    - apictl logout <environment> [flags]
>>    - apictl install api-operator [flags]
>>    - apictl uninstall api-operator [flags]
>>    - apictl change registry [flags]
>>    - apictl version   <- noun only
>>    - apictl help      <- verb only
>>
>>
>> Newly added commands:
>>    - apictl list api-products [flags]
>>
>> 2) apictl [command] [flags]
>>
>> Existing commands:
>>
>>    - apictl add [flags]
>>    - apictl add-env [flags]
>>    - apictl remove-env [flags]
>>    - apictl export-api [flags]
>>    - apictl export-apis [flags]
>>    - apictl export-app [flags]
>>    - apictl import-api [flags]
>>    - apictl import-app [flags]
>>    - apictl init [flags]
>>    - apictl get-keys [flags]
>>    - apictl set [flags]
>>    - apictl update [flags]
>>
>>
>> Newly added commands:
>>
>>    - apictl delete-api [flags]
>>    - apictl change-api-status [flags]
>>    - apictl delete-api-product [flags]
>>
>>
>> Commands to be added:
>>
>>    - apictl import-api-product [flags]
>>    - apictl export-api-product [flags]
>>
>>
>> Eventually, it would be better if we could stick to one command structure
>> (which is apictl [verb] [noun] [flags]), as @Bhathiya Jayasekara and
>> @Harsha Kumara suggested. +1 for that, since the commands will be more
>> consistent.
>>
>> But is it okay to start with the commands for API Products only (IMO,
>> users will be confused if this is only done for API Products and not APIs
>> or Apps), or shall we convert all the commands at once in a future
>> release?
>>
> Yes, it would be good if those commands became more consistent. At the
> moment we should follow a proper strategy when adding new commands. We may
> need to decide the best way to move existing commands to a consistent
> format. Having consistency will make the commands less complicated.
>
>>
>> Your opinions will be highly appreciated.
>>
>> Thank you!
>>
>> On Thu, Apr 16, 2020 at 10:13 AM Wasura Wattearachchi 
>> wrote:
>>
>>> Hi all,
>>>
>>> This is regarding the issue
>>>  which
>>> requests to support listing and generating tokens for API Products
>>> through the API Controller (apictl). Currently, we do not support any
>>> functionality related to API Products from the API Controller side.
>>> Thus, we can introduce the following four (4) functionalities as a new
>>> feature (rather than a fix to the above-stated issue) in order to
>>> improve the API Controller, in addition to what has been requested in
>>> the above issue.
>>>
>>>
>>> 1. Import API Products
>>> 2. Export API Products
>>> 3. List API Products
>>> 4. Generate keys (tokens) for API Products
>>>
>>>
>>> Approaches
>>>
>>> Here, we can identify two (2) approaches for importing/exporting API
>>> Products.
>>>
>>> Approach 1 - Import/export without the dependent APIs
>>>
>>> 1. Import API Products
>>>
>>> Allow importing an API Product only if the dependent APIs have already
>>> been imported/created inside the API Manager. The main task here is to
>>> check whether the resources required to create the API Product are
>>> available with the APIs in the API Manager.
>>>
>>> 2. Export API Products
>>>
>>> Allow exporting/downloading an API Product without the related APIs
>>> inside the archive file.
>>>
>>> Approach 2 - Import/export with the dependent APIs
>>>
>>> 1. Import API Products
>>>
>>> Give the user the freedom to import an API Product along with the
>>> related APIs (archived together), but only if the dependent APIs have
>>> not already been imported/created inside the API Manager. If the user
>>> tries to import an already-imported API/APIs when importing the API
>>> Product, an error should be displayed.
>>>
>>> 2. Export API Products
>>>
>>> Allow exporting/downloading an API Product with the dependent APIs
>>> inside the archive file.
>>>
>>> Comparison of Approach 1 and 2
>>>

Re: [Architecture] [APIM] Support for API Products from API Controller

2020-04-17 Thread Uvindra Dias Jayasinha
On Fri, 17 Apr 2020 at 12:16, Sanjeewa Malalgoda  wrote:

> By the time we export an API we really do not know whether all underlying APIs
> are available in the upper environment. That will only be detected when the API
> is imported into the environment.
> Since API Product developers have access to the publisher and can see the
> underlying APIs, exporting a product selectively will not be a big issue.
> However, if users start to expose only API Products to the outside and let
> users consume them as sellable units, then things might change. In
> that case APIs will be just building-block units used to make sellable
> products, and in such cases CI/CD pipelines etc. do not need to care about the
> underlying APIs; API Products alone will move across environments when
> something needs to move. Even though we don't have
> users following that pattern yet, I think it will be adopted very soon.
>

This particular scenario Sanjeewa is talking about impacts subscribers
because they will not even be aware of the APIs, since only the API
Products are exposed (the APIs will be in the CREATED life cycle state and
therefore will not show up on the dev portal).

However, the way API Manager implements this concept means that you cannot
fully forget about the existence of the underlying APIs when you are an
admin/creator/publisher user. Because of this, CI/CD pipelines also need
to consider them in the same way. Since API Manager supports a hybrid of
exposing APIs as well as API Products, it can become fairly complicated
trying to support importing/exporting dependent APIs under the hood. We
could consider this in the future, but at the moment this is not a major
benefit compared to the changes that we would need to make to existing
stable code to support it.


> Thanks,
> sanjeewa.
>
>
> On Fri, Apr 17, 2020 at 11:22 AM Wasura Wattearachchi 
> wrote:
>
>> Hi,
>>
>> Thank you for your opinion @Uvindra Dias Jayasinha .
>>
>> When discussing the functionality to Generate keys (tokens) for API
>> Products, currently we have the “apictl gen-keys” command to generate
>> keys for *APIs*. It requires the --name (-n), --version (-v), --provider
>> (-r) and --environment (-e) flags to be mandatory. Basically, here, the
>> corresponding API will be searched by name, version (since one API can have
>> many versions) and provider, in the corresponding environment, an
>> application named default-apictl-app will be created (if it does not already
>> exist) and subscribed to the API, and a key is generated.
>>
>> The above current logic should change as follows in order to incorporate
>> the functionality to generate keys for API Products as well.
>>
>>
>>1. Make the flag --version (-v) optional since API Products do not have
>>versions.
>>2. When a user is trying to get a key for an API (since the version is
>>optional now), search by --name (-n) and --provider (-r) in the
>>corresponding --environment (-e):
>>   1. If more than one API has been retrieved while searching, it
>>   means that the API we searched for has more than one version. So, display an
>>   error message to the user stating “ versions of
>>   the same API are available. Please specify the API version and try again”.
>>   2. If only one API has been retrieved while searching, then subscribe
>>   the application to it and generate a key.
>>3. When the user is trying to get a key for an API Product, since API
>>Products do not have versions, we can directly search by --name
>>(-n) and --provider (-r) in the corresponding --environment (-e),
>>retrieve the API Product, then subscribe the application to it and generate
>>a key.
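To make the branching above concrete, here is a minimal sketch of that
name-based resolution. ApiInfo and the error wording are made-up stand-ins,
and apictl itself is written in Go, so this Java is purely illustrative of
the decision logic, not the actual implementation:

import java.util.List;

// Illustrative only: mirrors the proposed lookup by --name/--provider.
public class KeyGenLookup {

    // Hypothetical holder for a search hit in the target environment
    record ApiInfo(String name, String provider, String version) {}

    static ApiInfo resolve(List<ApiInfo> matches) {
        if (matches.isEmpty()) {
            throw new IllegalStateException(
                    "No API or API Product found for the given --name/--provider");
        }
        if (matches.size() > 1) {
            // Multiple versions of the same API: the user must pass --version
            throw new IllegalStateException(matches.size()
                    + " versions of the same API are available."
                    + " Please specify the API version and try again");
        }
        // Single match: subscribe default-apictl-app to it and generate a key
        return matches.get(0);
    }
}

The same resolution would apply whether the single match turns out to be an
API or an API Product.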
>>
>>
>> I appreciate any thoughts and feedback on this as well.
>>
>> Thank you!
>>
>> On Thu, Apr 16, 2020 at 11:20 AM Uvindra Dias Jayasinha 
>> wrote:
>>
>>> Great analysis Wasura.
>>>
>>> The unique thing to keep in mind regarding API Products is that they
>>> have two different personas when viewed from the perspective of the API
>>> Publisher and the API Subscriber. To the API Publisher, an API Product is a
>>> different entity from that of an API. The dependence of a given API Product
>>> upon one or more underlying APIs is evident. However from the perspective
>>> of the API Subscriber, an API Product is no different from an API. In fact
>>> the API Subscriber cannot tell the difference between the two.
>>>
>>> Keeping this in mind, see my comments inline.
>>>
>>

Re: [Architecture] [APIM] Support for API Products from API Controller

2020-04-17 Thread Uvindra Dias Jayasinha
On Fri, 17 Apr 2020 at 11:22, Wasura Wattearachchi  wrote:

> Hi,
>
> Thank you for your opinion @Uvindra Dias Jayasinha .
>
> When discussing the functionality to Generate keys (tokens) for API
> Products, currently we have the “apictl gen-keys” command to generate
> keys for *APIs*. It requires the --name (-n), --version (-v), --provider (-r)
> and --environment (-e) flags to be mandatory. Basically, here, the
> corresponding API will be searched by name, version (since one API can have
> many versions) and provider, in the corresponding environment, an
> application named default-apictl-app will be created (if it does not already
> exist) and subscribed to the API, and a key is generated.
>
> The above current logic should change as follows in order to incorporate
> the functionality to generate keys for API Products as well.
>
>
>1. Make the flag --version (-v) optional since API Products do not have
>versions.
>
>
Sorry, I should have mentioned this earlier. Though at the moment we are not
supporting versioning for API Products, we plan on implementing it in the
future. For now we are hard coding the version as 1.0.0 internally
for any given API Product, though this is not displayed in the UI. It might
be a waste to try to make the version optional at the ctl level, since we would
need to revert back to the original functionality again.

For the moment we could get around this by documenting that when doing
gen-keys for API Products the version should be specified as 1.0.0. This
will simplify things.


>
>1. When a user is trying to get a key for an API (since the version is
>optional now), search by --name (-n) and --provider (-r) in the
>corresponding --environment (-e):
>   1. If more than one API has been retrieved while searching, it means
>   that the API we searched for has more than one version. So, display an error
>   message to the user stating “ versions of the
>   same API are available. Please specify the API version and try again”.
>   2. If only one API has been retrieved while searching, then subscribe
>   the application to it and generate a key.
>2. When the user is trying to get a key for an API Product, since API
>Products do not have versions, we can directly search by --name
>(-n) and --provider (-r) in the corresponding --environment (-e),
>retrieve the API Product, then subscribe the application to it and generate
>a key.
>
>
> I appreciate any thoughts and feedback on this as well.
>
> Thank you!
>
> On Thu, Apr 16, 2020 at 11:20 AM Uvindra Dias Jayasinha 
> wrote:
>
>> Great analysis Wasura.
>>
>> The unique thing to keep in mind regarding API Products is that they have
>> two different personas when viewed from the perspective of the API
>> Publisher and the API Subscriber. To the API Publisher, an API Product is a
>> different entity from that of an API. The dependence of a given API Product
>> upon one or more underlying APIs is evident. However from the perspective
>> of the API Subscriber, an API Product is no different from an API. In fact
>> the API Subscriber cannot tell the difference between the two.
>>
>> Keeping this in mind, see my comments inline.
>>
>> On Thu, 16 Apr 2020 at 10:13, Wasura Wattearachchi 
>> wrote:
>>
>>> Hi all,
>>>
>>> This is regarding the issue
>>> <https://github.com/wso2/product-apim-tooling/issues/168> which
>>> requests to support listing and generating tokens for API Products through
>>> API Controller (apictl). Currently, we do not support any functionality
>>> related to API Products from the API Controller side. Thus, we can introduce
>>> the following four (4) functionalities as a new feature (rather than a fix
>>> to the above-stated issue) in order to improve API Controller, in addition
>>> to what has been requested in the above issue.
>>>
>>>
>>>1. Import API Products
>>>2. Export API Products
>>>3. List API Products
>>>4. Generate keys (tokens) for API Products
>>>
>>>
>>> Approaches
>>>
>>> Here, we can identify two (2) approaches for importing/exporting API
>>> Products.
>>>
>>> Approach 1 - Import/export without the dependent APIs
>>>
>>> 1. Import API Products
>>>
>>> Allow to import an API Product only if the dependent APIs have been
>>> already impo

Re: [Architecture] [APIM] Support for API Products from API Controller

2020-04-15 Thread Uvindra Dias Jayasinha
Great analysis Wasura.

The unique thing to keep in mind regarding API Products is that they have
two different personas when viewed from the perspective of the API
Publisher and the API Subscriber. To the API Publisher, an API Product is a
different entity from that of an API. The dependence of a given API Product
upon one or more underlying APIs is evident. However from the perspective
of the API Subscriber, an API Product is no different from an API. In fact
the API Subscriber cannot tell the difference between the two.

Keeping this in mind, see my comments inline.

On Thu, 16 Apr 2020 at 10:13, Wasura Wattearachchi  wrote:

> Hi all,
>
> This is regarding the issue
> <https://github.com/wso2/product-apim-tooling/issues/168> which requests
> to support listing and generating tokens for API Products through API
> Controller (apictl). Currently, we do not support any functionality related
> to API Products from the API Controller side. Thus, we can introduce the
> following four (4) functionalities as a new feature (rather than a fix to
> the above-stated issue) in order to improve API Controller, in addition to
> what has been requested in the above issue.
>
>
>1. Import API Products
>2. Export API Products
>3. List API Products
>4. Generate keys (tokens) for API Products
>
>
> Approaches
>
> Here, we can identify two (2) approaches for importing/exporting API
> Products.
>
> Approach 1 - Import/export without the dependent APIs
>
> 1. Import API Products
>
> Allow to import an API Product only if the dependent APIs have
> already been imported/created inside the API Manager. The main task here is to
> check whether the required resources to create the API Product are with the
> APIs in the API Manager.
>
> 2. Export API Products
>
> Allow to export/download an API Product without the related APIs inside
> the archive file.
>

+1 to Approach 1. Since it is clear that an API Product is dependent upon
an API(s), I think it's okay to mandate that the API(s) already have been
imported as a prerequisite. This simplifies the implementation of the
apictl and does not require changing the existing API import/export
functionality.

>
> Approach 2 - Import/export with the dependent APIs
>
> 1. Import API Products
>
> Give freedom to the user to import an API Product along with the related
> APIs (archived together) only if the dependent APIs have not already been
> imported/created inside the API Manager. If the user tries to import an
> already imported API/APIs when importing the API Product, an error should
> be displayed.
>
> 2. Export API Products
>
> Allow to export/download an API Product with the dependent APIs inside
> the archive file.
>
> Comparison of Approach 1 and 2
>
> Approach 1
>
> - Advantage
>
> Basically our rule would be "If you need to create a CI/CD process for an
> API Product, you should already have performed the CI/CD process for all the
> dependent APIs".
>
> So as you can see, after doing CI/CD for each independent API you will
> have all the required APIs imported/exported. Then you can proceed with
> CI/CD for API Products smoothly since the required APIs have already been
> imported/exported. (If APIs are shared between two or more API Products,
> when importing those API Products, those shared APIs will not be
> imported/updated repeatedly, which is efficient.)
>
> - Disadvantage
>
> You manually need to set up the CI/CD flows for API Products, as the tool
> does not take care of updating the dependent APIs.
>
> Approach 2
>
> - Advantage
>
> Easier to set up the CI/CD flows for API Products, as the tool takes care of
> updating the dependent APIs.
>
> - Disadvantage
>
> If APIs are shared between two or more API Products, when importing those
> API Products, those shared APIs will be imported/updated repeatedly, which
> is inefficient.
>
> Commands to be used
>
> Here, we can identify two (2) ways: using the existing commands or
> using a new set of commands.
>
> Using existing commands
>
> 1. Import API Products
>
> Use the same command that we currently use to import an API, “apictl
> import-api”.
>
> 2. Export API Products
>
> Use the same command that we currently use to export an API, “apictl
> export-api”.
>
> Currently, name (-n) and version (-v) are mandatory in the export-api command. But
> API Products do not have a version, so we cannot make the version
> parameter mandatory. If we do not mandate it, there is no issue with products, but
> we need a way to download APIs without the version param. Then we can
> follow an approach like this: if there is only one API with name:foo, we
> download it, but if there are multiple APIs with name:foo (with multiple
> versions) we give an error.
>
> 3. List API Products
>
> List API Products along with the APIs using the existing command “apictl
> list apis”. Further, we can improve this command by 

Re: [Architecture] [DEV] [VOTE] Release WSO2 API Manager Tooling v3.1.0 RC4

2020-04-03 Thread Uvindra Dias Jayasinha
Tested the following:

Various scenarios involving super tenant, tenant, secondary user store users

1. Generate Keys
2. Export Apps

No blockers found

[+] Stable - Go ahead and release

On Fri, 3 Apr 2020 at 14:16, Naduni Pamudika  wrote:

> Hi All,
>
> WSO2 Api Manager team is pleased to announce the fourth release candidate
> of WSO2 API Manager Tooling 3.1.0 version.
>
> The WSO2 API Manager tooling provides the capability to import and export
> APIs and Applications across multiple environments seamlessly. Hence it
> provides greater flexibility to create CI/CD pipelines for APIs and
> applications.
>
> Apart from migrating APIs and applications, it supports Kubernetes API
> operator to deploy and manage APIs in the Kubernetes cluster by reducing
> additional overheads for the DevOps.
>
> Please find the improvements and fixes related to this release in Fixed
> Issues.
>
> Download the API Manager Tooling Distribution from here.
>
> The tag to be voted upon is
> https://github.com/wso2/product-apim-tooling/releases/tag/v3.1.0-rc4
>
> Documentation:
> https://apim.docs.wso2.com/en/next/learn/api-controller/getting-started-with-wso2-api-controller/
>
> Please download, test the tool and vote.
>
>
> *[+] Stable - Go ahead and release*
>
> *[-] Broken - Do not release *(explain why)
>
>
>
> Best Regards,
> WSO2 API Manager Team
>
> --
> *Naduni Pamudika* | Senior Software Engineer | WSO2 Inc.
> (m) +94 (71) 9143658 | (w) +94 (11) 2145345 | (e) nad...@wso2.com
> [image: http://wso2.com/signature] 
>
>

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [DEV] [VOTE] Release WSO2 API Manager Tooling v3.1.0 RC3

2020-04-03 Thread Uvindra Dias Jayasinha
Tested the following:

Various scenarios involving super tenant, tenant, secondary user store users

1. Generate Keys
2. Export Apps

No blockers found

[+] Stable - Go ahead and release

On Thu, 2 Apr 2020 at 19:30, Naduni Pamudika  wrote:

> Hi All,
>
> WSO2 Api Manager team is pleased to announce the third release candidate
> of WSO2 API Manager Tooling 3.1.0 version.
>
> The WSO2 API Manager tooling provides the capability to import and export
> APIs and Applications across multiple environments seamlessly. Hence it
> provides greater flexibility to create CI/CD pipelines for APIs and
> applications.
>
> Apart from migrating APIs and applications, it supports Kubernetes API
> operator to deploy and manage APIs in the Kubernetes cluster by reducing
> additional overheads for the DevOps.
>
> Please find the improvements and fixes related to this release in Fixed
> Issues.
>
> Download the API Manager Tooling Distribution from here.
>
> The tag to be voted upon is
> https://github.com/wso2/product-apim-tooling/releases/tag/v3.1.0-rc3
>
> Documentation:
> https://apim.docs.wso2.com/en/next/learn/api-controller/getting-started-with-wso2-api-controller/
>
> Please download, test the tool and vote.
>
>
> *[+] Stable - Go ahead and release*
>
> *[-] Broken - Do not release *(explain why)
>
>
>
> Best Regards,
> WSO2 API Manager Team
>
> --
> *Naduni Pamudika* | Senior Software Engineer | WSO2 Inc.
> (m) +94 (71) 9143658 | (w) +94 (11) 2145345 | (e) nad...@wso2.com
> [image: http://wso2.com/signature] 
>
>

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Fine Grained Access Control for GraphQL APIs - Role Specific Depth Allocation

2019-11-18 Thread Uvindra Dias Jayasinha
@Ashera Silva, noticed that this thread is private. We need to discuss this on the
@Architecture List so that we can get wider feedback from the
community.

Might be a good idea to link to this[1] as well. There was a lot of
interest shown by the community regarding this so we can get feedback from
them as well.

[1] https://github.com/wso2/product-apim/issues/3184

On Mon, 4 Nov 2019 at 11:56, Sanjeewa Malalgoda  wrote:

> I feel it's good to define policies outside the API definition and link them
> with scopes. It will work in JWT and OAuth scenarios without issue. However,
> when it comes to basic auth there will be a small issue, as no direct scope
> information comes from that. But we can check the resource scopes (roles) and
> cross-check them with the user's roles. Then, if the role matching worked,
> consider the scopes to retrieve policies for GraphQL. The main reason behind this
> is that roles usually reside outside the API Management system. If we are to define
> something global for all resources within an API, then we can have something
> like an API-level scope which is an attribute of the API. Then we can pass that to the
> handler while publishing the API. Thoughts?
>
> Thanks,
> sanjeewa.
>
> On Fri, Nov 1, 2019 at 5:31 PM Ashera Silva  wrote:
>
>> Hi all,
>>
>> My project is to add fine-grained access control to GraphQL APIs. After
>> the initial project discussion, the main features suggested for the static
>> query analysis part are:
>> 1. query depth limitation
>> 2. query complexity limitation
>>
>> When considering the "query depth limitation", my implementation so far
>> provides a maximum query depth value. When the query from the GraphQL
>> request exceeds this limit, the query is blocked, so malicious queries
>> can be identified before they hit the GraphQL server. But this value is
>> common to all users. This is not fair, as different roles should be given
>> different limitations. The proposed suggestion is to give a
>> role-specific maximum depth limitation.
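As a rough sketch of the idea, a role-specific depth check could look like
the following. The Field tree type and the role-to-limit map are assumptions
for illustration, not the actual implementation:

import java.util.List;
import java.util.Map;

// Illustrative sketch: walk the parsed query tree and block the query before
// it reaches the GraphQL server if it is deeper than the role's limit.
public class QueryDepthValidator {

    public static class Field {
        final String name;
        final List<Field> selections;

        public Field(String name, List<Field> selections) {
            this.name = name;
            this.selections = selections;
        }
    }

    private final Map<String, Integer> roleDepthLimits; // e.g. {"admin": 10, "subscriber": 3}

    public QueryDepthValidator(Map<String, Integer> roleDepthLimits) {
        this.roleDepthLimits = roleDepthLimits;
    }

    public boolean isAllowed(Field root, String role) {
        int maxDepth = roleDepthLimits.getOrDefault(role, 1); // conservative default
        return depthOf(root) <= maxDepth;
    }

    private int depthOf(Field field) {
        int deepestChild = 0;
        for (Field child : field.selections) {
            deepestChild = Math.max(deepestChild, depthOf(child));
        }
        return 1 + deepestChild;
    }
}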
>>
>> Please refer to the google doc attached herewith for reference in detail.
>>
>> https://docs.google.com/document/d/1lsO-c9ajHs7Ru3bjApLUwrJNv4eaVZ-yVJUZNMG-ADU/edit?usp=sharing
>>
>> Your feedback with this regard is highly appreciated.
>>
>> --
>> *Ashera Silva* | Engineering - Intern | WSO2 Inc.
>> Mobile : +94702547925 | Email : ash...@wso2.com
>>
>> [image: http://wso2.com/signature] 
>>
>
>
> --
> *Sanjeewa Malalgoda*
> Software Architect | Associate Director, Engineering - WSO2 Inc.
> (m) +94 712933253 | (e) sanje...@wso2.com | (b) Blogger
> , Medium
> 
>
> GET INTEGRATION AGILE 
> Integration Agility for Digitally Driven Business
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] [VOTE] Release of WSO2 API Manager 3.0.0 RC3

2019-10-25 Thread Uvindra Dias Jayasinha
Tested Workflows for,

- API State change
- User Signup
- Application Creation
- Application Registration
- API Subscription
- API Product Subscription


No blockers found. +1 for release

On Fri, 25 Oct 2019 at 17:42, Dushani Wellappili  wrote:

> Tested the following,
>
> - Basic flow in API Creation, LifeCycle change, API Invocation for users
> with the separate creator, publisher permissions.
> - API Developer Portal visibility and Publisher access control
> - API Scopes
> - External Developer Portal Publishing
> - XACML
> - Developer Portal self signup
> - Admin Rest API v0.15 functionalities
> -Tested above for both tenant and super tenant in MSSQL 2017
>
> No blockers found. +1 for the release.
>
>
> *Dushani Wellappili*
> Senior Software Engineer - WSO2
>
> Email : dusha...@wso2.com
> Mobile : +94779367571
> Web : https://wso2.com/
>
>
>
>
> On Fri, Oct 25, 2019 at 3:04 PM Hasunie Adikari  wrote:
>
>> Hi,
>>
>> I have tested the following
>>
>> - Micro-gw API import and label import.
>> - Micro-gw analytics.
>> - API request/response validator.
>> - Self signup.
>> - Scope test.
>> - API invocation and analytics.
>> - Tested API manager scenarios in both super tenant and tenant.
>>
>> No blockers found. Hence, [+] Stable - go ahead and release.
>>
>> Regards,
>> Hasunie
>>
>> On Fri, Oct 25, 2019 at 2:52 PM Nalaka Senarathna 
>> wrote:
>>
>>> Hi All,
>>> Tested the followings.
>>> - Token cleanup
>>> - Caching
>>> - API /API product invocation and visibility
>>> - SDK feature
>>> - Pass end-user attributes to backend using JWT
>>> - Alert subscription
>>> - Basic flows in oracle and db2
>>>
>>> Stable and go ahead and release.
>>> Thanks & Regards.
>>> Nalaka
>>>
>>>
>>> On Fri, Oct 25, 2019 at 2:44 PM Hasunie Adikari 
>>> wrote:
>>>
 Hi,

 I have tested the following

 - Micro-gw API import and label import.
 - Micro-gw analytics.
 - API request/response validator.
 - Self signup.
 - Scope test.
 - API invocation and analytics.
 - Tested API manager scenarios in both super tenant and tenant.

 Regards,
 Hasunie


 On Fri, Oct 25, 2019 at 1:23 PM Dushan Silva  wrote:

> Hi,
> I have tested the following
> Authorization code grant type,
> JWT grant type,
> NTLM grant type,
> Password grant type,
> Client credentials grant type,
>
> Provisioning Out-of-Band OAuth Clients
> Application group sharing
> self registration
>
> No blockers found. +1 to go ahead and release.
>
>
>
> On Fri, Oct 25, 2019 at 12:20 PM Chamin Dias  wrote:
>
>> Hi,
>>
>> Tested the following scenarios in both the super tenant and tenant.
>> - API keys for securing APIs
>> - Localization / internationalisation
>> - Monetization (with in-built implementation)
>>
>> No blockers found. Hence, [+] Stable: go ahead and release.
>>
>> Thanks.
>>
>> On Fri, Oct 25, 2019 at 11:28 AM Mushthaq Rumy 
>> wrote:
>>
>>> Hi All,
>>>
>>> Tested the following scenarios in both super tenant and tenant.
>>> - API Creation, Publishing, Subscribing and invocation of APIs
>>> - Tested Publisher Access Control
>>> - Tested Store Visibility
>>> - Identity management features such as self sign up, password reset,
>>> password policy, account locking.
>>>
>>> No blockers found. Hence, [+] Stable - go ahead and release.
>>>
>>> Thanks & Regards,
>>> Mushthaq
>>>
>>> On Fri, Oct 25, 2019 at 3:52 AM Samitha Chathuranga <
>>> sami...@wso2.com> wrote:
>>>
 Hi All,

 We are pleased to announce the second release candidate of WSO2 API
 Manager 3.0.0.

 This release fixes the following issues.

- Fixes : product-apim
- Fixes : carbon-apimgt
- Fixes : analytics-apim

 Source and distribution,
 Runtime :
 https://github.com/wso2/product-apim/releases/tag/v3.0.0-rc3
 Analytics :
 https://github.com/wso2/analytics-apim/releases/tag/v3.0.0-rc3
 APIM Tooling :
 https://github.com/wso2/product-apim-tooling/releases/tag/v3.0.0-rc

 Please download, test the product and vote.

 [+] Stable - go ahead and release
 [-] Broken - do not release (explain why)

 Thanks,
 WSO2 API Manager Team


 --
 *Samitha 

Re: [Architecture] [Dev] [VOTE] Release of WSO2 API Manager 3.0.0 RC2

2019-10-24 Thread Uvindra Dias Jayasinha
Tested Workflows for,

- API State change
- User Signup
- Application Creation
- Application Registration
- API Subscription
- API Product Subscription


No blockers found. +1

On Thu, 24 Oct 2019 at 17:35, Ishara Cooray  wrote:

> Hi All,
>
> Tested the following.
> - API Creation, Publishing, Subscription in super tenant and tenant.
> - Reverse proxy configuration for publisher and devportal
> - CORS
> - Custom authorization header
> - Message Mediation Policies
>
> No blockers found. Hence +1.
>
> Thanks & Regards,
> Ishara Cooray
> Associate Technical Lead
> Mobile : +9477 262 9512
> WSO2, Inc. | http://wso2.com/
> Lean . Enterprise . Middleware
>
>
> On Thu, Oct 24, 2019 at 4:19 PM Mushthaq Rumy  wrote:
>
>> Hi All,
>>
>> Tested the following scenarios.
>> - API Creation, Publishing and Subscription
>> - Tested Publisher Access Control
>> - Tested Store visibility
>> - Identity management features such as self sign up, password reset,
>> password policy, account locking.
>>
>> No blockers found. Hence +1.
>>
>> Thanks & Regards,
>> Mushthaq
>>
>> On Thu, Oct 24, 2019 at 4:15 PM Prasanna Dangalla 
>> wrote:
>>
>>> Hi All,
>>>
>>> Tested API Creation, Publishing, Subscription JWT & OAuth Token
>>> generation and throttling.
>>>
>>> Hence +1 from me.
>>>
>>> Thanks
>>> *Prasanna Dangalla* | Associate Technical Lead | WSO2 Inc.
>>> (m) +94 718 112 751 | (e) prasa...@wso2.com
>>> [image: Signature.jpg]
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Oct 24, 2019 at 8:51 AM Mushthaq Rumy  wrote:
>>>
 Hi All,

 We are pleased to announce the second release candidate of WSO2 API
 Manager 3.0.0.

 This release fixes the following issues.

- Fixes : carbon-apimgt
- Fixes : product-apim
- Fixes : analytics-apim

 Source and distribution,
 Runtime :
 https://github.com/wso2/product-apim/releases/tag/v3.0.0-rc2
 Analytics :
 https://github.com/wso2/analytics-apim/releases/tag/v3.0.0-rc2
 APIM Tooling :
 https://github.com/wso2/product-apim-tooling/releases/tag/v3.0.0-rc

 Please download, test the product and vote.

 [+] Stable - go ahead and release
 [-] Broken - do not release (explain why)

 Thanks,
 WSO2 API Manager Team


 --
 Mushthaq Rumy
 *Senior Software Engineer*
 Mobile : +94 (0) 779 492140
 Email : musht...@wso2.com
 WSO2, Inc.; http://wso2.com/
 lean . enterprise . middleware.

 
 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture

>>> ___
>>> Dev mailing list
>>> d...@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>
>>
>> --
>> Mushthaq Rumy
>> *Senior Software Engineer*
>> Mobile : +94 (0) 779 492140
>> Email : musht...@wso2.com
>> WSO2, Inc.; http://wso2.com/
>> lean . enterprise . middleware.
>>
>> 
>> ___
>> Dev mailing list
>> d...@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Issue in getting the fileName when getting the API Documents content via REST API

2018-10-24 Thread Uvindra Dias Jayasinha
On Wed, 24 Oct 2018 at 15:05, Dushani Wellappili  wrote:

> Hi all,
>
> As discussed, now all API documentation data will be stored in only one
> table, which is the AM_API_DOCS with the following table schema.
>
> `AM_API_DOCS` (
>   `UUID` VARCHAR(255),
>   `API_ID` VARCHAR(255),
>   `NAME` VARCHAR(255),
>   `SUMMARY` VARCHAR(1024),
>   `TYPE` VARCHAR(255),
>   `OTHER_TYPE_NAME` VARCHAR(255),
>   `CONTENT` VARCHAR(1024),
>   `CONTENT_BINARY_VALUE` LONGBLOB,
>   `FILE_NAME` VARCHAR(255),
>   `SOURCE_TYPE` VARCHAR(255),
>   `VISIBILITY` VARCHAR(30),
>   `AM_DOC_PERMISSION` int(11) DEFAULT '7',
>   CREATED_BY VARCHAR(100),
>   CREATED_TIME TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6),
>   UPDATED_BY VARCHAR(100),
>   LAST_UPDATED_TIME TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6),
>   PRIMARY KEY (`UUID`),
>   FOREIGN KEY (`API_ID`) REFERENCES `AM_API`(`UUID`) ON UPDATE CASCADE ON 
> DELETE CASCADE
>  )
>
> The REST API call for adding documentation will add the values except for
> CONTENT_BINARY_VALUE column when SOURCE_TYPE is 'FILE'. The CONTENT column
> will have either the documentation URL or the inline content.
>
> When adding a documentation of source type either URL or INLINE, there
> will be only one REST call needed, which is to add a new Document to the API. When
> adding a documentation of source type 'FILE', we would first do the 'Add a
> new Document to API' REST call and then the 'Upload content of an API
> document' call, which will update the same record's CONTENT_BINARY_VALUE column.
> Similarly, for the REST call of getting a document of an API, we will get
> values except the value in CONTENT_BINARY_VALUE column. For the REST call
> of getting content of the document of an API, the CONTENT_BINARY_VALUE
> column value will be retrieved  after checking whether the SourceType is
> 'FILE'.
>
> One of the few concerns I have on the above design is, whether it is okay
> to retrieve both URL value and inline content in 'Get a document of an API'
> call which we expect only to retrieve API doc metadata. If so, I suppose we
> can't use the term API doc metadata for that REST API operation description
> anymore? If not, we can change the REST API to only retrieve doc metadata
> without the CONTENT column value and retrieve doc content for all 3 types
> of docs using 'Get the content of an API document' REST call.
>

This table is no longer just a metadata table, so I'm +1 for stopping the
use of the word metadata in the REST API. We need to be able to retrieve
the Inline and URL content in a single call when we are listing the
documents in the UI.

The other concern is, in the above schema, we have only one set of UPDATED_BY,
LAST_UPDATED_TIME values for each document. So for both REST calls of
getting the document (metadata) and getting the document content data, is it
okay to check with the same fingerprint (LAST_UPDATED_TIME), or do we need
to add separate audit columns for content updates, such as
CONTENT_LAST_UPDATED_TIME and CONTENT_UPDATED_BY?

We can simply update UPDATED_BY and LAST_UPDATED_TIME again at the time when
we actually do a document upload. That way the fingerprint will always
be the latest. There is no need to add another column since the meta and content
data are actually related.

Appreciate your response.

Thank you

*Dushani Wellappili*
Software Engineer - WSO2

Email : dusha...@wso2.com
Mobile : +94779367571
Web : https://wso2.com/




On Tue, Oct 23, 2018 at 11:09 AM Harsha Kumara  wrote:

>
>
> On Tue, Oct 23, 2018 at 10:07 AM Uvindra Dias Jayasinha 
> wrote:
>
>> @Bhathiya Jayasekara, yes, this is also an option. In reality though, very few customers upload
>> lots of additional docs. We could just rename the DOC_META_DATA table to DOCS
>> and store the docs in that table itself. So at the very minimum we have a
>> swagger and ballerina definition per API stored in the RESOURCES table.
>>
> Definitely we should have a separate table for docs.
>
>>
>> Can we make a call on this soon? Implementation is getting delayed
>> because of this.
>>
>> On Mon, 22 Oct 2018 at 21:08, Bhathiya Jayasekara 
>> wrote:
>>
>>> This looks ok. Btw, I was wondering if it's a good idea to have all
>>> files (swaggers, wsdls, docs etc.) in a single table. Won't that make the
>>> table grow faster and make querying slower? How about keeping a separate
>>> table for this? Then we can have a foreign key as well.
>>>
>>> Thanks,
>>> Bhathiya
>>>
>>> On Mon, Oct 22, 2018 at 2:57 PM Uvindra Dias Jayasinha 
>>> wrote:
>>>
>>>> +1, CONTENT is fine
>>>>
>>>> On Mon, 22 Oct 2018 at 14:45, Mushthaq Rumy  wrote:
>>>>
>>>>&g

Re: [Architecture] [APIM 3.0.0] get rid of relational databases (MySQL, ...)

2018-10-03 Thread Uvindra Dias Jayasinha
Hi Youcef,

We dont provide custom development solutions of our code through this
channel. This is a significant customization of our code. If you are a
paying customer who has subscribed for support I suggest that you log a
support ticket and have a discussion with the support team in order to
clarify your requirement. Depending on the outcome of this discussion we
can decide on an engineering allocation in order to build a solution for
your requirement.

On Tue, 2 Oct 2018 at 15:50, Youcef HILEM  wrote:

> Hi Uvindra,
>
> I come back to this topic because it is of great importance for us to have
> our multicloud deployment of wso2 APIM Cassandra with NoSQL bases to
> overcome all sync problems with MySQL.
>
> Could you provide an implementation of DAOs
> (
> https://github.com/wso2/carbon-apimgt/tree/6.x/components/apimgt/org.wso2.carbon.apimgt.impl/src/main/java/org/wso2/carbon/apimgt/impl/dao
> )
> in JPA?
>
> So, we could use JPA with Cassandra :
> - https://github.com/Impetus/Kundera#supported-datastores
> -
>
> http://quicktechcuisine.blogspot.com/2016/01/using-jpa-with-cassandra-via-kundera.html
>
> Thanks
> Youcef HILEM
>
>
>
>
>
> --
> Sent from:
> http://wso2-oxygen-tank.10903.n7.nabble.com/WSO2-Architecture-f62919.html
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] [C4] Custom header field for OAuth2 token per tenant

2018-09-19 Thread Uvindra Dias Jayasinha
Correction, found the docs https://docs.wso2.com/display/AM250/Securing+APIs.
Thanks for pointing it out Viduranga

On 19 September 2018 at 11:36, Uvindra Dias Jayasinha 
wrote:

> I can't find any documentation for this feature; has this been documented?
>
>
> Without documentation the visibility of this feature is lost
>
> On 14 December 2017 at 13:49, Viduranga Gunarathne 
> wrote:
>
>> Hi,
>>
>> As of now this feature is being improved to support custom authorization
>> headers in a "per API" basis.
>>
>> The proposed design for this is as follows.
>>
>>
>>- A new attribute will be added to the API named "customOAuth2Header"
>>[String]. This value will be saved to the registry under the specific API when
>>creating the API.
>>- When the API is published, a custom authorization header will be
>>checked in the following resources, in the specified order:
>>   1. Registry where the API is saved.
>>   2. tenant-conf.json of the current tenant
>>   3. api-manager.xml
>>- If a custom header is *NOT AVAILABLE* in any of the above
>>resources, then the default "Authorization" authorization header would work.
>>- If a custom header is *AVAILABLE*, then that value will be added to
>>the synapse config of the specific API and will be published in the Gateway.
>>
>> *Note:*
>>
>>1. If a user has configured a custom authorization header when
>>creating an API (per-API header) or in the tenant-conf.json (per-tenant
>>header), then he/she will have to re-publish the APIs in order for the
>>changes to take effect.
>>
>> The issue faced with this implementation is that the custom
>> authorization header defined this way is not allowed to be passed, since it is
>> not among the list of allowed headers. Hence the proposed solution is to add a
>> "customOAuth2Header" property to the CORSRequestHandler so that it can be
>> appended to the list of allowed headers when this handler is executed.
>>
>> The implementation will be as follows:
>>
>> Sample synapse config of an API:
>>
>>   <api ...>
>>
>>      ...
>>
>>      <property name="customOAuth2Header" value="ENG_Auth"/>
>>
>>      ...
>>
>>   </api>
>>
>>
>> This implementation will overcome the requirement to enable API-based
>> CORS configuration and add the custom header to the list of allowed
>> headers.
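A minimal sketch of that lookup order, assuming hypothetical accessors for
each configuration source (none of these names come from the actual code
base):

// Checked in order: API registry entry, tenant-conf.json, api-manager.xml,
// falling back to the standard "Authorization" header.
public class AuthHeaderResolver {

    interface ConfigSource {
        String customOAuth2Header(); // returns null when not configured
    }

    public static String resolve(ConfigSource apiRegistry, ConfigSource tenantConf,
                                 ConfigSource apiManagerXml) {
        for (ConfigSource source : new ConfigSource[]{apiRegistry, tenantConf, apiManagerXml}) {
            String header = source.customOAuth2Header();
            if (header != null && !header.isEmpty()) {
                return header;
            }
        }
        return "Authorization"; // default when no custom header is configured
    }
}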
>>
>> Ideas and suggestions are highly appreciated!
>>
>> Thanks,
>> Viduranga.
>>
>> On Thu, Dec 14, 2017 at 11:57 AM, Nuwan Dias  wrote:
>>
>>> Hi all,
>>>
>>> We are not mandating the change of the 'Authorization' header. That will
>>> be the header we support by default.
>>>
>>> The problem here is, when users adopt API Management to proxy
>>> APIs/Services that are already secured using the Authorization header, they
>>> hit a roadblock because the API Gateway also now requires an Authorization
>>> header. There have also been cases where client apps already use different
>>> (non-standard) headers for the same purpose (not everybody adheres to
>>> specs). As an API Management vendor it's impractical for us to ask users to
>>> fix and redistribute their client apps if they are to use our Gateway. They
>>> would simply pick a competitor product that can do that easily. These are
>>> some of the reasons we need to support this as a feature, but of course
>>> default to the standard.
>>>
>>> Thanks,
>>> NuwanD.
>>>
>>> On Thu, Dec 14, 2017 at 6:48 AM, Viduranga Gunarathne <
>>> vidura...@wso2.com> wrote:
>>>
>>>> Hi Saneth,
>>>>
>>>> Thanks for your views. One important reason to implement this is that
>>>> this feature has been requested, so that the above mentioned requirements
>>>> can be met.
>>>>
>>>> Thanks,
>>>> Viduranga.
>>>>
>>>> On Tue, Dec 12, 2017 at 11:10 AM, Saneth Dharmakeerthi <
>>>> sane...@wso2.com> wrote:
>>>>
>>>>>  Hi Viduranga,
>>>>>
>>>>> Sorry for late reply, please find my view bellow
>>>>>
>>>>> i) In a scenario where both the back end and the API-M requests
>>>>> authoriza

Re: [Architecture] APIM 3.0.0 - Supporting product upgrades as a first class feature

2018-09-03 Thread Uvindra Dias Jayasinha
On 2 September 2018 at 15:07, Harsha Kumara  wrote:

> According to my understanding we going to do the migration on demand.
> Which means particular resource will be migrated when read is made for that
> particular resource. Let's say we already migrated a particular API, I
> think still we will need to do a check on version column to decide whether
> we need a migration or not right? Which might add a overhead.
>

With on-the-fly migrations, the validation to see if a migration should be
performed or not when data is retrieved cannot be avoided. But the overhead
is negligible since we are only retrieving an additional column for a
specific entity (example: API x). We just need an additional if condition to
compare the string of this column with the current product version string.
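A minimal sketch of that check, assuming a CURRENT_VERSION constant and the
PRODUCT_VERSION column described in this thread; the names are illustrative,
not the real DAO code:

import java.sql.ResultSet;
import java.sql.SQLException;

// The only extra work on the read path: fetch one more column and compare strings.
public class OnTheFlyCheck {

    static final String CURRENT_VERSION = "3.1.0";

    static boolean needsMigration(ResultSet rs) throws SQLException {
        String rowVersion = rs.getString("PRODUCT_VERSION");
        return !CURRENT_VERSION.equals(rowVersion);
    }
}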


> API Manager generally contains artifacts which reside in the file
> system. In this case we would require migrating all the file system
> artifacts; we can't perform on-the-fly upgrades on these APIs. Also,
> operations like key validation might access the database directly to retrieve
> information. In this case we will need to ensure all components go
> through the standard interfaces to retrieve data, including our dependent
> components.
>

We are talking about APIM 3 specifically here; AFAIK we don't have anything
in the file system. We are using the DB as a single source of truth to
simplify managing everything. In general, having a uniform method of
accessing data by all components is desirable in order to avoid bugs. The
DAO layer is that uniform method, and the migration will be performed at the
DAO layer.


> What will be the impact on the extensions that allow users to extend the
> product features? Having partially migrated data, especially during list
> operations, might cause issues for them.
>

I'm not clear exactly on the impact on extensions that you have mentioned. Yes,
partially migrated data can be a concern if they start using the product
before migration has been completed properly. I mentioned a strategy for
avoiding this earlier: triggering a GET on each entity in order to force
the on-the-fly migration upfront.


> Let's say a customer is upgrading from 3.0.0 to 3.5.0 without using the
> intermediate versions; do we require product-version-specific
> migration methods to handle this scenario, or will we write a single method to
> gradually migrate from 3.0.0 to 3.5.0? Sometimes migrating the version of an API
> will involve modifying the data of several tables which can share
> data across several APIs; will it be a good idea to have partially migrated
> data?
>

From a developer point of view it makes sense to code the migration for
each version separately since this is easier to manage. The migration
implementation can decide which migrations need to be run based on the
version of the data. See the sample code I have provided in the beginning
to see what this might look like. Agreed that partially migrated data can
be inconsistent, hence I have proposed triggering the on-the-fly migration
of everything upfront.


> Instead of having a partial migration which will happen on demand, how
> about having a full migration during startup of the server? If we are going to
> follow this approach, I believe we need to adopt it everywhere. Otherwise
> our dependent components will have their own way of migrating artifacts
> while we are using on-the-fly migrations.
>

Agreed. The best approach is triggering the migration upfront as stated
earlier. If we can do this for APIM first, we can propose it for all other
components as well.


> On Wed, Aug 22, 2018 at 1:17 PM Uvindra Dias Jayasinha 
> wrote:
>
>> See my responses inline
>>
>> On 22 August 2018 at 11:54, Nuwan Dias  wrote:
>>
>>> I have a few questions on the design.
>>>
>>> 1. Who will be updating the Audit table and at which point?
>>>
>>
>> This can be done whenever our DAOFactory is called to create an instance
>> of a given DAO object, when a DAO operation is performed for the first
>> time. We already have a setup() method that adds pre-requisite data into
>> the DB required for the product to function (Resource Categories, Default
>> Labels, API types)[1]. So inserting into the Audit table can be performed at
>> this point as well.
>>
>>
>> [1] https://github.com/wso2/carbon-apimgt/blob/master/components/apimgt/org.wso2.carbon.apimgt.core/src/main/java/org/wso2/carbon/apimgt/core/dao/impl/DAOFactory.java#L379
>>
>>
>>> 2. Who will be calling the method in the interface and at which point? I
>>> understood it as it will be called upon the GET operation of each entity.
>>> If that's the case it'll cause problems, especially with listing operations.
>>>
>>
>> The

Re: [Architecture] APIM 3.0.0 - Supporting product upgrades as a first class feature

2018-08-22 Thread Uvindra Dias Jayasinha
See my responses inline

On 22 August 2018 at 11:54, Nuwan Dias  wrote:

> I have a few questions on the design.
>
> 1. Who will be updating the Audit table and at which point?
>

This can be done whenever our DAOFactory is called to create an instance of
a given DAO object, when a DAO operation is performed for the first time.
We already have a setup() method that adds pre-requisite data into the DB
required for the product to function (Resource Categories, Default Labels,
API types)[1]. So inserting into the Audit table can be performed at this point
as well.


[1]
https://github.com/wso2/carbon-apimgt/blob/master/components/apimgt/org.wso2.carbon.apimgt.core/src/main/java/org/wso2/carbon/apimgt/core/dao/impl/DAOFactory.java#L379


> 2. Who will be calling the method in the interface and at which point? I
> understood it as it will be called upon the GET operation of each entity.
> If that's the case it'll cause problems, especially with listing operations.
>

The method will be called by the DAO layer when it is retrieving
data (example: getAPI(String apiID)). In the listing operation you have
mentioned, we call a separate method to get the summarized data of all APIs.
So the idea here is that we do this change only when we are getting a
specific entity instead of a list. The advantage of using a GET is that the
caller does not need to change the entity themselves. As far as they are
concerned, they are doing a read operation.

NuwanD mentioned that we have been following the method of doing on-the-fly
migrations when a PUT is done on an entity. The disadvantage of this is
that we need to maintain code that knows how to render data that hasn't been
migrated when a GET is performed on it. So 5 releases down the line we
are going to have to have separate code that knows how to render data from
all the previous releases. What I'm proposing is that instead of having multiple
pieces of rendering logic for the GET, we can decouple this into multiple
migration steps which will reside in a separate class. So we don't need to have
migration code in PUT and rendering code in GET, only migration code
invoked on GET. This is cleaner and involves less code.

One disadvantage of the method I have proposed is: what if some of the data
displayed during listing needs to be migrated? In that case, since the
retrieval of the specific data entity hasn't been called, migration is yet
to take place. So the listing shows non-migrated data, which is wrong. We can
overcome this limitation by simply getting the IDs of all entities and
doing a GET for each specific entity via the REST API upfront. We can ship
a script that does this, and the customer only needs to run it the
first time before using the product, and everything will get migrated.
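A rough sketch of what such a script could do, assuming a hypothetical
listing call and endpoint path (the real REST API paths and paging would
need to be confirmed):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// One-off "warm up": GET every entity once so the on-the-fly migration
// in the DAO layer runs for all rows up front.
public class MigrationWarmup {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (String id : fetchAllApiIds()) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://localhost:9443/api/am/publisher/apis/" + id))
                    .header("Authorization", "Bearer " + System.getenv("APIM_TOKEN"))
                    .build();
            // The GET itself triggers the row migration under the hood
            client.send(request, HttpResponse.BodyHandlers.discarding());
        }
    }

    static List<String> fetchAllApiIds() {
        // Placeholder: page through the listing endpoint and collect the IDs
        return List.of();
    }
}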

3. Do we need an interface for each entity type? If yes, that doesn't sound
right because then we'll end up with just 1 implementation of each
interface, which is not very useful. The number of implementations to
maintain is also a concern due to the high number of entity types we'll end
up with.

The reason for using an interface in this case is not for polymorphic
reasons. This is simply to decouple the migration logic from our standard
DAO code. But in this case I concede that we don't have to have an
interface; we can simply have a separate class that takes care of this.
Since the DAO layer is not unit tested, we don't need to have migration code as
an interface implementation in order to inject it via a constructor to
improve testability.

We could have a single class that includes all the migration logic for each
entity to make this more manageable.
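For illustration, such a class might chain the per-version steps like this;
ApiData and the step methods are made-up stand-ins:

// Sketch only: one class owning every migration step for a single entity,
// so the DAO code stays free of per-release details.
public class ApiMigrations {

    static class ApiData {
        String context;
        String productVersion;
    }

    // Runs whichever per-version steps are still pending for this row
    static ApiData migrateToCurrent(ApiData data) {
        if ("3.0.0".equals(data.productVersion)) {
            from300To310(data);
            data.productVersion = "3.1.0";
        }
        if ("3.1.0".equals(data.productVersion)) {
            from310To320(data);
            data.productVersion = "3.2.0";
        }
        return data;
    }

    private static void from300To310(ApiData data) {
        // 3.0.0 -> 3.1.0 data changes for one row go here
    }

    private static void from310To320(ApiData data) {
        // 3.1.0 -> 3.2.0 data changes for one row go here
    }
}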

Thanks,
NuwanD.

>
> On Mon, Aug 20, 2018 at 5:09 PM Uvindra Dias Jayasinha 
> wrote:
>
>>
>>
>> On 20 August 2018 at 16:58, Ishara Cooray  wrote:
>>
>>> For me, PRODUCT_VERSION column in every table seems to be redundant.
>>> Could you please explain the reason for introducing this column in each
>>> table? Is this for auditing?
>>>
>>
>> The reason for this is so the on-the-fly migration code is able to detect
>> if it needs to migrate a given row. For example, if running version 3.1.0:
>>
>> 1. A row from a table is retrieved.
>> 2. If the value of the PRODUCT_VERSION column of the row is 3.0.0,
>> migration code will run to convert the data and update the value of
>> PRODUCT_VERSION to 3.1.0 once row migration has occurred.
>> 3. On subsequent retrievals of the same row, since PRODUCT_VERSION is
>> 3.1.0, migration code will not execute against the row.
>>
>>
>>> Thanks & Regards,
>>> Ishara Cooray
>>> Senior Software Engineer
>>> Mobile : +9477 262 9512
>>> WSO2, Inc. | http://wso2.com/
>>> Lean . Enterprise . Middleware
>>>
>>> On Mon, Aug 20, 2018 at 4:03 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>&

Re: [Architecture] APIM 3.0.0 - Supporting product upgrades as a first class feature

2018-08-20 Thread Uvindra Dias Jayasinha
On 20 August 2018 at 16:58, Ishara Cooray  wrote:

> For me, PRODUCT_VERSION column in every table seems to be redundant.
> Could you please explain the reason for introducing this column in each
> table? Is this for auditing?
>

The reason for this is so the on-the-fly migration code is able to detect
if it needs to migrate a given row. For example, if running version 3.1.0:

1. A row from a table is retrieved.
2. If the value of the PRODUCT_VERSION column of the row is 3.0.0,
migration code will run to convert the data and update the value of
PRODUCT_VERSION to 3.1.0 once row migration has occurred.
3. On subsequent retrievals of the same row, since PRODUCT_VERSION is 3.1.0,
migration code will not execute against the row.


> Thanks & Regards,
> Ishara Cooray
> Senior Software Engineer
> Mobile : +9477 262 9512
> WSO2, Inc. | http://wso2.com/
> Lean . Enterprise . Middleware
>
> On Mon, Aug 20, 2018 at 4:03 PM, Uvindra Dias Jayasinha 
> wrote:
>
>> Small clarification regarding this statement,
>>
>> For the 3.0.0 release we just need to implement steps *1* and *2* above.
>>> Step *3* can be done for all subsequent releases.
>>>
>>
>> I specifically meant the changes to the DB schema when it comes to steps
>> 1 and 2. Obviously no migration logic will be needed for 3.0.0 itself.
>>
>> On 20 August 2018 at 15:58, Uvindra Dias Jayasinha 
>> wrote:
>>
>>> In the past the APIM product has relied on an external component such as
>>> a migration client for upgrading from a given product version to a higher
>>> version.The end user was required to configure the latest product that they
>>> are upgrading to against their current data(databases, synapse files,
>>> registry) and run the migration client manually to upgrade the product.
>>> This can be a cumbersome and error prone process to accomplish for end
>>> users, making product version upgrades time consuming.
>>>
>>> To overcome the above problem, on-the-fly upgrades were proposed, where
>>> the product code detects whether relevant data being accessed needs to be
>>> migrated to the latest version and migrates it as the code for
>>> the respective feature is executed. Upgrading product data is
>>> much easier from 3.0.0 onwards because all data related to the product is
>>> stored in a central database. This means that the end user does not need to
>>> get involved in upgrading; it happens without them even being aware, as they
>>> use the latest version of the product by pointing it against their current
>>> database. This makes for a more pleasant user experience when upgrading,
>>> moving the burden of the upgrade to the developer, who builds it into the
>>> functional code itself.
>>>
>>> The following outlines a design that can be supported from 3.0.0 onwards
>>> as a uniform way of handling product upgrades. This is inspired by
>>> the methodology used by FlywayDB to enable DB migrations[1], but also
>>> takes into account the requirement of being able to run on the fly at
>>> runtime. (Note: DB schema changes between releases will need to be handled
>>> via DB-vendor-specific scripts prepared by the team to be run by the
>>> customer against their DB.)
>>>
>>>
>>> *1.* A new table will be added to the schema called
>>> PRODUCT_VERSION_AUDIT to track the product version upgrades that take place
>>> on a given dataset
>>>
>>> PRODUCT_VERSION_AUDIT
>>> VERSION VARCHAR(5)
>>> CREATED_TIME TIMESTAMP(6)
>>>
>>> If a user begins using APIM version 3.0.0 and then upgrades to version
>>> 3.1.0 the table will contain the following values,
>>>
>>> VERSION CREATED_TIME
>>> 3.0.0 2018-11-11 13:23:44
>>> 3.1.0 2019-10-14 9:26:22
>>>
>>> This gives a historical view of the product versions a customer has been
>>> using. A new row will be inserted into the table when a given product
>>> version is started for the first time.
>>>
>>>
>>>
>>> *2*. Each table in the database will have a new column called
>>> PRODUCT_VERSION(VARCHAR(5)) added. When a row is inserted for the first
>>> time it will populate this column with the current product version being
>>> used.
>>> For example the AM_API table could have the following entries for a
>>> customer using APIM 3.0.0,
>>>
>>> UUID PROVIDER NAME VERSION CONTEXT PRODUCT_VERSION
>>> 123e4567-e89b-12d3-a456-42665544 admin abc 1.0.0 /abc 3.0.0
>>> 0011223

Re: [Architecture] APIM 3.0.0 - Supporting product upgrades as a first class feature

2018-08-20 Thread Uvindra Dias Jayasinha
Small clarification regarding this statement,

For the 3.0.0 release we just need to implement steps *1* and *2* above.
> Step *3* can be done for all subsequent releases.
>

I specifically meant the changes to the DB schema when it comes to steps 1
and 2. Obviously no migration logic will be needed for 3.0.0 itself.

On 20 August 2018 at 15:58, Uvindra Dias Jayasinha  wrote:

> In the past the APIM product has relied on an external component such as a
> migration client for upgrading from a given product version to a higher
> version.The end user was required to configure the latest product that they
> are upgrading to against their current data(databases, synapse files,
> registry) and run the migration client manually to upgrade the product.
> This can be a cumbersome and error prone process to accomplish for end
> users, making product version upgrades time consuming.
>
> To overcome the above problem, on-the-fly upgrades were proposed, where the
> product code detects whether relevant data being accessed needs to be migrated
> to the latest version and migrates it as the code for
> the respective feature is executed. Upgrading product data is much easier from
> 3.0.0 onwards because all data related to the product is stored in a
> central database. This means that the end user does not need to get involved
> in upgrading; it happens without them even being aware, as they use the
> latest version of the product by pointing it against their current
> database. This makes for a more pleasant user experience when upgrading,
> moving the burden of the upgrade to the developer, who builds it into the
> functional code itself.
>
> The following outlines a design that can be supported from 3.0.0 onwards
> as a uniform way of handling product upgrades. This is inspired by
> the methodology used by FlywayDB to enable DB migrations[1], but also
> takes into account the requirement of being able to run on the fly at
> runtime. (Note: DB schema changes between releases will need to be handled
> via DB-vendor-specific scripts prepared by the team to be run by the
> customer against their DB.)
>
>
> *1.* A new table will be added to the schema called PRODUCT_VERSION_AUDIT
> to track the product version upgrades that take place on a given dataset
>
> PRODUCT_VERSION_AUDIT
> VERSION VARCHAR(5)
> CREATED_TIME TIMESTAMP(6)
>
> If a user begins using APIM version 3.0.0 and then upgrades to version
> 3.1.0 the table will contain the following values,
>
> VERSION CREATED_TIME
> 3.0.0 2018-11-11 13:23:44
> 3.1.0 2019-10-14 9:26:22
>
> This gives a historical view of the product versions a customer has been
> using. A new row will be inserted into the table when a given product
> version is started for the first time.
>
>
>
> *2*. Each table in the database will have a new column called
> PRODUCT_VERSION(VARCHAR(5)) added. When a row is inserted for the first
> time it will populate this column with the current product version being
> used.
> For example the AM_API table could have the following entries for a
> customer using APIM 3.0.0,
>
> UUID PROVIDER NAME VERSION CONTEXT PRODUCT_VERSION
> 123e4567-e89b-12d3-a456-42665544 admin abc 1.0.0 /abc 3.0.0
> 00112233-4455-6677-8899-aabbccddeeff admin xyz 1.0.0 /xyz 3.0.0
>
>
> Let's assume when upgrading to 3.1.0 the leading '/' character in the
> context needs to be removed. On-the-fly migration code will run when a
> given row is accessed by the DAO layer to remove the '/'. Once the
> migration of the row is completed, the PRODUCT_VERSION column will be
> updated with the value 3.1.0 to signify that the migration for this row has
> been completed. The PRODUCT_VERSION column can be validated to check if
> migration code needs to be executed. So assuming the API abc is accessed
> first, the table will look as follows after migration,
>
>
> UUID PROVIDER NAME VERSION CONTEXT PRODUCT_VERSION
> 123e4567-e89b-12d3-a456-42665544 admin abc 1.0.0 abc 3.1.0
> 00112233-4455-6677-8899-aabbccddeeff admin xyz 1.0.0 /xyz 3.0.0
>
>
> As a pre-requisite, the product team will need to create the respective DB
> scripts for the schema changes that will take place with a given release.
> These will only include schema modifications. The customer will need to run
> them manually against their DB, but the actual data migration will take place
> automatically under the hood.
>
>
> *3*. New Java interfaces will be added for each DB entity that will
> responsible for migrating the respective entity. For example for APIs we
> can have,
>
> public interface APIMigrator {
> API migrate(PreparedStatement statement) throws MigrationException;}
>
>
>
> This will accept the PreparedStatement created for data retrieval an

[Architecture] APIM 3.0.0 - Supporting product upgrades as a first class feature

2018-08-20 Thread Uvindra Dias Jayasinha
In the past the APIM product has relied on an external component such as a
migration client for upgrading from a given product version to a higher
version.The end user was required to configure the latest product that they
are upgrading to against their current data(databases, synapse files,
registry) and run the migration client manually to upgrade the product.
This can be a cumbersome and error prone process to accomplish for end
users, making product version upgrades time consuming.

To overcome the above problem, on-the-fly upgrades were proposed, where the
product code detects whether the data being accessed needs to be migrated
to the latest version and migrates it when the respective feature is used.
Upgrading product data is much easier from 3.0.0 onwards because all data
related to the product is stored in a central database. This means that the
end user does not need to get involved in upgrading; it happens without
them even being aware, as they use the latest version of the product by
pointing it against their current database. This makes for a more pleasant
user experience when upgrading, shifting the burden of the upgrade onto the
developer, who builds it into the functional code itself.

The following outlines a design, supported from 3.0.0 onwards, for handling
product upgrades in a uniform way. It is inspired by the methodology used
by FlywayDB to enable DB migrations[1], but also takes into account the
requirement of being able to run on the fly at runtime. (Note: DB schema
changes between releases will still need to be handled via DB vendor
specific scripts prepared by the team, to be run by the customer against
their DB.)


*1.* A new table will be added to the schema called PRODUCT_VERSION_AUDIT
to track the product version upgrades that take place on a given dataset.

PRODUCT_VERSION_AUDIT
VERSION VARCHAR(5)
CREATED_TIME TIMESTAMP(6)

If a user begins using APIM version 3.0.0 and then upgrades to version
3.1.0, the table will contain the following values,

VERSION CREATED_TIME
3.0.0 2018-11-11 13:23:44
3.1.0 2019-10-14 9:26:22

This gives a historical view of the product versions a customer has been
using. A new row will be inserted into the table when a given product
version is started for the first time.
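
As a rough illustration of that first-startup bookkeeping, a JDBC sketch
using the table and columns described above (everything else, including how
the connection is obtained and the exact SQL dialect, is an assumption):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ProductVersionAudit {

    // On startup, record the running product version against the dataset if
    // this is the first time that version has been started.
    // (Sketch only: error handling and SQL dialect will differ in practice.)
    static void recordVersionIfAbsent(Connection conn, String version)
            throws SQLException {
        try (PreparedStatement check = conn.prepareStatement(
                "SELECT 1 FROM PRODUCT_VERSION_AUDIT WHERE VERSION = ?")) {
            check.setString(1, version);
            try (ResultSet rs = check.executeQuery()) {
                if (rs.next()) {
                    return; // this version has already been recorded
                }
            }
        }
        try (PreparedStatement insert = conn.prepareStatement(
                "INSERT INTO PRODUCT_VERSION_AUDIT (VERSION, CREATED_TIME) "
                        + "VALUES (?, CURRENT_TIMESTAMP)")) {
            insert.setString(1, version);
            insert.executeUpdate();
        }
    }
}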



*2*. Each table in the database will have a new column called
PRODUCT_VERSION (VARCHAR(5)) added. When a row is inserted for the first
time, this column will be populated with the current product version being
used.
For example, the AM_API table could have the following entries for a
customer using APIM 3.0.0,

UUID PROVIDER NAME VERSION CONTEXT PRODUCT_VERSION
123e4567-e89b-12d3-a456-42665544 admin abc 1.0.0 /abc 3.0.0
00112233-4455-6677-8899-aabbccddeeff admin xyz 1.0.0 /xyz 3.0.0


Let's assume that when upgrading to 3.1.0 the leading '/' character in the
context needs to be removed. On-the-fly migration code will run when a
given row is accessed by the DAO layer to remove the '/'. Once the
migration of the row is completed, the PRODUCT_VERSION column will be
updated with the value 3.1.0 to signify that the migration for this row has
been completed. The PRODUCT_VERSION column can be checked to decide whether
migration code needs to be executed. So assuming the API abc is accessed
first, the table will look as follows after migration,


UUID PROVIDER NAME VERSION CONTEXT PRODUCT_VERSION
123e4567-e89b-12d3-a456-42665544 admin abc 1.0.0 abc 3.1.0
00112233-4455-6677-8899-aabbccddeeff admin xyz 1.0.0 /xyz 3.0.0


As a prerequisite, the product team will need to create the respective DB
scripts for the schema changes that take place with a given release. These
will only include schema modifications. Customers will need to run them
manually against their DB, but the actual data migration will take place
automatically under the hood.
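
For the leading '/' example above, the row-level migration step could look
roughly like the following sketch (in practice the DAO layer would fold
this into its existing queries; the SQL and version strings here are
assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ApiContextMigration {

    // Migrates a single AM_API row from 3.0.0 to 3.1.0 by stripping the
    // leading '/' from CONTEXT and stamping the row with the new version.
    static void migrateRowTo310(Connection conn, String uuid, String context)
            throws SQLException {
        String newContext = context.startsWith("/") ? context.substring(1) : context;
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE AM_API SET CONTEXT = ?, PRODUCT_VERSION = '3.1.0' WHERE UUID = ?")) {
            ps.setString(1, newContext);
            ps.setString(2, uuid);
            ps.executeUpdate();
        }
    }
}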


*3*. New Java interfaces will be added for each DB entity, each of which
will be responsible for migrating the respective entity. For example, for
APIs we can have,

public interface APIMigrator {
    API migrate(PreparedStatement statement) throws MigrationException;
}



This will accept the PreparedStatement created for data retrieval and
return the migrated API object. The implementation of the above could look
as follows,

public class APIMigratorImpl implements APIMigrator {

    public API migrate(PreparedStatement statement) throws MigrationException {
        API api = null;
        try (ResultSet rs = statement.executeQuery()) {
            while (rs.next()) {
                String dataProductVersion = rs.getString("PRODUCT_VERSION");

                // Assume that currentProductVersion == "3.2.0"
                String currentProductVersion = "3.2.0";
                while (!currentProductVersion.equals(dataProductVersion)) {
                    if ("3.0.0".equals(dataProductVersion)) {
                        // Logic to migrate data to next available version 3.1.0
                        // and update the PRODUCT_VERSION column of the row to 3.1.0
                        dataProductVersion = "3.1.0";
                    } else if ("3.1.0".equals(dataProductVersion)) {
                        // Analogous (assumed) step: migrate data to 3.2.0 and
                        // update the PRODUCT_VERSION column of the row to 3.2.0
                        dataProductVersion = "3.2.0";
                    }
                }
            }
        } catch (SQLException e) {
            // Wrapping the SQLException is an assumed MigrationException constructor
            throw new MigrationException("Error while migrating API data", e);
        }
        return api;
    }
}


Re: [Architecture] Design First APIs on APIM v3.0

2018-08-15 Thread Uvindra Dias Jayasinha
+1 for the approach suggested by NuwanD; this gives us a best-of-both-worlds
scenario, separating user code from generated code.

I believe we can even handle Roshan's requirement for non-technical users
in this way. Uploading Ballerina functions that will be associated with a
given resource can be an optional step (for injecting custom API logic). By
default an API will simply act as a pass-through API proxy for the endpoint
that it is fronting. This allows non-technical users to define API proxies
for their existing APIs without writing any code. For all other
scenarios (mediation, transformation, implementing the API on the Ballerina
gateway itself, etc.) you will need to get your hands dirty.


On 15 August 2018 at 20:12, Nuwan Dias  wrote:

> Hi,
>
> This is regarding the implementation of APIs on API Manager v3.0 using a
> Design First Approach. The API Publisher Portal of API Manager will have
> the UI constructs required to design the interface of the API including the
> resource paths, verbs, base paths, etc. This version of API Manager uses
> Ballerina as its API implementation language. When it comes to implementing
> the API logic, our plan was to allow users to edit the full Ballerina
> source of the corresponding API. The sequence of actions in performing this
> operation would be as follows.
>
> 1. User creates API definition (interface) using UI.
> 2. Server code gens the Ballerina source corresponding to the above
> definition.
> 3. User edits the Ballerina source of the corresponding resources.
>
> Since this leads to a situation where the user edits auto generated
> code, and re-auto generation leads to complications related to merging auto
> generated code with user written code, this has led to rethinking how we
> want to enable implementation of APIs from the API publisher.
>
> An alternative approach is to let the user implement Ballerina functions
> offline and upload them to the API Manager. These functions should be
> written according to a function template (signature). They can then be
> linked to a particular resource. A function each can be linked for
> processing the request and for processing the response. This is similar to
> the model that we have today of uploading sequences for a given API, but
> more powerful in terms of capability due to how the Gateway engine behaves
> (Synapse vs Ballerina). This way a user does not edit auto generated code.
> The code gen process would link the auto generated code with the user
> uploaded code and create a consolidated Ballerina source to be deployed on
> the Gateways.
>
> Another advantage here is that this model does not require us to have a
> web based Ballerina editor. Users defining functions could use any editor
> supported by Ballerina offline.
>
> I am opening up this idea for thoughts and suggestions.
>
> Thanks,
> NuwanD.
>
> --
> Nuwan Dias
>
> Director - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM 3.0.0] [Store] Self-sign up and Change password features

2018-08-01 Thread Uvindra Dias Jayasinha
If we are to have a way of throttling our REST APIs via the product's own
capabilities, it would mean that we have to expose these APIs through the
gateway itself, which might not be the most desirable option.

A simpler solution would be to achieve rate limiting via an LB of some sort
that fronts the REST APIs. This is much easier to achieve and a lot less
complicated.

On 1 August 2018 at 11:26, Ishara Cooray  wrote:

> Yes , As the offline discussion had with Uvindra, We could avoid
> exploiting the access token issued for the self-signup scenario by adding
> captcha + token revoke mechanism, So they can't reuse the same access token
> once it is used for self-signup, and to get new access token anonymous user
> has to pass the captcha challenge. But still, other product REST APIs are
> vulnerable to DOS attacks since once the user gets an access token by login
> through the UI, it can be used to make a DOS attack. So, in general, we
> would need to introduce throttling policy for product wide REST APIs.
>
> +1 throttling policy for product wide REST APIs.
>
> Thanks & Regards,
> Ishara Cooray
> Senior Software Engineer
> Mobile : +9477 262 9512
> WSO2, Inc. | http://wso2.com/
> Lean . Enterprise . Middleware
>
> On Wed, Aug 1, 2018 at 11:19 AM, Uvindra Dias Jayasinha 
> wrote:
>
>> +1, we need to have default throttling policies for all our REST APIs
>>
>> On 1 August 2018 at 11:17, Kasun Thennakoon  wrote:
>>
>>> Hi Ishara,
>>>
>>> Yes , As the offline discussion had with Uvindra, We could avoid
>>> exploiting the access token issued for the self-signup scenario by adding
>>> captcha + token revoke mechanism, So they can't reuse the same access token
>>> once it is used for self-signup, and to get new access token anonymous user
>>> has to pass the captcha challenge. But still, other product REST APIs are
>>> vulnerable to DOS attacks since once the user gets an access token by login
>>> through the UI, it can be used to make a DOS attack. So, in general, we
>>> would need to introduce throttling policy for product wide REST APIs.
>>>
>>> Thanks
>>> ~KasunTe
>>>
>>> On Wed, Aug 1, 2018 at 11:06 AM Ishara Cooray  wrote:
>>>
>>>> So in this case there are two tokens. One for the sign up that is
>>>> obtained using client credentials that only has the scope for accessing the
>>>> sign up resource. The other is the one obtained from the password grant
>>>> type that is used else where. I don't see a need to immediately revoke the
>>>> token used for the sign up invocation(it can only be used for signing up),
>>>> is there any specific concern you have regarding this?
>>>>
>>>> I was thinking that if this signup token is stolen, one can onboard
>>>> users to the system, which will lead to a potential attack. Isn't it?
>>>> Of course, if we can have captcha validation we can mitigate this.
>>>>
>>>>
>>>> Thanks & Regards,
>>>> Ishara Cooray
>>>> Senior Software Engineer
>>>> Mobile : +9477 262 9512
>>>> WSO2, Inc. | http://wso2.com/
>>>> Lean . Enterprise . Middleware
>>>>
>>>> On Wed, Aug 1, 2018 at 10:48 AM, Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 1 August 2018 at 09:36, Ishara Cooray  wrote:
>>>>>
>>>>>> To obtain an access token using the client credentials grant we need
>>>>>> to store client id and client secret.
>>>>>> How are we going to store it so that it cannot be stolen?
>>>>>>
>>>>>
>>>>>
>>>>> We need the client id and secret for the password grant type as well
>>>>> which are using for all other calls. We have addressed this security
>>>>> concern already by storing the client id and secret on the server side as
>>>>> discussed in the mail thread[1]
>>>>>
>>>>> [1] API Manager UI - Storing access token in Cookie
>>>>>
>>>>>
>>>>>> Also, I think it is better if we revoke the token as the user is
>>>>>> signed up. So each sign up will need to obtain a new access token.
>>>>>>
>>>>>
>>>>> So in this case there are two tokens. One for the sign up that is
>>>>> obtained using client credentials that only has the scope for accessing 
>>>>> the
>>>>

Re: [Architecture] [APIM 3.0.0] [Store] Self-sign up and Change password features

2018-07-31 Thread Uvindra Dias Jayasinha
+1, we need to have default throttling policies for all our REST APIs

On 1 August 2018 at 11:17, Kasun Thennakoon  wrote:

> Hi Ishara,
>
> Yes , As the offline discussion had with Uvindra, We could avoid
> exploiting the access token issued for the self-signup scenario by adding
> captcha + token revoke mechanism, So they can't reuse the same access token
> once it is used for self-signup, and to get new access token anonymous user
> has to pass the captcha challenge. But still, other product REST APIs are
> vulnerable to DOS attacks since once the user gets an access token by login
> through the UI, it can be used to make a DOS attack. So, in general, we
> would need to introduce throttling policy for product wide REST APIs.
>
> Thanks
> ~KasunTe
>
> On Wed, Aug 1, 2018 at 11:06 AM Ishara Cooray  wrote:
>
>> So in this case there are two tokens. One for the sign up that is
>> obtained using client credentials that only has the scope for accessing the
>> sign up resource. The other is the one obtained from the password grant
>> type that is used else where. I don't see a need to immediately revoke the
>> token used for the sign up invocation(it can only be used for signing up),
>> is there any specific concern you have regarding this?
>>
>> I was thinking that if this signup token is stolen, one can onboard users
>> to the system, which will lead to a potential attack. Isn't it?
>> Of course, if we can have captcha validation we can mitigate this.
>>
>>
>> Thanks & Regards,
>> Ishara Cooray
>> Senior Software Engineer
>> Mobile : +9477 262 9512
>> WSO2, Inc. | http://wso2.com/
>> Lean . Enterprise . Middleware
>>
>> On Wed, Aug 1, 2018 at 10:48 AM, Uvindra Dias Jayasinha > > wrote:
>>
>>>
>>>
>>> On 1 August 2018 at 09:36, Ishara Cooray  wrote:
>>>
>>>> To obtain an access token using the client credentials grant we need to
>>>> store client id and client secret.
>>>> How are we going to store it so that it cannot be stolen?
>>>>
>>>
>>>
>>> We need the client id and secret for the password grant type as well
>>> which are using for all other calls. We have addressed this security
>>> concern already by storing the client id and secret on the server side as
>>> discussed in the mail thread[1]
>>>
>>> [1] API Manager UI - Storing access token in Cookie
>>>
>>>
>>>> Also, I think it is better if we revoke the token as the user is signed
>>>> up. So each sign up will need to obtain a new access token.
>>>>
>>>
>>> So in this case there are two tokens. One for the sign up that is
>>> obtained using client credentials that only has the scope for accessing the
>>> sign up resource. The other is the one obtained from the password grant
>>> type that is used else where. I don't see a need to immediately revoke the
>>> token used for the sign up invocation(it can only be used for signing up),
>>> is there any specific concern you have regarding this?
>>>
>>>>
>>>>
>>>>
>>>>
>>>> Thanks & Regards,
>>>> Ishara Cooray
>>>> Senior Software Engineer
>>>> Mobile : +9477 262 9512
>>>> WSO2, Inc. | http://wso2.com/
>>>> Lean . Enterprise . Middleware
>>>>
>>>> On Tue, Jul 31, 2018 at 3:21 PM, Vithursa Mahendrarajah <
>>>> vithu...@wso2.com> wrote:
>>>>
>>>>> + [architecture]
>>>>>
>>>>> On Tue, Jul 31, 2018 at 12:55 PM Kasun Thennakoon 
>>>>> wrote:
>>>>>
>>>>>> Hi Rukshan,
>>>>>>
>>>>>> This is the current flow
>>>>>>
>>>>>> [image: image.png]
>>>>>>
>>>>>> So how we restricted this token, talk only to signup api? with
>>>>>>> scopes??
>>>>>>>
>>>>>> Yes we get an access token for self signup scope only
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>> ~KasunTe
>>>>>>
>>>>>>
>>>>>> On Tue, Jul 31, 2018 at 11:21 AM Rukshan Premathunga <
>>>>>> ruks...@wso2.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Jul 31, 2018 at 11:12 AM, Uvindra Dias Jayasinha <
>>>>>>> uvin...@wso2.com> wrote:

Re: [Architecture] [APIM 3.0.0] [Store] Self-sign up and Change password features

2018-07-31 Thread Uvindra Dias Jayasinha
On 1 August 2018 at 09:36, Ishara Cooray  wrote:

> To obtain an access token using the client credentials grant we need to
> store client id and client secret.
> How are we going to store it so that it cannot be stolen?
>


We need the client id and secret for the password grant type as well, which
we are using for all other calls. We have addressed this security concern
already by storing the client id and secret on the server side, as discussed
in the mail thread[1].

[1] API Manager UI - Storing access token in Cookie


> Also, I think it is better if we revoke the token as the user is signed
> up. So each sign up will need to obtain a new access token.
>

So in this case there are two tokens: one for the sign up, obtained using
client credentials, that only has the scope for accessing the sign up
resource; the other obtained from the password grant type, which is used
elsewhere. I don't see a need to immediately revoke the token used for the
sign up invocation (it can only be used for signing up), is there any
specific concern you have regarding this?
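
As a rough illustration of the flow above, the signup-scoped token could be
obtained with a plain client_credentials call such as the following sketch
(the endpoint path, scope name, and credentials are assumptions for
illustration, not the product's actual values):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SignupTokenExample {

    public static void main(String[] args) throws Exception {
        // Hypothetical token endpoint and app credentials (assumptions only)
        URL tokenUrl = new URL("https://localhost:9443/oauth2/token");
        String credentials = Base64.getEncoder()
                .encodeToString("clientId:clientSecret".getBytes(StandardCharsets.UTF_8));

        // client_credentials grant restricted to a signup-only scope, so the
        // resulting token cannot be used against other REST APIs
        byte[] body = "grant_type=client_credentials&scope=apim:self_signup"
                .getBytes(StandardCharsets.UTF_8);

        HttpURLConnection conn = (HttpURLConnection) tokenUrl.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        System.out.println("Token endpoint responded: " + conn.getResponseCode());
    }
}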

>
>
>
>
> Thanks & Regards,
> Ishara Cooray
> Senior Software Engineer
> Mobile : +9477 262 9512
> WSO2, Inc. | http://wso2.com/
> Lean . Enterprise . Middleware
>
> On Tue, Jul 31, 2018 at 3:21 PM, Vithursa Mahendrarajah  > wrote:
>
>> + [architecture]
>>
>> On Tue, Jul 31, 2018 at 12:55 PM Kasun Thennakoon 
>> wrote:
>>
>>> Hi Rukshan,
>>>
>>> This is the current flow
>>>
>>> [image: image.png]
>>>
>>> So how we restricted this token, talk only to signup api? with scopes??
>>>>
>>> Yes we get an access token for self signup scope only
>>>
>>>
>>> Thanks
>>> ~KasunTe
>>>
>>>
>>> On Tue, Jul 31, 2018 at 11:21 AM Rukshan Premathunga 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Jul 31, 2018 at 11:12 AM, Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 31 July 2018 at 10:57, Rukshan Premathunga 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jul 31, 2018 at 10:57 AM, Rukshan Premathunga <
>>>>>> ruks...@wso2.com> wrote:
>>>>>>
>>>>>>> in sigin up case, if you take a token to talk to signup api, is it
>>>>>>> also store in the browser?
>>>>>>>
>>>>>> * in signup case, if you take a token to talk to signup api, is it
>>>>>> also store in the browser?
>>>>>>
>>>>>
>>>>> In this case, Yes. Since there is no user involved yet(user has not
>>>>> got registered yet), it is the store that is making this call on behalf of
>>>>> the user so that they can get registered.
>>>>>
>>>> So how we restricted this token, talk only to signup api? with scopes??
>>>>
>>>>>
>>>>>>> On Tue, Jul 31, 2018 at 10:26 AM, Fazlan Nazeem 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Yes, since the client secret will not be known to the end users
>>>>>>>> there is no threat in adding client_credentials grant to the store app.
>>>>>>>>
>>>>>>>> On Tue, Jul 31, 2018 at 10:18 AM Uvindra Dias Jayasinha <
>>>>>>>> uvin...@wso2.com> wrote:
>>>>>>>>
>>>>>>>>> +1 for option 1, adding the client credentials capability to the
>>>>>>>>> store app makes sense to support this use case.
>>>>>>>>>
>>>>>>>>> On 31 July 2018 at 10:06, Kasun Thennakoon 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Vithursa,
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> In my opinion
>>>>>>>>>>
>>>>>>>>>> *Option-1: *Adding *client_credentials* grant type to existing
>>>>>>>>>>> application
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> option-1 would be more appropriate here, other than maintaining a
>>>>>>>>>> separate OAuth app for the self sign-up feature.
>>>>>>>>>>

Re: [Architecture] [APIM v3] Base path for /userinfo endpoint

2018-03-28 Thread Uvindra Dias Jayasinha
+Sagara, Johann

On 29 March 2018 at 10:57, Uvindra Dias Jayasinha <uvin...@wso2.com> wrote:

> I'm in favour of having userinfo separate from the default oauth2 service
> since it's a different concern altogether. I'm not sure of the reason why
> the IS team originally included userinfo as part of their oauth service.
>
> So +1 for option 2
>
>
>
> On 28 March 2018 at 12:46, Pubudu Gunatilaka <pubu...@wso2.com> wrote:
>
>> Hi,
>>
>> Userinfo endpoint comes under OpenID connect. Basically, OpenId is about
>> authentication and OAuth is about authorization. Currently, we have
>> /userinfo endpoint under oauth2 [1].
>>
>> *Available Options:*
>>
>> 1. Use /userinfo endpoint under oauth2.
>>
>> In APIM v3 Key Manager, base path for oauth2 is
>> /api/auth/oauth2/v1.0. By adding this resource, we are allowing OAuth2
>> endpoint for authentication and authorization.
>>
>> 2. Introduce new base path for /userinfo endpoint as it comes under
>> OpenID Connect. The OAuth2 spec does not explain the userinfo endpoint.
>>
>> Suggestions:
>> /api/auth/connect/v1.0/userinfo
>>
>> Appreciate your thoughts?
>>
>> [1] - https://docs.wso2.com/display/IS450/OpenID+Connect+Basic+Cli
>> ent+Profile+with+WSO2+Identity+Server
>>
>> Thank you!
>> --
>> *Pubudu Gunatilaka*
>> Committer and PMC Member - Apache Stratos
>> Senior Software Engineer
>> WSO2, Inc.: http://wso2.com
>> mobile : +94774078049
>>
>>
>
>
> --
> Regards,
> Uvindra
>
> Mobile: 33962
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM v3] Base path for /userinfo endpoint

2018-03-28 Thread Uvindra Dias Jayasinha
I'm in favour of having userinfo separate from the default oauth2 service
since its a different concern altogether. Im not sure the reason behind why
the IS team originally included userinfo as part of their oauth service.

So +1 for option 2



On 28 March 2018 at 12:46, Pubudu Gunatilaka  wrote:

> Hi,
>
> Userinfo endpoint comes under OpenID connect. Basically, OpenId is about
> authentication and OAuth is about authorization. Currently, we have
> /userinfo endpoint under oauth2 [1].
>
> *Available Options:*
>
> 1. Use /userinfo endpoint under oauth2.
>
> In APIM v3 Key Manager, base path for oauth2 is /api/auth/oauth2/v1.0.
> By adding this resource, we are allowing OAuth2 endpoint for authentication
> and authorization.
>
> 2. Introduce new base path for /userinfo endpoint as it comes under OpenID
> Connect. The OAuth2 spec does not explain the userinfo endpoint.
>
> Suggestions:
> /api/auth/connect/v1.0/userinfo
>
> Appreciate your thoughts?
>
> [1] - https://docs.wso2.com/display/IS450/OpenID+Connect+Basic+
> Client+Profile+with+WSO2+Identity+Server
>
> Thank you!
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Architechture] [APIM] Label based Store nodes for API Manager

2018-03-28 Thread Uvindra Dias Jayasinha
Can I suggest that we rename the label types we have come up with at the
moment?

Right now we have *Gateway* and *Store* label types. These names are
tightly coupled to the high level product components that they are designed
to work with, and do not communicate their proper intent and purpose. The
lower layers of the product (example: the DAO layer) should not be aware of
the higher level components.

We can rename as follows,

Gateway label > Deployment label (since it communicates where the
associated API will be deployed)
Store label > View label (since it communicates that this is related to
the ability to view the associated API. This functionality happens to be
used to implement publishing to multiple stores)

This communicates intent better and will remove direct coupling to the
higher levels.

WDYT?

On 22 June 2017 at 14:13, Praminda Jayawardana  wrote:

> Hi All,
>
> I've updated the design as per the discussion in this thread.
>
>
>
> [image: Inline image 2]
>
>
>
> With this design API Publisher will be asked only to select one label (for
> both gateway and store). Which combination of gateway and store is applied
> for this particular API, depends on the label's scope. If label only has
> GATEWAY scope this API will be deployed on the gateway with selected label
> but it will not be attached with a specific store (will be visible in
> default store). If label has both GATEWAY and STORE scopes API will be
> deployed on a specific gateway and a store specified by that label.
> When selecting label's, API Publisher will only see the labels he/she has
> access depending on the label permissions. Also dynamic label registration
> will not be available anymore as we need intermediate step of permission
> setting in label creation process.
>
> Thanks,
> Praminda
>
>
> On Wed, Jun 21, 2017 at 3:43 PM, Praminda Jayawardana 
> wrote:
>
>> +1 for attaching permission model with labels.
>>
>> As per current discussion we'll need following items added to our todo.
>>
>>1. Attach permission model with label.
>>2. admin APIs to manage(CRUD operations) labels. There will be a UI
>>in admin app.
>>
>> Is there any other things we need to consider?
>> Thanks,
>> Praminda
>>
>> On Wed, Jun 21, 2017 at 7:20 AM, Lakmal Warusawithana 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jun 20, 2017 at 11:16 PM, Pubudu Gunatilaka 
>>> wrote:
>>>
 Hi,

 We actually have a REST API to delete a given label. Also, for the
 initial version, we thought of simplifying the label scenario. In later
 releases, we can add an UI support to manage labels.

 I don't think we need to bring the permission model unless we provide
 gateways per user/group. In the store side, we have introduced an extension
 point to filter labels depends on the user.

 The story behind the label scenario is to simplify the dynamic gateway
 registration. For an example, consider a deployment which has multiple APIs
 deployed in public and private gateways. Now our requirement is to spin a
 new gateway in Singapore region and serve a particular API. What we need to
 do is to spin a gateway in that region with the new label and re-publish
 the API with the new label. We have used labels to manage gateways in this
 scenario.

 If we are not allowing dynamic label registration, we can have admin
 API for labels. Yes, we can ship default labels. But when we need to bring
 up a new gateway, we have to make the REST calls to register the gateway.
 This is kind of a user interaction or maybe we can automate this based on
 the use case. But with dynamic label registration, we don't need to do
 anything specifically. If we consider a container scenario, we only need to
 pass an environment variable with the new label name to the container to
 spin up a new gateway container in a new region.


>>> IMO, but when we come to displaying labels in publisher, governance
>>> aspect going to be critical. If we going to add governance aspect, dynamic
>>> label registration has minimal advantage. In that case anyhow it need to go
>>> through the some kind of permission model.
>>>
>>>
>>>
 Thank you!
 --
 *Pubudu Gunatilaka*
 Committer and PMC Member - Apache Stratos
 Software Engineer
 WSO2, Inc.: http://wso2.com
mobile : +94774078049


 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


>>>
>>>
>>> --
>>> Lakmal Warusawithana
>>> Director - Cloud Architecture; WSO2 Inc.
>>> Mobile : +94714289692
>>> Blogs : https://medium.com/@lakwarus/
>>> http://lakmalsview.blogspot.com/
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> 

Re: [Architecture] [APIM 3.0.0] get rid of relational databases (MySQL, ...)

2018-02-20 Thread Uvindra Dias Jayasinha
Hi Youcef,

Thanks for your feedback about the product.

Regarding getting rid of relational databases, we really haven't explored
this possibility ourselves.

Technically APIM 3.0.0 uses a set of DAO interfaces to access its
persistence layer. These DAO interfaces, such as ApiDAO.java,
ApplicationDAO.java, etc., can be found here[1]. The actual implementations
of these interfaces that talk to the relational databases can be found in
the impl package one level below[2]. The rest of the product code is not
concerned with the implementation of the DAO interfaces, so theoretically
you can write your own implementation and wire it in through the DAOFactory
class that's used to construct an instance of each DAO implementation.

Practically, I'm not sure how you could maintain the relationships of the
various entities in a NoSQL DB (this is the very reason why we use a
relational DB implementation by default). You are free to try this out on
your end and let us know how it goes.

[1]
https://github.com/wso2/carbon-apimgt/tree/master/components/apimgt/org.wso2.carbon.apimgt.core/src/main/java/org/wso2/carbon/apimgt/core/dao
[2]
https://github.com/wso2/carbon-apimgt/tree/master/components/apimgt/org.wso2.carbon.apimgt.core/src/main/java/org/wso2/carbon/apimgt/core/dao/impl
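
To illustrate the idea, a very rough sketch of what a custom NoSQL-backed
implementation might look like (the single-method ApiDAO here is a drastic
simplification of the real interface in [1], and the CassandraClient type
is a placeholder, not a real driver API):

public class CustomDaoExample {

    // Placeholder for a real NoSQL driver session (assumption for illustration)
    interface CassandraClient {
        String fetchColumn(String table, String rowKey, String column);
    }

    // Drastically simplified stand-in for the real ApiDAO interface in [1];
    // the single method here is an assumption for illustration only.
    interface ApiDAO {
        String getApiNameById(String apiId) throws Exception;
    }

    // Hypothetical NoSQL-backed implementation that could be wired in through
    // the DAOFactory in place of the default relational implementation.
    static class CassandraApiDAO implements ApiDAO {
        private final CassandraClient client;

        CassandraApiDAO(CassandraClient client) {
            this.client = client;
        }

        @Override
        public String getApiNameById(String apiId) throws Exception {
            // Relational JOINs used by the default impl would have to be
            // replaced by denormalized lookups in a NoSQL store.
            return client.fetchColumn("am_api", apiId, "name");
        }
    }

    public static void main(String[] args) throws Exception {
        // Stub client standing in for a real Cassandra session
        ApiDAO dao = new CassandraApiDAO((table, rowKey, column) -> "abc");
        System.out.println(dao.getApiNameById("123e4567")); // prints "abc"
    }
}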


On 20 February 2018 at 14:13, Youcef HILEM  wrote:

> Hi,
> First of all, thank you very much for this excellent product.
>
> I am preparing an infrastructure for APIM 3.0.0 in multi-dacenter active /
> active configuration.
>
> I want to get rid of relational databases (MySQL, ...).
>
> Is it possible ? if so, could you please give me the outline to follow to
> use only the NoSQL Cassandra database?
>
> Thank you in advance.
> Youcef HILEM
>
>
>
> --
> Sent from: http://wso2-oxygen-tank.10903.n7.nabble.com/WSO2-
> Architecture-f62919.html
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Clearly defining what operations users can perform on a shared application in APIM

2018-02-13 Thread Uvindra Dias Jayasinha
On 13 February 2018 at 18:02, Chamin Dias <cham...@wso2.com> wrote:

> Hi,
>
> Can we promote a "shared user" to "admin shared user" (and vice versa)? Is
> it supported in this feature?
>
> Thanks.
>
>
@Chamin, there is no such thing as an admin shared user. We do have a
facility for changing the owner of an application though[1].

[1] https://docs.wso2.com/display/AM2xx/apidocs/admin/#!
/operations#Application#applicationsApplicationIdChangeOwnerPost


>
> On Tue, Feb 13, 2018 at 3:51 PM, Harsha Kumara <hars...@wso2.com> wrote:
>
>> @Sanjeewa, Uvindra can we actually prevent it? Basically we can hide it
>> from UI. But since he know the consumer key and secret, he can simply
>> revoke and regenerate the token.
>>
>> On Thu, Feb 8, 2018 at 2:57 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
>> wrote:
>>
>>> Yes we can safely prevent shared users from regenerating access tokens
>>> of Apps that they are not owners of. This ideally shouldnt be an issue
>>> since Apps should have provision to regenerate a token if required.
>>>
>>> On 8 February 2018 at 14:23, Sanjeewa Malalgoda <sanje...@wso2.com>
>>> wrote:
>>>
>>>> Can shared users generate keys for the application? After first time if
>>>> one user regenerate application access key then it will effect others as we
>>>> revoke and generate application token.
>>>> I think regenerate option and application access token visibility also
>>>> should remove for above shared users. I think generate token with resource
>>>> owner grant by non app owner may cause issues.
>>>>
>>>> Thanks,
>>>> sanjeewa.
>>>>
>>>> On Wed, Feb 7, 2018 at 11:57 AM, Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>
>>>>> +1 Agreed with Nuwan about how subscriptions should be handled
>>>>>
>>>>>
>>>>> Regarding the behavior of the Admin shared user, seems this is not
>>>>> required because we already have an Admin REST API to change Application
>>>>> ownership available in 2.2.0[1] as discussed in the mail thread[2]. This
>>>>> addresses the requirement of what would happen if an App owner leaves the
>>>>> organization. So we will only address the App Owner and Shared User
>>>>> experience.
>>>>>
>>>>> [1]https://docs.wso2.com/display/AM2xx/apidocs/admin/#!/oper
>>>>> ations#Application#applicationsApplicationIdChangeOwnerPost
>>>>> [2][C4[]APIM] REST API for changing Owner of a Application
>>>>>
>>>>> On 7 February 2018 at 11:18, Nuwan Dias <nuw...@wso2.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Feb 7, 2018 at 11:14 AM, Uvindra Dias Jayasinha <
>>>>>> uvin...@wso2.com> wrote:
>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> It seems that currently we do not have a clear definition in
>>>>>>> regarding what users can do with shared applications. This has been
>>>>>>> highlighted in[1] and the plan is to address this as part of the APIM 
>>>>>>> 2.2.0
>>>>>>> release.
>>>>>>>
>>>>>>> There are two types of users, the *App owner* who creates the App
>>>>>>> and the *shared user* who is able to view the App that is shared
>>>>>>> with them by the App owner.
>>>>>>>
>>>>>>> *Current issues*
>>>>>>> 1. Product allows shared users to attempt updating Apps that are not
>>>>>>> owned by them, which leads to errors because they do not have the 
>>>>>>> required
>>>>>>> permissions.
>>>>>>>
>>>>>>> 2. Product allows shared users to delete Apps that are not owned by
>>>>>>> them which violate the Application ownership concept.
>>>>>>>
>>>>>>> The plan to address this is as follows
>>>>>>>
>>>>>>> *Solution*
>>>>>>> 1. *App Owner *: Has ability to delete/update Apps owned by them.
>>>>>>>
>>>>>>> 2. *Shared user*: Has only Read only access to Apps shared with
>>>>>>> them(cannot delete/update).
>>>>>>> Deletion and updation

Re: [Architecture] Clearly defining what operations users can perform on a shared application in APIM

2018-02-13 Thread Uvindra Dias Jayasinha
@Harsha, yes, there is no way to truly prevent shared users from invoking
the token endpoint and thereby revoking the access token. But as discussed,
since tokens are not hard coded in applications this is not a concern.
Apps should be refreshing their tokens from time to time anyway.

On 13 February 2018 at 15:51, Harsha Kumara <hars...@wso2.com> wrote:

> @Sanjeewa, Uvindra can we actually prevent it? Basically we can hide it
> from UI. But since he know the consumer key and secret, he can simply
> revoke and regenerate the token.
>
> On Thu, Feb 8, 2018 at 2:57 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Yes we can safely prevent shared users from regenerating access tokens of
>> Apps that they are not owners of. This ideally shouldnt be an issue since
>> Apps should have provision to regenerate a token if required.
>>
>> On 8 February 2018 at 14:23, Sanjeewa Malalgoda <sanje...@wso2.com>
>> wrote:
>>
>>> Can shared users generate keys for the application? After first time if
>>> one user regenerate application access key then it will effect others as we
>>> revoke and generate application token.
>>> I think regenerate option and application access token visibility also
>>> should remove for above shared users. I think generate token with resource
>>> owner grant by non app owner may cause issues.
>>>
>>> Thanks,
>>> sanjeewa.
>>>
>>> On Wed, Feb 7, 2018 at 11:57 AM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> +1 Agreed with Nuwan about how subscriptions should be handled
>>>>
>>>>
>>>> Regarding the behavior of the Admin shared user, seems this is not
>>>> required because we already have an Admin REST API to change Application
>>>> ownership available in 2.2.0[1] as discussed in the mail thread[2]. This
>>>> addresses the requirement of what would happen if an App owner leaves the
>>>> organization. So we will only address the App Owner and Shared User
>>>> experience.
>>>>
>>>> [1]https://docs.wso2.com/display/AM2xx/apidocs/admin/#!/oper
>>>> ations#Application#applicationsApplicationIdChangeOwnerPost
>>>> [2][C4[]APIM] REST API for changing Owner of a Application
>>>>
>>>> On 7 February 2018 at 11:18, Nuwan Dias <nuw...@wso2.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Feb 7, 2018 at 11:14 AM, Uvindra Dias Jayasinha <
>>>>> uvin...@wso2.com> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> It seems that currently we do not have a clear definition in
>>>>>> regarding what users can do with shared applications. This has been
>>>>>> highlighted in[1] and the plan is to address this as part of the APIM 
>>>>>> 2.2.0
>>>>>> release.
>>>>>>
>>>>>> There are two types of users, the *App owner* who creates the App
>>>>>> and the *shared user* who is able to view the App that is shared
>>>>>> with them by the App owner.
>>>>>>
>>>>>> *Current issues*
>>>>>> 1. Product allows shared users to attempt updating Apps that are not
>>>>>> owned by them, which leads to errors because they do not have the 
>>>>>> required
>>>>>> permissions.
>>>>>>
>>>>>> 2. Product allows shared users to delete Apps that are not owned by
>>>>>> them which violate the Application ownership concept.
>>>>>>
>>>>>> The plan to address this is as follows
>>>>>>
>>>>>> *Solution*
>>>>>> 1. *App Owner *: Has ability to delete/update Apps owned by them.
>>>>>>
>>>>>> 2. *Shared user*: Has only Read only access to Apps shared with
>>>>>> them(cannot delete/update).
>>>>>> Deletion and updation of Apps will be restricted at API Store UI
>>>>>> level. App ownership will be   checked before performing App 
>>>>>> update/delete
>>>>>> from server side in  order to   enforce this for REST API calls
>>>>>>
>>>>>
>>>>> Shared user needs to view, remove and add subscriptions too IMO.
>>>>>
>>>>>>
>>>>>> 3 *Admin shared user* : Has ability to delete/update Apps shared
>>>

Re: [Architecture] Clearly defining what operations users can perform on a shared application in APIM

2018-02-08 Thread Uvindra Dias Jayasinha
Yes, we can safely prevent shared users from regenerating access tokens of
Apps that they are not owners of. This ideally shouldn't be an issue since
Apps should have provision to regenerate a token if required.

On 8 February 2018 at 14:23, Sanjeewa Malalgoda <sanje...@wso2.com> wrote:

> Can shared users generate keys for the application? After first time if
> one user regenerate application access key then it will effect others as we
> revoke and generate application token.
> I think regenerate option and application access token visibility also
> should remove for above shared users. I think generate token with resource
> owner grant by non app owner may cause issues.
>
> Thanks,
> sanjeewa.
>
> On Wed, Feb 7, 2018 at 11:57 AM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> +1 Agreed with Nuwan about how subscriptions should be handled
>>
>>
>> Regarding the behavior of the Admin shared user, seems this is not
>> required because we already have an Admin REST API to change Application
>> ownership available in 2.2.0[1] as discussed in the mail thread[2]. This
>> addresses the requirement of what would happen if an App owner leaves the
>> organization. So we will only address the App Owner and Shared User
>> experience.
>>
>> [1]https://docs.wso2.com/display/AM2xx/apidocs/admin/#!/
>> operations#Application#applicationsApplicationIdChangeOwnerPost
>> [2][C4[]APIM] REST API for changing Owner of a Application
>>
>> On 7 February 2018 at 11:18, Nuwan Dias <nuw...@wso2.com> wrote:
>>
>>>
>>>
>>> On Wed, Feb 7, 2018 at 11:14 AM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> It seems that currently we do not have a clear definition in regarding
>>>> what users can do with shared applications. This has been highlighted in[1]
>>>> and the plan is to address this as part of the APIM 2.2.0 release.
>>>>
>>>> There are two types of users, the *App owner* who creates the App and
>>>> the *shared user* who is able to view the App that is shared with them
>>>> by the App owner.
>>>>
>>>> *Current issues*
>>>> 1. Product allows shared users to attempt updating Apps that are not
>>>> owned by them, which leads to errors because they do not have the required
>>>> permissions.
>>>>
>>>> 2. Product allows shared users to delete Apps that are not owned by
>>>> them which violate the Application ownership concept.
>>>>
>>>> The plan to address this is as follows
>>>>
>>>> *Solution*
>>>> 1. *App Owner *: Has ability to delete/update Apps owned by them.
>>>>
>>>> 2. *Shared user*: Has only Read only access to Apps shared with
>>>> them(cannot delete/update).
>>>> Deletion and updation of Apps will be restricted at API Store UI level.
>>>> App ownership will be   checked before performing App update/delete from
>>>> server side in  order to   enforce this for REST API calls
>>>>
>>>
>>> Shared user needs to view, remove and add subscriptions too IMO.
>>>
>>>>
>>>> 3 *Admin shared user* : Has ability to delete/update Apps shared with
>>>> them. The reason for this is to address practical issues that take place
>>>> when the App owner leaves an organization and there needs to be some way to
>>>> delete/update such an Application.
>>>>
>>>
>>> +1
>>>
>>>>
>>>>
>>>> Please give your feedback on the above.
>>>>
>>>>
>>>> [1] https://github.com/wso2/product-apim/issues/2690
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>
>>>
>>>
>>> --
>>> Nuwan Dias
>>>
>>> Software Architect - WSO2, Inc. http://wso2.com
>>> email : nuw...@wso2.com
>>> Phone : +94 777 775 729
>>>
>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>
>
>
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> <http://sanjeewamalalgoda.blogspot.com/>blog :http://sanjeewamalalgoda.
> blogspot.com/ <http://sanjeewamalalgoda.blogspot.com/>
>
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Clearly defining what operations users can perform on a shared application in APIM

2018-02-06 Thread Uvindra Dias Jayasinha
+1, agreed with Nuwan about how subscriptions should be handled.


Regarding the behavior of the Admin shared user, it seems this is not
required because we already have an Admin REST API to change Application
ownership, available in 2.2.0[1], as discussed in the mail thread[2]. This
addresses the requirement of what would happen if an App owner leaves the
organization. So we will only address the App Owner and Shared User
experience.

[1]
https://docs.wso2.com/display/AM2xx/apidocs/admin/#!/operations#Application#applicationsApplicationIdChangeOwnerPost
[2] [C4][APIM] REST API for changing Owner of an Application

On 7 February 2018 at 11:18, Nuwan Dias <nuw...@wso2.com> wrote:

>
>
> On Wed, Feb 7, 2018 at 11:14 AM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Hi All,
>>
>> It seems that currently we do not have a clear definition in regarding
>> what users can do with shared applications. This has been highlighted in[1]
>> and the plan is to address this as part of the APIM 2.2.0 release.
>>
>> There are two types of users, the *App owner* who creates the App and
>> the *shared user* who is able to view the App that is shared with them
>> by the App owner.
>>
>> *Current issues*
>> 1. Product allows shared users to attempt updating Apps that are not
>> owned by them, which leads to errors because they do not have the required
>> permissions.
>>
>> 2. Product allows shared users to delete Apps that are not owned by them
>> which violate the Application ownership concept.
>>
>> The plan to address this is as follows
>>
>> *Solution*
>> 1. *App Owner *: Has ability to delete/update Apps owned by them.
>>
>> 2. *Shared user*: Has only Read only access to Apps shared with
>> them(cannot delete/update).
>> Deletion and updation of Apps will be restricted at API Store UI level.
>> App ownership will be   checked before performing App update/delete from
>> server side in  order to   enforce this for REST API calls
>>
>
> Shared user needs to view, remove and add subscriptions too IMO.
>
>>
>> 3 *Admin shared user* : Has ability to delete/update Apps shared with
>> them. The reason for this is to address practical issues that take place
>> when the App owner leaves an organization and there needs to be some way to
>> delete/update such an Application.
>>
>
> +1
>
>>
>>
>> Please give your feedback on the above.
>>
>>
>> [1] https://github.com/wso2/product-apim/issues/2690
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>
>
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Clearly defining what operations users can perform on a shared application in APIM

2018-02-06 Thread Uvindra Dias Jayasinha
Hi All,

It seems that currently we do not have a clear definition regarding what
users can do with shared applications. This has been highlighted in [1] and
the plan is to address this as part of the APIM 2.2.0 release.

There are two types of users: the *App owner* who creates the App and
the *shared user* who is able to view the App that is shared with them by
the App owner.

*Current issues*
1. Product allows shared users to attempt updating Apps that are not owned
by them, which leads to errors because they do not have the required
permissions.

2. Product allows shared users to delete Apps that are not owned by them,
which violates the Application ownership concept.

The plan to address this is as follows

*Solution*
1. *App Owner*: Has the ability to delete/update Apps owned by them.

2. *Shared user*: Has only read-only access to Apps shared with them
(cannot delete/update). Deletion and updating of Apps will be restricted at
the API Store UI level. App ownership will be checked on the server side
before performing an App update/delete, in order to enforce this for REST
API calls (see the sketch after this list).

3. *Admin shared user*: Has the ability to delete/update Apps shared with
them. The reason for this is to address practical issues that take place
when the App owner leaves an organization and there needs to be some way to
delete/update such an Application.
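
To make the server-side enforcement concrete, a minimal sketch of the
ownership check (class and method names here are assumptions for
illustration, not the actual implementation):

public class SharedAppAccessControl {

    // Simplified stand-ins for the real model/exception classes
    // (assumptions for illustration only).
    static class Application {
        final String owner;
        Application(String owner) { this.owner = owner; }
    }

    static class ForbiddenException extends Exception {
        ForbiddenException(String msg) { super(msg); }
    }

    // Shared users get read-only access: any update/delete call is rejected
    // unless the caller is the App owner.
    static void checkUpdatePermission(Application app, String username)
            throws ForbiddenException {
        if (!app.owner.equals(username)) {
            throw new ForbiddenException(
                    "Only the application owner can update or delete this application");
        }
    }

    public static void main(String[] args) throws Exception {
        Application app = new Application("alice");
        checkUpdatePermission(app, "alice"); // OK: the owner may update
        try {
            checkUpdatePermission(app, "bob"); // rejected: shared user is read-only
        } catch (ForbiddenException e) {
            System.out.println(e.getMessage());
        }
    }
}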


Please give your feedback on the above.


[1] https://github.com/wso2/product-apim/issues/2690
-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Did we thought about APIM 3.0.0 Audit log?

2017-08-30 Thread Uvindra Dias Jayasinha
There is a JIRA[1] for this that I moved to GitHub.


[1] https://wso2.org/jira/browse/APIMANAGER-5802

On 30 August 2017 at 11:07, Rukshan Premathunga  wrote:

> Hi all,
>
> In C4 we had a separate log file to audit API and application related stuff.
> Is this already done in AM 3?
>
> Thanks and Regards
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
> +94711822074 <+94%2071%20182%202074>
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM][C5] Key generation API changes in Store REST API

2017-06-29 Thread Uvindra Dias Jayasinha
+1, looks good

On 29 June 2017 at 12:27, Malintha Amarasinghe  wrote:

>
>
> On Thu, Jun 29, 2017 at 12:20 PM, Harsha Kumara  wrote:
>
>>
>>
>> On Thu, Jun 29, 2017 at 11:43 AM, Malintha Amarasinghe <
>> malint...@wso2.com> wrote:
>>
>>> Hi all,
>>>
>>> Bhathiya and I had a discussion about this and came up with the below
>>> approach regarding POST /provide-keys.
>>>
>>> 1.Creates a new resource in /keys collection providing the key type.
>>> (Similar to semi-manual client registration).
>>>
>>> POST  /applications/{applicationId}/keys
>>>
>>> *Request:*
>>>
>>> POST  /applications/876f8fd8-269a-41db-b1cf-e4efe8a8426d/keys
>>>
>>> {
>>>   "consumerKey": "",
>>>   "consumerSecret": "yyy",
>>>   "keyType": "PRODUCTION"
>>> }
>>>
>>> *Response*
>>>
>>> HTTP/1.1 201 CREATED
>>> Location: https://localhost:9292/api/am/store/v1/applications/876f8fd8
>>> -269a-41db-b1cf-e4efe8a8426d/keys/PRODUCTION
>>>
>>> {
>>>   "consumerKey": "xx",
>>>   "consumerSecret": "yyy",
>>>   "supportedGrantTypes": [
>>> "client-credentials", "password"
>>>   ],
>>>   "callbackUrl": "http://localhost/callback",
>>>   "keyType": "PRODUCTION"
>>> }
>>>
>>> Seems we are thinking of keyType as a resource. We will need to add a
>> validation for keyType at the implementation layer. +1 for the approach.
>>
> Yeah we will need a validation since the only allowed key types are
> PRODUCTION and SANDBOX.
>
>>
>>> 2. Get all keys
>>>
>>> GET /applications/{applicationId}/keys
>>>
>>>
>>> *Request:*
>>>
>>> GET /applications/876f8fd8-269a-41db-b1cf-e4efe8a8426d/keys
>>>
>>>
>>> *Response:*
>>>
>>> HTTP/1.1 200 OK
>>> {
>>> "count": 2,
>>> "items": [
>>>
>>> {
>>>   "consumerKey": "xx",
>>>   "consumerSecret": "yyy",
>>>   "supportedGrantTypes": [
>>> "client-credentials", "password"
>>>   ],
>>>   "callbackUrl": "http://localhost/callback",
>>>   "keyType": "PRODUCTION"
>>> },
>>>
>>> {
>>>   "consumerKey": "xx",
>>>   "consumerSecret": "yyy",
>>>   "supportedGrantTypes": [
>>> "client-credentials", "password"
>>>   ],
>>>   "callbackUrl": "http://localhost/callback",
>>>   "keyType": "SANDBOX"
>>> }
>>>
>>> ]
>>> }
>>>
>>>
>>> 3. Get a single key detail
>>>
>>> GET /applications/{applicationId}/keys/{keyType}
>>>
>>> *Request*
>>>
>>> GET /applications/876f8fd8-269a-41db-b1cf-e4efe8a8426d/keys/PRODUCTION
>>>
>>>
>>> *Response*
>>>
>>> HTTP/1.1 200 OK
>>>
>>> {
>>>   "consumerKey": "xx",
>>>   "consumerSecret": "yyy",
>>>   "supportedGrantTypes": [
>>> "client-credentials", "password"
>>>   ],
>>>   "callbackUrl": "http://localhost/callback",
>>>   "keyType": "PRODUCTION"
>>> }
>>>
>>> 4. Update a key
>>>
>>> PUT /applications/{applicationId}/keys/{keyType}
>>>
>>> *We will only allow updating supported grant types and callback URLs for
>>> individual keys.*
>>>
>>> *Request*
>>>
>>> PUT /applications/876f8fd8-269a-41db-b1cf-e4efe8a8426d/keys/PRODUCTION
>>>
>>>
>>> {
>>>   "supportedGrantTypes": [
>>> "client-credentials"
>>>   ],
>>>   "callbackUrl": "http://localhost/callback-updated"
>>> }
>>>
>>> *Response:*
>>>
>>> HTTP/1.1 200 OK
>>>
>>> {
>>>   "consumerKey": "xx",
>>>   "consumerSecret": "yyy",
>>>   "supportedGrantTypes": [
>>> "client-credentials"
>>>   ],
>>>   "callbackUrl": "http://localhost/callback-updated",
>>>   "keyType": "PRODUCTION"
>>> }
>>>
>>>
>>>
>>> Thanks
>>> Malintha
>>>
>>>
>>>
>>> On Wed, Jun 28, 2017 at 1:37 PM, Bhathiya Jayasekara 
>>> wrote:
>>>
 Hi all,

 As discussed in [1], I split generate keys operation into 2, and added
 "provide-keys" operation for semi-manual client registration. Here is the
 final list with sample requests and responses.


 POST  /applications/{applicationId}/generate-keys

 {
   "keyType": "PRODUCTION",
   "grantTypesToBeSupported": [
 "client-credentials", "password"
   ],
   "callbackUrl": "http://localhost/callback"}


 Response

 {
   "consumerKey": "xx",
   "consumerSecret": "yyy",
   "supportedGrantTypes": [
 "client-credentials", "password"
  ],
  "callbackUrl": "http://localhost/callback",
   "keyType": "PRODUCTION"}



 POST  /applications/{applicationId}/provide-keys

 {
   "consumerKey": "",
   "consumerSecret": "yyy",
   "keyType": "PRODUCTION"}


 Response

 {
   "consumerKey": "xx",
   "consumerSecret": "yyy",
   "supportedGrantTypes": [
 "client-credentials", "password"
  ],
  "callbackUrl": "http://localhost/callback",
   "keyType": "PRODUCTION"}



 POST  

Re: [Architecture] [MB] Best Approach to write unit tests for DAO Layer ?

2017-05-21 Thread Uvindra Dias Jayasinha
Just to be clear, no one is saying to replace unit testing with integration
testing. Just that some things are better handled as integration tests.

It is up to the MB team to decide what is the best approach that suits
their requirement. We have just shared what we are using for APIM.

On 21 May 2017 at 07:46, Pumudu Ruhunage <pum...@wso2.com> wrote:

> Hi,
> We can not replace unit tests with integration tests and vice versa. Unit
> tests are for testing smallest testable units(usually per method) and
> integration tests are to cover high-level functionalities of the system
> when different modules integrated. Main idea of unit testing is that
> identify bugs/regressions in initial stages before propagating them to
> integration level.
>
> AFAIU question here is choosing unit test approach(mock databases) or
> integration test approach(use an existing database) for testing MB DAO
> layer?
>
> IMO since we have planned to write separate integration test suite, it's
> better to go with option-2 (mock db approach) for DAO layer testing if
> possible.
>
> Thanks,
>
> On Fri, May 19, 2017 at 2:33 PM, Rajith Roshan <raji...@wso2.com> wrote:
>
>> Hi,
>>
>>
>> On Fri, May 19, 2017 at 12:56 PM, Dharshana Warusavitharana <
>> dharsha...@wso2.com> wrote:
>>
>>> Hi Fazlan,
>>>
>>> By using docker you are writing an Integration test, not a unit test. If
>>> you are using integration approach with docker its waste to test with
>>> in-memory databases. Just use read database like mysql.
>>>
>>> But still, if you need to validate DAO in unit layer (at component build
>>> time) you need to mock these database layers.
>>>
>> I think there is slight confusion about unit test and integration test.
>> In APIM we mock DAO interface which are our unit tests. When we use
>> docker images the DAO layer does get tested against the actual DB. This is
>> also done at the component building time. As C4 these are not run when the
>> product is running. These are still executed when components are building.
>>   Since these tests are actually running against a database, these are not
>> unit tests, they are actually integration tests used at the component
>> building time.
>>
>> Thanks!
>> Rajith
>>
>>>
>>>
>>> The importance of having unit test layer is you can validate any issue
>>> before it reaches the feature and bundle with the product. Once it bundles
>>> with the produce cost of fix is high. So the point of the unit test is
>>> eliminating that.
>>>
>>> So if you are talking about unit test, they must at least cover this
>>> aspect. Else forget this test layer and write end to end integration test
>>> using docker or what ever. But they are Integration tests and have the cost
>>> i mentioned above.
>>>
>>> Thank you,
>>> Dharshana.
>>>
>>> On Fri, May 19, 2017 at 11:53 AM, Malaka Gangananda <mala...@wso2.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Actually  JavaDB do have network drivers [1].
>>>>
>>>> [1] http://db.apache.org/derby/papers/DerbyTut/ns_intro.html
>>>>
>>>> Thanks,
>>>>
>>>> On Fri, May 19, 2017 at 11:35 AM, Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>
>>>>> FYI let me give some details regarding how we are testing the APIM DAO
>>>>> layer for C5.
>>>>>
>>>>> 1. The DAO layer is an interface that the rest of our code interacts
>>>>> with in order to store and retrieve data. We mock the DAO layer and can
>>>>> control its behaviour to unit test how the rest of our code behaves when
>>>>> interacting with it.
>>>>>
>>>>> 2. The implementation of the DAO interface will actually be
>>>>> communicating with the database. Since this is the case unit testing the
>>>>> DAO implementation does not give much of a benefit. So when it comes to
>>>>> testing the actual DAO implementation we are running automated integration
>>>>> tests with various DB docker images running(We test against H2, MySQL,
>>>>> Oracle, PostgreSQL, SQLServer)
>>>>>
>>>>>
>>>>> I believe trying to unit test the DAO implementation will only give
>>>>> you a false sense of security. You are better off doing actual integration
>>>>> tests for these
>>>>>
>>>>>
>>>

Re: [Architecture] [MB] Best Approach to write unit tests for DAO Layer ?

2017-05-19 Thread Uvindra Dias Jayasinha
FYI let me give some details regarding how we are testing the APIM DAO
layer for C5.

1. The DAO layer is an interface that the rest of our code interacts with
in order to store and retrieve data. We mock the DAO layer and can control
its behaviour to unit test how the rest of our code behaves when
interacting with it.

2. The implementation of the DAO interface will actually be communicating
with the database. Since this is the case unit testing the DAO
implementation does not give much of a benefit. So when it comes to testing
the actual DAO implementation we are running automated integration tests
with various DB docker images running (we test against H2, MySQL, Oracle,
PostgreSQL, SQLServer)


I believe trying to unit test the DAO implementation will only give you a
false sense of security. You are better off doing actual integration tests
for these.
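
To make the distinction concrete, below is a minimal sketch of the unit
testing side using Mockito (the framework we already use for mocking). The
DAO interface, service class, and method names are illustrative assumptions,
not the actual APIM types:

import static org.junit.Assert.assertNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ApiServiceTest {

    // Hypothetical DAO interface, standing in for the real DAO layer.
    interface ApiDAO {
        String getApiNameById(String uuid) throws Exception;
    }

    // Hypothetical service that talks to the DAO through its interface only.
    static class ApiService {
        private final ApiDAO dao;

        ApiService(ApiDAO dao) {
            this.dao = dao;
        }

        String describeApi(String uuid) throws Exception {
            String name = dao.getApiNameById(uuid);
            return name == null ? null : "API: " + name;
        }
    }

    @Test
    public void describeApiReturnsNullForUnknownId() throws Exception {
        // Mock the DAO interface and control its behaviour; no DB is involved.
        ApiDAO dao = mock(ApiDAO.class);
        when(dao.getApiNameById("unknown-uuid")).thenReturn(null);

        assertNull(new ApiService(dao).describeApi("unknown-uuid"));
    }
}

The DAO implementation itself is exercised only by the Docker-based
integration tests described in point 2.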


On 19 May 2017 at 10:53, Sanjiva Weerawarana  wrote:

> I didn't realize there was a version of Derby in the JDK! Yes we should
> support it as a real DB now and can we even use it in production?? That
> would be awesome as it'll reduce complexity for smaller deployments - just
> download and run.
>
> Earlier IIRC Derby didn't have networked drivers and therefore couldn't be
> set up for simple 2-node HA. If that has changed that's great.
>
> Sanjiva.
>
> On Fri, May 19, 2017 at 9:31 AM, Asanka Abeyweera 
> wrote:
>
>> Does this mean we are adding Derby to the list of supported RDBMS for MB
>> 4.0.0?
>>
>> On Fri, May 19, 2017 at 9:05 AM, Pumudu Ruhunage  wrote:
>>
>>> Can we consider JavaDB (Derby) [1], which is part of the JDK? Since it's
>>> shipped with the JDK, it'll be more suitable for unit tests than going
>>> for external databases/frameworks.
>>> Since we are not using any vendor-specific SQL in the DAO, it
>>> should support all the required SQL syntax without any issue.
>>>
>>> [1] http://www.oracle.com/technetwork/java/javadb/overview/j
>>> avadb-156712.html
>>>
>>> Thanks,
>>>
>>> On Fri, May 19, 2017 at 8:11 AM, Pamod Sylvester  wrote:
>>>
 (+) Adding @architecture

 On Thu, May 18, 2017 at 11:34 AM, Asanka Abeyweera 
 wrote:

> Are we planning to use stored procedures? If yes, it's better to use a
> framework that is flexible enough.
>
> On Thu, May 18, 2017 at 10:59 AM, Ramith Jayasinghe 
> wrote:
>
>> if you want to mess with the database/data, this is the lib for that
>> (regardless of the test type).
>>
>> On Thu, May 18, 2017 at 10:48 AM, Manuri Amaya Perera <
>> manu...@wso2.com> wrote:
>>
>>> @Hasitha Actually that was for integration tests. I guess Ramith's
>>> suggestion would be better for unit tests. When writing integration 
>>> tests
>>> we could look into the possibility of having containerized databases.
>>>
>>> Thanks,
>>> Manuri
>>>
>>> On Thu, May 18, 2017 at 10:42 AM, Ramith Jayasinghe >> > wrote:
>>>
 I propose using http://dbunit.sourceforge.net.
 It has an easy API, and it allows you to insert data into the database
 before the test and then clean up afterwards, etc.


 On Thu, May 18, 2017 at 10:40 AM, Fazlan Nazeem 
 wrote:

>
>
> On Thu, May 18, 2017 at 10:39 AM, Hasitha Hiranya <
> hasit...@wso2.com> wrote:
>
>> Hi Manuri,
>>
>> Was this approach taken for unit tests or integration tests?
>>
>> Thanks
>>
>
> This approach was taken for integration testing in APIM.
>
> For unit testing we are using Mockito framework for mocking out
> dependencies.
>
>>
>> On Thu, May 18, 2017 at 10:31 AM, Manuri Amaya Perera <
>> manu...@wso2.com> wrote:
>>
>>> Hi Pamod,
>>>
>>> API Manager team is using dynamically created containerized
>>> databases for some tests[1]. With this approach we can perform the 
>>> tests
>>> for several databases types. I think they have already implemented 
>>> this.
>>>
>>> Can we also do something like this?
>>>
>>> [1] [Build Team] Jenkins Build configuration on API Manager
>>>
>>> Thanks,
>>> Manuri
>>>
>>> On Thu, May 18, 2017 at 10:23 AM, Pamod Sylvester <
>>> pa...@wso2.com> wrote:
>>>
 Hi All,

 When unit testing DAO layers, what would be the best approach to
 use? Some of the approaches would be the following:

 1. Use an in-memory database? (H2, Derby or HSQLDB)

 *Pros*

  - Easy to configure
  - SQL query executions will be covered

 *Cons*

Re: [Architecture] [APIM] Life cycle Management

2017-04-05 Thread Uvindra Dias Jayasinha
On 13 October 2016 at 17:51, Rajith Roshan  wrote:

> Hi,
>
> The current implementation keeps the data in a separate database. For now
> the life cycle component has its own data source. I think it's better to read
> the data source name from a configuration so that the tables related to life
> cycle operations can be populated in an existing database as well. For example,
> if the config specifies the AM_DB, then these tables will be created in the
> API Manager database, without using a separate database.
>

Yes, we need to be able to save LC information in the AM_DB itself, but the
real requirement for doing this is to make sure that we can handle creating
APIs in a transactional way.

Having to deal with two databases (AM DB and Life Cycle DB) when creating an
API means we can't make things transactional. For APIM we should be able to
pass the same DB connection for the AM_DB to the LC component. Then if
an exception gets thrown we can roll back without having to deal with
half-baked data getting saved in the DB.
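
As a rough illustration of the idea, here is a plain JDBC sketch. ApiDAO,
LifecycleManager and their methods are hypothetical names; the point is only
that both components write through the same connection, so a single rollback
covers both:

import java.sql.Connection;
import javax.sql.DataSource;

public final class TransactionalApiCreator {

    // Hypothetical collaborators, for illustration only.
    interface Api { String getId(); }
    interface ApiDAO { void addApi(Connection connection, Api api) throws Exception; }
    interface LifecycleManager {
        void associateLifecycle(Connection connection, String apiId) throws Exception;
    }

    public void createApi(DataSource amDataSource, ApiDAO apiDAO,
                          LifecycleManager lcManager, Api api) throws Exception {
        try (Connection connection = amDataSource.getConnection()) {
            connection.setAutoCommit(false);
            try {
                apiDAO.addApi(connection, api);                        // AM_API tables
                lcManager.associateLifecycle(connection, api.getId()); // LC tables
                connection.commit();
            } catch (Exception e) {
                connection.rollback(); // no half-baked data left in the DB
                throw e;
            }
        }
    }
}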

Can we get this done?


>
> Thanks!
> Rajith
>
> On Mon, Oct 10, 2016 at 3:49 PM, Prasanna Dangalla 
> wrote:
>
>>
>> Hi Rajith,
>>
>> On Mon, Oct 10, 2016 at 3:30 PM, Rajith Roshan  wrote:
>>
>>> Hi,

>>> please find the images below
>>>
 On Mon, Oct 10, 2016 at 3:16 PM, Rajith Roshan 
 wrote:

> Hi all,
>
> Life cycle management is an integral part of any product. Each SOA governance
> related artifact can have its own life cycle. The capabilities provided in
> order to manage life cycles not only allow you to properly organize your
> assets but also provide many extension points (for example, through custom
> executors).
>
> *Requirement*: The current API life cycle of API Manager completely depends
> on the life cycle implementation provided by the registry. Since we are
> moving away from the registry concept, we need a completely new life cycle
> management framework which caters for life cycle management within API
> Manager and can be shipped with other products which require life
> cycle management.
>
> *Proposed Solution*: The basic idea is to completely decouple the life
> cycle framework from the system to which it provides life cycle
> capabilities. I.e. the system which uses the life cycle will not change the
> behavior of the life cycle framework; only the reverse applies. The
> framework should exist independent of the asset type to which it provides
> life cycle capability.
>
>
>
>
> The mapping should be maintained per asset type in order to associate assets
> with their life cycle data. To be more specific, at the database schema level
> each asset type should update its tables (add extra columns to maintain the
> mapping with life cycle data) in order to map the life cycle data.
>
> The external systems which use the life cycle service should connect with
> the service in order to obtain a unique life cycle id which is
> generated by the service. This id should be stored in the external system
> in order to maintain the mapping (each asset should have its own life cycle
> id). On the other hand, the life cycle framework will also store this id in
> order to provide all the life cycle related operations for a particular
> asset.
>
>
>
> Basically, we will be providing a class (ManagedLifecycle) with all
> the required basic operations. Any asset which requires life cycle
> management can extend this class.
> Further, this can be extended to support features like check list items
> as well.
> Features supported:
>
>- Custom Inputs - Users should be provided with the capability to
>save custom values per state, which can be passed to executors.
>- Executors - Custom classes executed during a life cycle state
>change operation.
>
>
> Please provide your valuable input.
>
> Thanks!
> Rajith
>
> --
> Rajith Roshan
> Software Engineer, WSO2 Inc.
> Mobile: +94-72-642-8350
>

>>>
>>>
>>> --
>>> Rajith Roshan
>>> Software Engineer, WSO2 Inc.
>>> Mobile: +94-72-642-8350
>>>
>>>
>>>
>> Since this proposed feature is running separately, are we keeping the db
>> tables in a separate database? If so, we can separate them from products
>> and bind them to the feature. IMHO it will be an advantage.
>>
>> Regards,
>>
>> *Prasanna Dangalla*
>> Senior Software Engineer, WSO2, Inc.; http://wso2.com/
>> lean.enterprise.middleware
>>
>>
>> *cell: +94 718 11 27 51*
>> *twitter: @prasa77*
>>
>>
>
>
> --
> Rajith Roshan
> Software Engineer, 

Re: [Architecture] Retrieving Rating value of API in the GET_API resource API

2017-03-06 Thread Uvindra Dias Jayasinha
+1 to Option 1; an API's ratings are not something that everyone will be
interested in. Those who want that data can make the additional API call to
retrieve it.

On 6 March 2017 at 13:40, Chamalee De Silva  wrote:

> Hi Dilan,
>
> I agree with your point from a performance point of view.
> But when thinking of implementing the retrieval of the rating value of an API
> in terms of re-usability in scenarios like API composition, which we are
> going to implement in the future, I think it is fair to have this as a
> separate REST API.
>
> So far, in the current implementation of API Manager 3.0.0, all the
> details of the API are taken from one single table.
> With option 2, we are going to do an inner join of two tables to get the
> rating value of the API.
>
> But in the future, just as for API ratings, we will need this
> implementation to be done for API comments as well.
> So in that case also we will need to change the getAPI query if we stick
> with option 2.
>
> So if we go with option 2, I feel that we are making that query more
> complex just to get that data.
> (There may come a point where performance degrades due to these
> expensive join operations as well, since we cannot predict whether we
> can limit it to one inner join operation or how much data there will be.)
>
> I am just explaining my opinion here. Please correct me if I am wrong.
>
> @all,
> And really appreciate your ideas and feedback on this.
>
>
>
> Thanks,
> Chamalee
>
>
>
> On Mon, Mar 6, 2017 at 12:28 PM, Dilan Udara Ariyaratne 
> wrote:
>
>> Hi Chamalee,
>>
>> I think that the 1st option would be the slowest in performance as it
>> requires two network calls to APIs + two separate DB Calls to get all
>> information related to an API.
>>
>> So, IMO, we should rather focus on other two options which only require
>> one network call, instead of the first.
>>
>> From that, the 2nd option only involves 1 network call + 1 database call,
>> whereas the
>> 3rd involves 1 network call + 2 database calls.
>>
>> So, IMO, what's left here to compare is if one join is better than
>> performing multiple queries to achieve the same.
>>
>> Since you are performing simply an inner join here and not any outer join
>> which can result in any redundant rows, I guess that 2nd option would be
>> much better.
>>
>> WDYT?
>>
>> Thanks.
>>
>> *Dilan U. Ariyaratne*
>> Senior Software Engineer
>> WSO2 Inc. 
>> Mobile: +94766405580
>> lean . enterprise . middleware
>>
>>
>> On Fri, Mar 3, 2017 at 10:37 AM, Chamalee De Silva 
>> wrote:
>>
>>> Thanks all for the responses.
>>>
>>> I will go ahead with option 1.
>>>
>>> So with that,
>>>
>>>
>>>- The rating value of the API *will not be returned* in the response
>>>of the getAPI resource API.
>>>- Separate resource APIs will be implemented to do the following:
>>>  - add a rating value for an API
>>>  - get the overall rating of an API
>>>  - get the rating given by a particular subscriber for an API
>>>
>>>
>>> Thanks,
>>> Chamalee.
>>>
>>> On Thu, Mar 2, 2017 at 10:01 AM, Vidura Nanayakkara 
>>> wrote:
>>>
 Hi Chamalee,

 The response in [1] suggests option 1 in general. It explains a few
 good points that are worth looking at.

 Furthermore, in addition to Chamin's response, the getAPI call looks to me
 like a common HTTP call in the application. Therefore we do not need to
 include additional information in the response. (You only need to proceed
 with option 2 if you always need to get the API rating along with the API;
 a single request is much more scalable than multiple requests in this case.)

 So IMO we should proceed with option 1

 [1] Many asynchronous calls vs single call to the API
 


 On Wed, Mar 1, 2017 at 6:09 PM, Malintha Amarasinghe <
 malint...@wso2.com> wrote:

>
>
> On Wed, Mar 1, 2017 at 1:45 PM, Chamalee De Silva 
> wrote:
>
>> Hi all,
>>
>> I am working on adding REST APIs for the following operations.
>>
>> 1. Rate an API (add rating)
>> 2. Get rating of an API (given the UUID of the API)
>>
>>
>> A separate table, AM_API_RATINGS, is going to be added to store API ratings
>> like what we had in APIM 2.1.0, where it has a 1:m mapping with the AM_API
>> table. Its columns are:
>>
>> AM_API_RATINGS: ID | API_ID | SUBSCRIBER_ID | RATING
>>
>> In current implementation or in the previous version of the API there
>> is no implementation 

Re: [Architecture] [APIM] [C5] Rest API Support for Importing and Exporting APIs between Multiple Environments

2017-01-26 Thread Uvindra Dias Jayasinha
Hi Jochen,

Just to clarify we have two fields in the API table of the DB that we used
to represent the API Owner and API creator as follows,

PROVIDER - This is the actual owner of the API. This should not change
during an import, since the ownership of a given API remains constant
across all environments it may be deployed on.

CREATED_BY - The person who actually created the API. Under normal
circumstances this would be the same as the provider, but in the case of an
import done by an operations user on behalf of someone when setting up an
environment, that user's name will be listed as CREATED_BY.


I believe this should cover differentiating between the actual owner of the
API and the person who did the creation in a given environment.

On 23 January 2017 at 08:47, Isuru Haththotuwa <isu...@wso2.com> wrote:

> HI Shani,
>
> Apologies for the late response.
>
> On Thu, Jan 19, 2017 at 11:40 AM, Shani Ranasinghe <sh...@wso2.com> wrote:
>
>> Hi Isuru,
>>
>> When we import an API, in the new environment, who will be listed as the
>> API creator? the person who imports it? that is if the user store is not
>> the same ?
>>
> In import export we have not yet considered data changes; meaning whatever
> is exported will be imported as it is. So if the provider needs to be
> changed, +1 to set it to the importer.
>
>>
>>
>> On Wed, Jan 18, 2017 at 3:35 PM, Malintha Amarasinghe <malint...@wso2.com
>> > wrote:
>>
>>> Hi,
>>>
>>> Yes +1 from me too. Additionally, if we can support exporting a specific
>>> set of APIs at once by giving a list of UUIDs or a provider-name-version
>>> based list that would also be useful and it can save multiple HTTP requests
>>> when we need to download a very specific set of APIs. That requirement was
>>> considered when writing the CLI based import export tool.
>>>
>>> Thanks!
>>>
>>> On Wed, Jan 18, 2017 at 1:09 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> +1 Isuru, that would be extremely helpful I think. The search
>>>> functionality should respect user boundaries to ensure that APIs that
>>>> should not be visible to a given user don't get exported.
>>>>
>>>> On 17 January 2017 at 21:51, Isuru Haththotuwa <isu...@wso2.com> wrote:
>>>>
>>>>> Hi Rushmin/Uvindra,
>>>>>
>>>>> Thanks for the clarifications and inputs.
>>>>>
>>>>> Yes I agree that there is a valid use case for bulk import of APIs.
>>>>> This was suggested in a previous reply as well, by Jochen. I suggest we 
>>>>> use
>>>>> the standard API search functionality to cater this, where we can use all
>>>>> the supported search criteria, provider, api name, wildcards, etc. WDYT?
>>>>>
>>>>> On Tue, Jan 17, 2017 at 6:12 PM, Uvindra Dias Jayasinha <
>>>>> uvin...@wso2.com> wrote:
>>>>>
>>>>>> I think there might be a valid case for allowing a provider to
>>>>>> export/import all the APIs that belong to that provider only in bulk. It's
>>>>>> much more user-friendly, as Rushmin has mentioned.
>>>>>>
>>>>>> On 17 January 2017 at 17:15, Rushmin Fernando <rush...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Isuru,
>>>>>>>
>>>>>>> Please see my comments inline.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jan 11, 2017 at 6:03 PM, Isuru Haththotuwa <isu...@wso2.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Jan 11, 2017 at 2:56 PM, Rushmin Fernando <rush...@wso2.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> IMO although we need bulk (all APIs) exporting and per API
>>>>>>>>> exporting, only one ReST resource is enough for importing.
>>>>>>>>>
>>>>>>>>> The resource would read the received package and import the API(s)
>>>>>>>>> inside the package. So having a ReST resource for importing a single 
>>>>>>>>> API is
>>>>>>>>> redundant.
>>>>>>>>>
>>>>>>>> Agreed. Should be able to identify whether there is only a single
>

Re: [Architecture] [C5][APIM] Full Text Search

2017-01-26 Thread Uvindra Dias Jayasinha
+1, permission check for searches is a must

On 26 January 2017 at 13:50, Roshan Wijesena  wrote:

> Hi,
>
> We may need to add permission check query also into this search, which
> means In C5 we have a permission model for APIs,Applications,Docs, etc..
> IMO, when someone is doing a query that query should go through permission
> tables as well.
>
> Mail thread [[APIM][C5] API Manager entities(APIs/Applications/Docs
> etc..) permission model and group sharing.]
>
>
> On Tue, Jan 24, 2017 at 4:54 PM, Rajith Roshan  wrote:
>
>> Hi all,
>>
>> We have integrated full text search to the api manager C5 server. We
>> implemented full text search and attribute search for Oracle, MsSQL, MySQL,
>> Postgres, and H2.
>>
>> *Full text search *: Search will be done for apis regardless of the
>> attribute of the api. Full text indexes will be created for AM_API table.
>> The indexes will include the columns which will be used in the full text
>> search. The indexing and querying process differ from database to database.
>> Sample curl full text query would be as follows
>>
>> "curl  -H "Authorization: Basic YWRtaW46YWRtaW4="
>> http://127.0.0.1:9090/api/am/store/v0.10/apis?*query="test*
>> fset=1=10""
>>
>> *Attribute search* : This will be used to search APIS based on their
>> attributes (metadata). This is implemented as usual SQL "Like" queries.
>> Sample curl command would be as follows.
>>
>> "curl  -H "Authorization: Basic YWRtaW46YWRtaW4="
>> http://127.0.0.1:9090/api/am/store/v0.10/apis?
>> *query="name:test,version:8.0.0*=1=10""
>>
>> We have carried out latency tests for MsSQL [1], Oracle [2], Postgres [3],
>> and MySQL [4] databases using different numbers of APIs and with different
>> concurrency levels.
>> Even for 10,000 APIs all the databases show manageable latency (10,000
>> APIs would be a very rare case for a single tenant).
>>
>> Please share your thoughts on the latency for full text search and
>> attribute search
>>
>> [1] - https://docs.google.com/a/wso2.com/spreadsheets/d/1fhwz5T-
>> cIZzhpgs2Np6eQIeBN2yyB55EgWLXGtXjABg/edit?usp=sharing
>> [2] - https://docs.google.com/a/wso2.com/spreadsheets/d/18qW6OeH
>> 9d7VFq1d6GaCRFQV09I9rbmmJq0EVrTqwb-M/edit?usp=sharing
>> [3] - https://docs.google.com/a/wso2.com/spreadsheets/d/11okKYYe
>> Az8OY7_2VAYnlqbwh79sog5bkplnZEsnPeQo/edit?usp=sharing
>> [4] - https://docs.google.com/a/wso2.com/spreadsheets/d/1r0b9YlE
>> GZ5VTFbPatTW14WBLBoDB7l6ecmK7arqS4KA/edit?usp=sharing
>>
>>
>> Thanks!
>> Rajith
>>
>> On Tue, Jan 24, 2017 at 1:10 PM, Malith Jayasinghe 
>> wrote:
>>
>>> Ok. You could also consider writing a small micro bench-mark and get the
>>> performance numbers (instead of testing this using an external load
>>> generator). This will minimize the impact of other components/layers on the
>>> results.
>>>
>>> On Tue, Jan 24, 2017 at 9:56 AM, Rajith Roshan  wrote:
>>>
 Hi Malith,

 Thanks for your input. I have changed the jmeter scripts according to
 you instructions.
 I was using an Oracle Docker instance on my local machine. I have
 changed it to a remote Oracle database. Now the latency is much less. I
 will share all the performance numbers once I have finished collecting them
 for Oracle, MSSQL, MySQL and Postgres databases.

 Thanks!
 Rajith

 On Tue, Jan 24, 2017 at 9:46 AM, Malith Jayasinghe 
 wrote:

> @Rajith
>
> As per the offline discussion we had regarding the performance
> evaluation (using JMETER) of the two methods:
> - use 2 separate thread groups for evaluating the performance of 2 DB
> based search methods
> - run each thread group sequentially
> - run the test for a longer period under different concurrency level
> and record the latency and TPS
>
> With regard to the long latencies you are noticing, we need to figure
> out if this is related to the database query/queries or something else. To
> do a quick test: simply log the execution time of the query/queries. If 
> the
> execution times are high (or have spikes) then we can try to optimize the
> DB query. Otherwise we have to do some further investigations.
>
> Thanks
>
> Malith
>
>
>
>
>
>
> On Sat, Jan 14, 2017 at 10:43 PM, Nuwan Dias  wrote:
>
>> File system based indexers bring new challenges in the container
>> world. It also poses challenges in HA (multinode) and multi regional
>> deployments. Therefore if we're selecting a solr indexing approach, the
>> only practical solution is to go with a solr server based architecture.
>>
>> Even with a solr server assisted architecture, we still have
>> complexities when dealing with multi regional deployments. Also since API
>> artifacts reside in the database, if we're loading search results from an
>> index, we have to sync the 

Re: [Architecture] [APIM] [C5] Rest API Support for Importing and Exporting APIs between Multiple Environments

2017-01-17 Thread Uvindra Dias Jayasinha
+1 Isuru, that would be extremely helpful I think. The search functionality
should respect user boundaries to ensure that APIs that should not be
visible to a given user don't get exported.

On 17 January 2017 at 21:51, Isuru Haththotuwa <isu...@wso2.com> wrote:

> Hi Rushmin/Uvindra,
>
> Thanks for the clarifications and inputs.
>
> Yes I agree that there is a valid use case for bulk import of APIs. This
> was suggested in a previous reply as well, by Jochen. I suggest we use the
> standard API search functionality to cater this, where we can use all the
> supported search criteria, provider, api name, wildcards, etc. WDYT?
>
> On Tue, Jan 17, 2017 at 6:12 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> I think there might be a valid case for allowing a provider to
>> export/import all the APIs that belong to that provider only in bulk. It's
>> much more user-friendly, as Rushmin has mentioned.
>>
>> On 17 January 2017 at 17:15, Rushmin Fernando <rush...@wso2.com> wrote:
>>
>>> Hi Isuru,
>>>
>>> Please see my comments inline.
>>>
>>>
>>>
>>> On Wed, Jan 11, 2017 at 6:03 PM, Isuru Haththotuwa <isu...@wso2.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Jan 11, 2017 at 2:56 PM, Rushmin Fernando <rush...@wso2.com>
>>>> wrote:
>>>>
>>>>> IMO although we need bulk (all APIs) exporting and per API exporting,
>>>>> only one ReST resource is enough for importing.
>>>>>
>>>>> The resource would read the received package and import the API(s)
>>>>> inside the package. So having a ReST resource for importing a single API 
>>>>> is
>>>>> redundant.
>>>>>
>>>> Agreed. Should be able to identify whether there is only a single api /
>>>> many apis from the multipart data and do the API data insertion with a
>>>> single resource.
>>>>
>>>>>
>>>>> And is there a real need for restricting the bulk import for admin
>>>>> users ?
>>>>>
>>>> An API can be published by different users. In a scenario where the api
>>>> portal is exposed as a service (where users can signup and publish APIs),
>>>> IMHO we should restrict the ability for a single user to export/import all
>>>> APIs, which can impact all users.
>>>>
>>>>>
>>>>>
>>> Actually I was only commenting on API importing. Say I'm an API
>>> publisher and I have published 10 APIs. I should be able to export all 10
>>> APIs and import them to another env.
>>>
>>> Seems like we need a generic bulk API import/exports as well as *ALL*
>>> API import/export which is a special case and a more restricted action.
>>>
>>>
>>>
>>>> On Tue, Jan 10, 2017 at 11:22 AM, Isuru Haththotuwa <isu...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Devs,
>>>>>>
>>>>>> This is to discuss subject.
>>>>>>
>>>>>> *Requirement:*
>>>>>>
>>>>>> Once an API is exported, it's possible to directly import it into
>>>>>> another APIM deployment in a separate environment. For an admin user, it
>>>>>> should be possible to export all APIs in one deployment to another one.
>>>>>>
>>>>>> The following information will be available in exported data, related
>>>>>> to a single API:
>>>>>>
>>>>>>- Docs
>>>>>>- API definition (JSON formatted)
>>>>>>- Swagger file (JSON formatted)
>>>>>>- Gateway configuration
>>>>>>- API thumbnails (image)
>>>>>>
>>>>>> Several new resources will be added to the publisher rest API to
>>>>>> cater this, as follows:
>>>>>>
>>>>>> *GET **/apis/{apiId}**/export-config*
>>>>>>
>>>>>>- Produces a form/multipart output as a zip archive, which will
>>>>>>have the following structure and will comprise the above mentioned
>>>>>>items:
>>>>>>
>>>>>>   <api-name>-<api-version>.zip
>>>>>>   |--- Docs
>>>>>>   |     |--- <document>
>>>>>>   |           |--- documentation metadata (json)
>>>>>>   |           |--- documentation content (optional)
>>>>>>   |--- Gateway-Config
>>>>>>   |     |--- gateway config file
>>>>>>   |--- thumbnail file
>>>>>>   |--- api definition (json)
>>>>>>   |--- swagger definition (json)

Re: [Architecture] [APIM] [C5] Rest API Support for Importing and Exporting APIs between Multiple Environments

2017-01-17 Thread Uvindra Dias Jayasinha
I think there might be a valid case for allowing a provider to
export/import all the APIs that belong to that provider only in bulk. It's
much more user-friendly, as Rushmin has mentioned.

On 17 January 2017 at 17:15, Rushmin Fernando  wrote:

> Hi Isuru,
>
> Please see my comments inline.
>
>
>
> On Wed, Jan 11, 2017 at 6:03 PM, Isuru Haththotuwa 
> wrote:
>
>>
>>
>> On Wed, Jan 11, 2017 at 2:56 PM, Rushmin Fernando 
>> wrote:
>>
>>> IMO although we need bulk (all APIs) exporting and per API exporting,
>>> only one ReST resource is enough for importing.
>>>
>>> The resource would read the received package and import the API(s)
>>> inside the package. So having a ReST resource for importing a single API is
>>> redundant.
>>>
>> Agreed. Should be able to identify whether there is only a single api /
>> many apis from the multipart data and do the API data insertion with a
>> single resource.
>>
>>>
>>> And is there a real need for restricting the bulk import for admin users
>>> ?
>>>
>> An API can be published by different users. In a scenario where the api
>> portal is exposed as a service (where users can signup and publish APIs),
>> IMHO we should restrict the ability for a single user to export/import all
>> APIs, which can impact all users.
>>
>>>
>>>
> Actually I was only commenting on API importing. Say I'm an API publisher
> and I have published 10 APIs. I should be able to export all 10 APIs and
> import them to another env.
>
> Seems like we need a generic bulk API import/exports as well as *ALL* API
> import/export which is a special case and a more restricted action.
>
>
>
>> On Tue, Jan 10, 2017 at 11:22 AM, Isuru Haththotuwa 
>>> wrote:
>>>
 Hi Devs,

 This is to discuss subject.

 *Requirement:*

 Once an API is exported, it's possible to directly import it into
 another APIM deployment in a separate environment. For an admin user, it
 should be possible to export all APIs in one deployment to another one.

 The following information will be available in exported data, related
 to a single API:

- Docs
- API definition (JSON formatted)
- Swagger file (JSON formatted)
- Gateway configuration
- API thumbnails (image)

 Several new resources will be added to the publisher rest API to cater
 this, as follows:

 *GET **/apis/{apiId}**/export-config*

- Produces a form/multipart output as a zip archive, which will
have the following structure and will comprise the above mentioned
items:

   <api-name>-<api-version>.zip
   |--- Docs
   |     |--- <document>
   |           |--- documentation metadata (json)
   |           |--- documentation content (optional)
   |--- Gateway-Config
   |     |--- gateway config file
   |--- thumbnail file
   |--- api definition (json)
   |--- swagger definition (json)


 Note that there can be multiple docs for a single API.

 *GET **/apis/export-config*

- Produces a zip archive comprising the above structure for each
API in the system. This operation will be permitted for admin users 
 only.


 *POST **/apis**/{apiId}**/import-config*

- Consumes the same zip archive produced by the
/{apiId}/export-config resource as a form/multipart input, extracts and
inserts the relevant data.


 *POST *
 */apis/import-config*

- Consumes the same zip archive produced by the /export-config
resource as a form/multipart input, extracts and inserts the relevant 
 data
for all APIs. Should be permitted only for admin users.


 This does not consider the endpoint information [1] yet. Would need to
 incorporate that here in a suitable way.

 Please share your feedback.

 [1]. [Architecture] [APIM][C5] - Definining Endpoint for Resource from
 Rest API

 --
 Thanks and Regards,

 Isuru H.
 +94 716 358 048





>>>
>>>
>>> --
>>> *Best Regards*
>>>
>>> *Rushmin Fernando*
>>> 

Re: [Architecture] [Dev] [C5] MSF4J Interceptors need to be configurable.

2016-12-08 Thread Uvindra Dias Jayasinha
Hi Ishara,

Another possibility for supporting multiple auth types with what you have
proposed is to have a collection of Authenticator implementations (using a
Map, possibly) at the RestAPISecurityInterceptor level. Depending on some
condition, you could selectively choose which implementation to use at
runtime.
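
A minimal sketch of that idea is below. The Authenticator contract and the
interceptor shape are assumptions for illustration, not the actual MSF4J/APIM
types; the condition used here is the scheme of the Authorization header:

import java.util.HashMap;
import java.util.Map;

public class RestAPISecurityInterceptor {

    // Hypothetical pluggable authenticator contract.
    interface Authenticator {
        boolean authenticate(String credentials);
    }

    private final Map<String, Authenticator> authenticators = new HashMap<>();

    public RestAPISecurityInterceptor() {
        // Register the supported schemes; more could be plugged in via config.
        authenticators.put("Basic", credentials -> false /* decode and validate user:pass */);
        authenticators.put("Bearer", credentials -> false /* validate the access token */);
    }

    // Mirrors the preCall contract: returning false stops the invocation chain.
    public boolean preCall(String authorizationHeader) {
        if (authorizationHeader == null || !authorizationHeader.contains(" ")) {
            return false;
        }
        String[] parts = authorizationHeader.split(" ", 2);
        Authenticator authenticator = authenticators.get(parts[0]);
        return authenticator != null && authenticator.authenticate(parts[1]);
    }
}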

On 9 December 2016 at 07:32, Ishara Cooray  wrote:

> Please find my comments in line.
>
> Yes, for the moment let's use this approach. Let's have 2 interceptors for
> authentication and authorization. From that, let's provide a way to add
> pluggable authenticators and authorizers.
> I guess you mean having two interfaces for authentication and
> authorization. What if we have two methods in one interface? Otherwise we
> will have to maintain two configurations.
>
> Also, we may be able to route requests through multiple authenticators
> according to a predefined order (when we need to support multiple auth types
> at once).
> +1
>
> Also, it's better if both Identity and APIM use the same approach, as we are
> all doing the same thing.
> The Identity team is writing JAAS Login Modules.
> @Thanuja,
> Do you have any input here?
>
> Thanks & Regards,
> Ishara Cooray
> Senior Software Engineer
> Mobile : +9477 262 9512
> WSO2, Inc. | http://wso2.com/
> Lean . Enterprise . Middleware
>
> On Thu, Dec 8, 2016 at 9:06 PM, Sanjeewa Malalgoda 
> wrote:
>
>> Yes, for the moment let's use this approach. Let's have 2 interceptors for
>> authentication and authorization. From that, let's provide a way to add
>> pluggable authenticators and authorizers.
>> Also, we may be able to route requests through multiple authenticators
>> according to a predefined order (when we need to support multiple auth types
>> at once).
>> Also, it's better if both Identity and APIM use the same approach, as we
>> are all doing the same thing.
>>
>>
>> Thanks,
>> sanjeewa.
>>
>> On Thu, Dec 8, 2016 at 6:59 PM, Ishara Cooray  wrote:
>>
>>> To overcome the above limitation where we cannot plug in custom
>>> authentication, I came up with the below approach.
>>>
>>> Have one interceptor and delegate authentication to an interface. The
>>> implementation of the interface is configurable, so that we can plug in
>>> custom authentication as well.
>>>
>>> [image: Inline image 1]
>>>
>>> One limitation here is we can have only one auth type active at a time.
>>>
>>> Hi Sanjeewa,
>>>
>>> Shall we continue with this approach until we get a proper fix from
>>> msf4j?
>>>
>>>
>>> Thanks & Regards,
>>> Ishara Cooray
>>> Senior Software Engineer
>>> Mobile : +9477 262 9512
>>> WSO2, Inc. | http://wso2.com/
>>> Lean . Enterprise . Middleware
>>>
>>> On Thu, Dec 8, 2016 at 11:23 AM, Ishara Cooray  wrote:
>>>
 Hi Thilina,
>
> And also if there are multiple interceptors and one interceptor
> returns false from its preCall, then the invocation chain will not
> continue further.
>
> So does this imply that if preCall returns 'true' the invocation chain
> will continue further?
>

 Yes

 I was thinking of returning 'true' if a particular auth header type (Basic,
 Bearer) is not found in an interceptor, so that it will check the other
 available interceptors.
 But I guess this approach may also fail if the request header type is
 not provided, maybe by mistake,
 because all the interceptors will return true; would that be taken as a
 valid authorization?


 Thanks & Regards,
 Ishara Cooray
 Senior Software Engineer
 Mobile : +9477 262 9512
 WSO2, Inc. | http://wso2.com/
 Lean . Enterprise . Middleware

 On Wed, Dec 7, 2016 at 5:25 PM, Afkham Azeez  wrote:

>
>
> On Wed, Dec 7, 2016 at 5:17 PM, Ishara Cooray 
> wrote:
>
>> Hi Thilina,
>>
>> And also if there are multiple interceptors and one interceptor
>> returns false from its preCall, then the invocation chain will not
>> continue further.
>>
>> So does this imply that if preCall returns 'true' the invocation
>> chain will continue further?
>>
>
> Yes
>
>
>> If that is the case we can return true in our overridden preCall
>> method so that it goes to next Interceptor.
>>
>>
>> Thanks & Regards,
>> Ishara Cooray
>> Senior Software Engineer
>> Mobile : +9477 262 9512
>> WSO2, Inc. | http://wso2.com/
>> Lean . Enterprise . Middleware
>>
>> On Wed, Dec 7, 2016 at 2:33 PM, Afkham Azeez  wrote:
>>
>>> How about supporting JAX-RS filters?
>>>
>>> On Wed, Dec 7, 2016 at 12:52 PM, Thusitha Thilina Dayaratne <
>>> thusit...@wso2.com> wrote:
>>>
 Hi Ishara,

 As you have mentioned, with the current architecture we can't set
 the specific interceptor for a particular service but rather to 

Re: [Architecture] [Dev] APIM C5 Analytics - Designing Event streams

2016-12-05 Thread Uvindra Dias Jayasinha
Hi Rukshan,

There have been requests in the past to include all available CGI
variables [1] to give more flexibility when it comes to stats. We are
already capturing some of these above. I don't know for sure how useful or
relevant it might be to capture them all, though.

[1] http://www.cgi101.com/book/ch3/text.html

On 6 December 2016 at 11:33, Rukshan Premathunga  wrote:

> Hi All,
>
> As part of the new APIM C5 release we are start to design analytics
> Streams. Here is the list of streams we decided to publish from APIM
> components. Even though here we listed separate event streams we hope to
> merge all the event streams except *org.wso2.apimgt.statistics.workflow* to
> one event and publish. By merging these event stream we will be able to
> avoid duplicate attributes and increase the performance at the event
> receivers.
>
> So please go though these stream schema and share your ideas.
>
> *Stream name  :org.wso2.apimgt.statistics.request*
>
> *version  :2.0.0*
>
> api :STRING
>
> context  :STRING
>
> version  :STRING
>
> publisher:STRING
>
> subscription_policy :STRING
>
> resource_path:STRING
>
> uri_template :STRING
>
> consumer_key :STRING
>
> application_name:STRING
>
> application_id   :STRING
>
> application_owner   :STRING
>
> user_id  :STRING
>
> subscriber   :STRING
>
> request_count:INT
>
> request_time :LONG
>
> gateway_domain :STRING
>
> gateway_ip   :STRING
>
> is_throttled :BOOL
>
> throttled_reason :STRING
>
> throttled_policy :STRING
>
> client_ip:STRING
>
> user_agent   :STRING
>
> host_name:STRING
> method   :STRING
>
>
>
> *name : org.wso2.apimgt.statistics.response*
>
> *version  : 2.0.0*
>
> consumer_key : STRING
>
> context  : STRING
>
> api  : STRING
>
> resource_path: STRING
>
> uri_template : STRING
>
> method   : STRING
>
> version  : STRING
>
> response_count   : INT
>
> username : STRING
>
> event_time   : LONG
>
> host_name: STRING
>
> publisher: STRING
>
> application_name: STRING
>
> application_id   : STRING
>
> application_owner: STRING
>
> user_id : STRING
>
> subscriber  : STRING
>
> cache_hit   : BOOL
>
> response_size: LONG
>
> protocol : STRING
>
> response_code: INT
>
> destination  : STRING
>
> gateway_domain   : STRING
>
> gateway_ip   : STRING
>
> response_time: LONG
>
> service_time : LONG
>
> backend_time : LONG
>
> backend_latency  : LONG
>
> security_latency : LONG
>
> throttling_latency   : LONG
>
> request_mediation_latency: LONG
>
> response_mediation_latency: LONG
> other_latency: LONG
>
>
>
> *name   : org.wso2.apimgt.statistics.fault*
>
> *version: 2.0.0*
>
> consumer_key : STRING
>
> context : STRING
>
> api  : STRING
>
> resource_path: STRING
>
> method   : STRING
>
> version  : STRING
>
> error_code   : STRING
>
> error_message: STRING
>
> request_time : LONG
>
> application_owner   : STRING
>
> user_id  : STRING
>
> subscriber   : STRING
>
> host_name: STRING
>
> publisher: STRING
>
> application_name: STRING
>
> application_id   : STRING
>
> protocol : STRING
>
> gateway_domain : STRING
>
> gateway_ip   : STRING
>
>
>
> *name: org.wso2.apimgt.statistics.throttle*
>
> *version : 2.0.0*
>
> api  : STRING
>
> context  : STRING
>
> publisher: STRING
>
> throttled_time   : LONG
>
> application_name: STRING
>
> application_id   : STRING
>
> application_owner   : STRING
>
> user_id  : STRING
>
> subscriber   : STRING
>
> throttled_reason : STRING
>
> throttled_policy : STRING
>
> gateway_domain : STRING
>
> gateway_ip   : STRING
>
>
>
> *name: org.wso2.apimgt.statistics.workflow*
>
> *version : 2.0.0*
>
> workflow_reference : STRING
>
> workflow_status  : STRING
>
> workflow : STRING
>
> created_time : LONG
>
> updated_time : LONG
>
> node_domain  : STRING
> node_ip  : STRING
>
>
> Thanks and Regards
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>
>
>


-- 
Regards,
Uvindra

Mobile: 33962

Re: [Architecture] APIM C5 - Using UUID columns as Primary Keys in DB tables

2016-11-20 Thread Uvindra Dias Jayasinha
Yes Akalanka, that is one disadvantage that has been highlighted. But given
that the volume of data in a given DB instance reduces
drastically in C5, this may not be an issue. Only a perf test with real
data will reveal this for sure.

On 21 November 2016 at 12:27, Akalanka Pagoda Arachchi <darsha...@wso2.com>
wrote:

> Hi Uvindra,
>
> One aspect to keep in mind is that if you have a lot of foreign key
> references referring to this table, all those tables will have a UUID if you
> choose UUID as the primary key. This might affect the storage space and
> indexing of those tables as well.
>
> Thanks,
> Akalanaka.
>
> On Sun, Nov 20, 2016 at 6:20 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Just thought of pointing out that there is an option of optimizing UUIDs
>> stored in the DB to make them easier to sequence and reduce storage
>> size[1]. Though I doubt we will have such high volumes of data in a given
>> DB instance with the new C5 architecture so don't think we need to go down
>> this route.
>>
>> +1 for a perf test as Bhathiya suggested to make sure.
>>
>> [1] https://www.percona.com/blog/2014/12/19/store-uuid-optimized-way/
>>
>> On 19 November 2016 at 16:58, Lahiru Cooray <lahi...@wso2.com> wrote:
>>
>>> Hi Bathiya/Jo,
>>> Thanks for the valuable information.
>>> +1 for UUID as the PK
>>>
>>>
>>> On Sat, Nov 19, 2016 at 3:41 PM, Joseph Fonseka <jos...@wso2.com> wrote:
>>>
>>>>
>>>>
>>>> On Sat, Nov 19, 2016 at 3:38 PM, Joseph Fonseka <jos...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi All
>>>>>
>>>>> Another obvious advantage of using UUIDs would be that it will
>>>>> simplify import/export of APIs. I am +1 for using UUIDs as primary keys.
>>>>>
>>>>
>>>> In more correct terms, UUIDs will simplify managing APIs across
>>>> environments.
>>>>
>>>>
>>>>>
>>>>> Cheers
>>>>> Jo
>>>>>
>>>>> On Sat, Nov 19, 2016 at 12:42 PM, Bhathiya Jayasekara <
>>>>> bhath...@wso2.com> wrote:
>>>>>
>>>>>> Hi Lahiru,
>>>>>>
>>>>>> One of the reasons to expose UUIDs instead of auto increment IDs in
>>>>>> REST resources is that if we expose auto increment ID, we unwillingly
>>>>>> expose certain internal information like how many resources we have in 
>>>>>> our
>>>>>> system and the pattern how we store these resources. That information can
>>>>>> be used as vulnerabilities for security attacks. Due to the same reason,
>>>>>> it's kind of a standard to use UUIDs instead of auto increment IDs in 
>>>>>> REST
>>>>>> world. You can find some detailed explanations about that on the 
>>>>>> web[1][2].
>>>>>>
>>>>>> [1] http://stackoverflow.com/questions/12378220/api-design-a
>>>>>> nd-security-why-hide-internal-ids
>>>>>> [2] http://blogs.perl.org/users/ovid/2014/11/stop-putting-auto-i
>>>>>> ncrement-ids-in-urls.html
>>>>>>
>>>>>> Thanks,
>>>>>> Bhathiya
>>>>>>
>>>>>> On Sat, Nov 19, 2016 at 12:12 PM, Lahiru Cooray <lahi...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Uvindra,
>>>>>>> The reason you mentioned is:
>>>>>>> "Having a UUID for this purpose means that end users cannot guess
>>>>>>> the possible unique identity of other entities, which is possible if we
>>>>>>> exposed an integer based identifier."
>>>>>>>
>>>>>>> What is the purpose of having a non-guessable id here? Anyway, a
>>>>>>> user who has permission to invoke APIs can get the UUID list, or simply
>>>>>>> view it in the Store/Publisher UI. Any further implementation
>>>>>>> constraints (e.g. a user should be able to delete APIs only in his tenant
>>>>>>> domain, etc.) should be handled internally.
>>>>>>>
>>>>>>> What I'm trying to highlight is that we should only move to a hybrid
>>>>>>> approach if there's a valid use case. Else it's less complex if we can
>>>>>>> move on with just the auto increment id.

Re: [Architecture] APIM C5 - Using UUID columns as Primary Keys in DB tables

2016-11-20 Thread Uvindra Dias Jayasinha
Just thought of pointing out that there is an option of optimizing UUIDs
stored in the DB to make them easier to sequence and reduce storage
size [1]. Though I doubt we will have such high volumes of data in a given
DB instance with the new C5 architecture, so I don't think we need to go down
this route.

+1 for a perf test as Bhathiya suggested to make sure.

[1] https://www.percona.com/blog/2014/12/19/store-uuid-optimized-way/
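
For reference, the optimization described in [1] applies to time-based
(version 1) UUIDs: store them as BINARY(16) with the timestamp fields swapped
so that newly generated values sort roughly in insertion order. A minimal
Java sketch of that byte rearrangement (my own illustration of the post, not
existing APIM code):

import java.nio.ByteBuffer;
import java.util.UUID;

public final class OrderedUuid {

    public static byte[] toOrderedBytes(UUID uuid) {
        if (uuid.version() != 1) {
            throw new IllegalArgumentException("Only time-based (v1) UUIDs can be reordered");
        }
        long msb = uuid.getMostSignificantBits();
        ByteBuffer buffer = ByteBuffer.allocate(16);
        // Original layout: time_low (32 bits) | time_mid (16) | version + time_hi (16).
        // The reordered layout puts the most significant timestamp bits first.
        buffer.putShort((short) (msb & 0xFFFF));          // version + time_hi
        buffer.putShort((short) ((msb >>> 16) & 0xFFFF)); // time_mid
        buffer.putInt((int) (msb >>> 32));                // time_low
        buffer.putLong(uuid.getLeastSignificantBits());   // clock_seq + node, unchanged
        return buffer.array();
    }
}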

On 19 November 2016 at 16:58, Lahiru Cooray <lahi...@wso2.com> wrote:

> Hi Bathiya/Jo,
> Thanks for the valuable information.
> +1 for UUID as the PK
>
>
> On Sat, Nov 19, 2016 at 3:41 PM, Joseph Fonseka <jos...@wso2.com> wrote:
>
>>
>>
>> On Sat, Nov 19, 2016 at 3:38 PM, Joseph Fonseka <jos...@wso2.com> wrote:
>>
>>> Hi All
>>>
>>> Another obvious advantage of using UUIDs would be that it will
>>> simplify import/export of APIs. I am +1 for using UUIDs as primary keys.
>>>
>>
>> In more correct terms, UUIDs will simplify managing APIs across
>> environments.
>>
>>
>>>
>>> Cheers
>>> Jo
>>>
>>> On Sat, Nov 19, 2016 at 12:42 PM, Bhathiya Jayasekara <bhath...@wso2.com
>>> > wrote:
>>>
>>>> Hi Lahiru,
>>>>
>>>> One of the reasons to expose UUIDs instead of auto increment IDs in
>>>> REST resources is that if we expose auto increment ID, we unwillingly
>>>> expose certain internal information like how many resources we have in our
>>>> system and the pattern in which we store these resources. That information
>>>> can be exploited in security attacks. For the same reason,
>>>> it's kind of a standard to use UUIDs instead of auto increment IDs in REST
>>>> world. You can find some detailed explanations about that on the web[1][2].
>>>>
>>>> [1] http://stackoverflow.com/questions/12378220/api-design-a
>>>> nd-security-why-hide-internal-ids
>>>> [2] http://blogs.perl.org/users/ovid/2014/11/stop-putting-auto-i
>>>> ncrement-ids-in-urls.html
>>>>
>>>> Thanks,
>>>> Bhathiya
>>>>
>>>> On Sat, Nov 19, 2016 at 12:12 PM, Lahiru Cooray <lahi...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Uvindra,
>>>>> The reason you mentioned is:
>>>>> "Having a UUID for this purpose means that end users cannot guess the
>>>>> possible unique identity of other entities, which is possible if we 
>>>>> exposed
>>>>> an integer based identifier."
>>>>>
>>>>> What is the purpose of having a non-guessable id here? Anyway, a
>>>>> user who has permission to invoke APIs can get the UUID list, or simply
>>>>> view it in the Store/Publisher UI. Any further implementation
>>>>> constraints (e.g. a user should be able to delete APIs only in his tenant
>>>>> domain, etc.) should be handled internally.
>>>>>
>>>>> What I'm trying to highlight is that we should only move to a hybrid
>>>>> approach if there's a valid use case. Else it's less complex if we can
>>>>> move on with just the auto increment id.
>>>>>
>>>>> [Solution we discuss here can also be reused in AppM c5 upgrade]
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Nov 18, 2016 at 11:27 PM, Uvindra Dias Jayasinha <
>>>>> uvin...@wso2.com> wrote:
>>>>>
>>>>>> We expose the unique UUID of a given artifact to the end user via
>>>>>> REST APIs. You can see how this is used in the REST API docs[1]. We cant
>>>>>> use the auto increment ID for this purpose for the reasons I mentioned
>>>>>> earlier.
>>>>>>
>>>>>> [1] https://docs.wso2.com/display/AM200/apidocs/publisher/index.html
>>>>>>
>>>>>> On 18 November 2016 at 22:48, Lahiru Cooray <lahi...@wso2.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I was under the impression that the UUID was used as a result of
>>>>>>> having registry and UUID was used to map the registry resource. Pls 
>>>>>>> correct
>>>>>>> me if I'm wrong.
>>>>>>>
>>>>>>> When the registry is no longer present, I don't see a real use case

Re: [Architecture] APIM C5 - Using UUID columns as Primary Keys in DB tables

2016-11-18 Thread Uvindra Dias Jayasinha
We expose the unique UUID of a given artifact to the end user via REST
APIs. You can see how this is used in the REST API docs[1]. We cant use the
auto increment ID for this purpose for the reasons I mentioned earlier.

[1] https://docs.wso2.com/display/AM200/apidocs/publisher/index.html

On 18 November 2016 at 22:48, Lahiru Cooray <lahi...@wso2.com> wrote:

> Hi,
>
> I was under the impression that the UUID was used as a result of having the
> registry, and that the UUID was used to map the registry resource. Please
> correct me if I'm wrong.
>
> When the registry is no longer present, I don't see a real use case of
> going for a hybrid approach. Either we could use UUID as a PK or an auto
> increment ID.
>
> In this case +1 for an auto increment ID as the PK.
> Reasons: easy to debug manually/ easy to sort by id/ save space
>
> On Fri, Nov 18, 2016 at 9:34 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> We already have a UUID column for few tables such as AM_API and
>> AM_APPLICATION which is used to uniquely identify a record. The reason why
>> we have a UUID column is because it is the unique identifier that we expose
>> to the end user. Having a UUID for this purpose means that end users cannot
>> guess the possible unique identity of other entities, which is possible if
>> we exposed an integer based identifier.
>>
>> However at table level we were still maintaining an auto incrementing
>> primary key. So the UUID was used externally but the integer key was used
>> privately to maintain foreign key relationships within the schema.
>>
>> We first thought it might be a good idea to dispense of the auto
>> incrementing primary key and use the UUID as the primary key itself since
>> it seemed like we had two columns that served somewhat duplicate purposes.
>> But I have been doing some research regarding this and have found that the
>> industry is divided a bit regarding this point.
>>
>> These links advocate UUIDs as primary keys
>> https://blog.codinghorror.com/primary-keys-ids-versus-guids/
>> https://www.clever-cloud.com/blog/engineering/2015/05/20/why
>> -auto-increment-is-a-terrible-idea/
>>
>> These links recommend auto incrementing integers as primary keys
>> http://stackoverflow.com/questions/404040/how-do-you-like-
>> your-primary-keys/404057#404057
>> http://stackoverflow.com/questions/829284/guid-vs-int-identity
>>
>> We can still continue with our hybrid approach of having both an
>> auto-incrementing integer as the primary key and using the UUID for external
>> interactions, which some also seem to use to get the best of both
>> worlds.
>>
>>
>> So how should we proceed?
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>>
>>
>
>
> --
> *Lahiru Cooray*
> Software Engineer
> WSO2, Inc.;http://wso2.com/
> lean.enterprise.middleware
>
> Mobile: +94 715 654154
>
>
>


-- 
Regards,
Uvindra

Mobile: 33962


[Architecture] APIM C5 - Using UUID columns as Primary Keys in DB tables

2016-11-18 Thread Uvindra Dias Jayasinha
We already have a UUID column for few tables such as AM_API and
AM_APPLICATION which is used to uniquely identify a record. The reason why
we have a UUID column is because it is the unique identifier that we expose
to the end user. Having a UUID for this purpose means that end users cannot
guess the possible unique identity of other entities, which is possible if
we exposed an integer based identifier.

However at table level we were still maintaining an auto incrementing
primary key. So the UUID was used externally but the integer key was used
privately to maintain foreign key relationships within the schema.

We first thought it might be a good idea to dispense of the auto
incrementing primary key and use the UUID as the primary key itself since
it seemed like we had two columns that served somewhat duplicate purposes.
But I have been doing some research regarding this and have found that the
industry is divided a bit regarding this point.

These links advocate UUIDs as primary keys
https://blog.codinghorror.com/primary-keys-ids-versus-guids/
https://www.clever-cloud.com/blog/engineering/2015/05/20/why-auto-increment-is-a-terrible-idea/

These links recommend auto incrementing integers as primary keys
http://stackoverflow.com/questions/404040/how-do-you-like-your-primary-keys/404057#404057
http://stackoverflow.com/questions/829284/guid-vs-int-identity

We can still continue with our hybrid approach of having both an
auto-incrementing integer as the primary key and using the UUID for external
interactions, which some also seem to use to get the best of both
worlds.
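
If we keep the hybrid approach, a minimal sketch of how the two identifiers
would interact is below (table and column names are illustrative): the
application generates and returns the UUID, while the auto-increment PK stays
internal to the schema for foreign keys.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public final class ApiInserter {

    public String insertApi(Connection connection, String apiName) throws SQLException {
        // The DB assigns the auto-increment integer PK; the app assigns the UUID.
        String uuid = UUID.randomUUID().toString();
        String sql = "INSERT INTO AM_API (UUID, API_NAME) VALUES (?, ?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, uuid);
            statement.setString(2, apiName);
            statement.executeUpdate();
        }
        return uuid; // only the UUID is ever exposed via the REST API
    }
}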


So how should we proceed?

-- 
Regards,
Uvindra

Mobile: 33962


Re: [Architecture] APIM C5 - Schema for handling resources stored as blobs

2016-11-17 Thread Uvindra Dias Jayasinha
Hi Lahiru,

Yes, what you have suggested is a lot more normalised, but I still prefer to
refer to it as a DATA_TYPE as opposed to a FILE_TYPE. The reason is that
file types are always stored in BLOB format, whereas shorter text data is
stored as VARCHAR. So referring to free-form text as a file type is
misleading, because it isn't treated in the same way that files are
treated. So, to be more generic, it's better to consider these as data types.

So this would be the more normalised table structure,

CREATE TABLE `AM_RESOURCE_DATA_TYPES` (
  `DATA_TYPE_ID` INTEGER AUTO_INCREMENT,
  `DATA_TYPE_NAME` VARCHAR(255),
  PRIMARY KEY (`DATA_TYPE_ID`)
);

CREATE TABLE `AM_RESOURCE_TYPES` (
  `RESOURCE_TYPE_ID` INTEGER AUTO_INCREMENT,
  `DATA_TYPE_ID` INTEGER,
  `RESOURCE_TYPE_NAME` VARCHAR(255),
  PRIMARY KEY (`RESOURCE_TYPE_ID`),
  FOREIGN KEY (`DATA_TYPE_ID`) REFERENCES
`AM_RESOURCE_DATA_TYPES`(`DATA_TYPE_ID`)
);

CREATE TABLE `AM_API_RESOURCES` (
  `RESOURCE_ID` INTEGER AUTO_INCREMENT,
  `API_ID` VARCHAR(255),
  `RESOURCE_TYPE_ID` INTEGER,
  `RESOURCE_TEXT_VALUE` VARCHAR(1024),
  `RESOURCE_BINARY_VALUE` BLOB,
  PRIMARY KEY (`RESOURCE_ID`),
  FOREIGN KEY (`API_ID`) REFERENCES `AM_API`(`UUID`) ON UPDATE CASCADE ON
DELETE CASCADE,
  FOREIGN KEY (`RESOURCE_TYPE_ID`) REFERENCES `AM_RESOURCE_TYPES`(`RESOURCE_TYPE_ID`)
);



On 13 November 2016 at 12:35, Lahiru Cooray <lahi...@wso2.com> wrote:

> Hi,
> Well, if I use an example to explain what I said, refer to the data_type
> column and check the given sample values:
> xml, png, json are file extensions, while text is a magic word/implementation
> detail we use to refer to URLs and inline text (file type). AFAIC these are
> different kinds of data.
> Even if we treat this as a generic field, we could further normalize the DB
> structure by having this field in the resource type table.
>
> eg:
> CREATE TABLE `AM_FILE_TYPES` (
>   `FILE_TYPE_ID` INTEGER AUTO_INCREMENT,
>   `FILE_TYPE_NAME` VARCHAR(255),
>   PRIMARY KEY (`FILE_TYPE_ID`)
> );
> (file type: either blob/inline text or url)
>
> CREATE TABLE `AM_RESOURCE_TYPES` (
>   `RESOURCE_TYPE_ID` INTEGER AUTO_INCREMENT,
>   `RESOURCE_TYPE_NAME` VARCHAR(255),
>   `FILE_TYPE_ID` INTEGER,
>   PRIMARY KEY (`RESOURCE_TYPE_ID`),
>   FOREIGN KEY (`FILE_TYPE_ID`) REFERENCES `AM_FILE_TYPES`(`FILE_TYPE_ID`)
>     ON UPDATE CASCADE ON DELETE CASCADE
> );
>
> CREATE TABLE `AM_API_RESOURCES` (
>   `RESOURCE_ID` INTEGER AUTO_INCREMENT,
>   `API_ID` INTEGER,
>   `RESOURCE_TYPE_ID` INTEGER,
> --  `DATA_TYPE` VARCHAR(255), [remove this field]
>   `RESOURCE_TEXT_VALUE` VARCHAR(1024),
>   `RESOURCE_BINARY_VALUE` BLOB,
>   PRIMARY KEY (`RESOURCE_ID`),
>   FOREIGN KEY (`API_ID`) REFERENCES `AM_API`(`API_ID`) ON UPDATE CASCADE
> ON DELETE CASCADE,
>   FOREIGN KEY (`RESOURCE_TYPE_ID`) REFERENCES
> `AM_RESOURCE_TYPES`(`RESOURCE_TYPE_ID`) ON UPDATE CASCADE ON DELETE
> CASCADE
> );
>
>
> Further, if required, we could maintain an extension field in the
> AM_API_RESOURCES table.
>
> I'm not saying that the proposed structure is wrong. This is just a
> suggestion to further normalize, minimize data repetition, and make
> the structure clearer.
>
> On Wed, Nov 9, 2016 at 4:54 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>>
>>
>> On 9 November 2016 at 12:36, Lahiru Cooray <lahi...@wso2.com> wrote:
>>
>>>
>>>
>>> On Tue, Nov 8, 2016 at 1:23 PM, Uvindra Dias Jayasinha <uvin...@wso2.com
>>> > wrote:
>>>
>>>>
>>>>
>>>> On 8 November 2016 at 11:10, Lahiru Cooray <lahi...@wso2.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Nov 3, 2016 at 4:01 PM, Uvindra Dias Jayasinha <
>>>>> uvin...@wso2.com> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> Currently APIs have a few resources such as Swagger File, Optional
>>>>>> WSDL file, Related Documents file and an Optional Thumbnail image that
>>>>>> needs to be stored as blobs in the DB.
>>>>>>
>>>>>> Initially we thought of having separate tables to store these
>>>>>> resources, but what if we have a single generic resource table to store 
>>>>>> all
>>>>>> these?
>>>>>>
>>>>>> We could have schema such as below for the generic resource table
>>>>>>
>>>>>>
>>>>>> ​Since we previously stored our resources in the registry, a similar
>>>>>> generic schema was used to store all such resources by the registry 
>>>>>> itself.
>>>>>> So an

Re: [Architecture] REST API versioning - which version to mandate in the URI scheme?

2016-11-17 Thread Uvindra Dias Jayasinha
I think there is agreement that we can go with just the major version as
part of the URI, but I'm still wondering if we need to ask users to send
the minor version in a header.

Users can mistakenly send a wrong minor version, and if we blindly take
decisions based on that header it could lead to errors. So we anyway need
to do a sanity check to make sure the optional parameter introduced in a
given minor version has been populated by the user as expected. So now
users need to send an additional minor version, but we can't really trust
it because there is no guarantee that the parameters will be compatible.

So it seems like the minor version in the header is an unwanted overhead for
both the user and us.

I can understand us needing to support multiple major versions but not
multiple minor versions. So we will only have the latest minor version
implementation that is fully backward compatible to its respective major
version and preceding minor versions.
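
To make the trade-off concrete, here is a hedged JAX-RS style sketch (path,
resource and parameter names are illustrative): the URI pins only the major
version, and an optional parameter added in a minor version is validated
directly instead of trusting a client-supplied minor version header.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

// Only the major version appears in the URI; all v1.x releases stay backward
// compatible, so this single implementation serves every v1 minor version.
@Path("/api/am/publisher/v1/apis")
public class ApisResource {

    @GET
    public Response listApis(@QueryParam("expand") String expand) {
        // "expand" stands in for an optional parameter added in a minor
        // version. We sanity-check it directly rather than branching on a
        // client-declared minor version header, which cannot be trusted.
        if (expand != null && !expand.matches("true|false")) {
            return Response.status(Response.Status.BAD_REQUEST).build();
        }
        return Response.ok().build();
    }
}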


On 17 November 2016 at 20:21, Frank Leymann  wrote:

> We need to discuss this more carefully.
>
> The real question is: *do we want to make it easy for us, or do we want
> to make it easy for our customers? *I think that we should strive for
> APIs that serve our customers as simple as possible. And this means that
> clients must not break.
>
> Nearly all known REST APIs use URI-based versioning (see section
> "Versioning strategies in popular REST APIs" at
> http://www.lexicalscope.com/blog/2012/03/12/how-are-rest-apis-versioned/).
> Out of the other approaches, Azure is using a proprietary header (which is
> not RESTful because it significantly reduces "visibility") and GitHub
> recently switched to HTTP Accept headers.
>
> Thus, we should stick to our decision to support URI-based versioning.
> Having said that, the URI-based versioning camp is split between
> specifying both major and minor versions, and specifying only the major
> version. Facebook, Netflix,... are supporting major/minor.
>
> If it reduces our development effort significantly, and if we guarantee
> (!) that minor versions are backward compatible (which they are by the very
> definition), then using major versions only may be fine (Jo says the same).
> Remember that backward compatibility means that we are only allowed to add
> optional (!) new parameters to the API, that we do not return payload that
> the client doesn't understand/can't parse, that we do not return new status
> codes, etc. All of that requires a new major version. Are we ready to
> commit to this?
>
>
>
>
> Best regards,
> Frank
>
> 2016-11-17 11:25 GMT+01:00 Sanjeewa Malalgoda :
>
>> +1. I have some doubts about using minor versions within the
>> implementation. If we don't have multiple apps (JAX-RS apps or micro
>> services) then a single app should handle all minor version requests and
>> process them accordingly. Then we may need to have multiple methods in the
>> Java API (one for each minor version) and call them accordingly from the
>> REST API. Or we need to have a single method in the Java API and implement
>> the processing logic based on the minor version. WDYT?
>>
>> Thanks,
>> sanjeewa.
>>
>> On Thu, Nov 17, 2016 at 2:38 PM, Malintha Amarasinghe wrote:
>>
>>> Hi,
>>>
>>> +1 for the approach since it gives many benefits and simplifications.
>>>
>>> However, if we remove the version, shouldn't there be some way for the
>>> client to know the version of the API they are going to call?
>>> For example: in v1.0 there is a /sample resource that does some work. And
>>> in v1.1, there is a /sample-2 resource that does the same work but in a
>>> better way. So someone wants to write code that calls the correct
>>> resource as per the version of the API they are calling.
>>> As Nuwan/Jo mentioned, if the client sends the minor version as a header,
>>> shouldn't the server also send the minor version of the API to the client
>>> (maybe as a header or some other way) to support the above case?
>>>
>>> Thanks!
>>> Malintha
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Nov 17, 2016 at 2:24 PM, Joseph Fonseka  wrote:
>>>
 Hi Nuwan

 +1 for the approach. I think this will simplify how we support
 multiple versions on the server side. If we adopt it we may only have to ship
 an implementation for each major version.

 If we can ensure forward compatibility I guess there will be no issue
 with this approach; otherwise we will see clients breaking if they try
 to access a previous minor version of the same API.

 To solve the latter case we can mandate that clients send the minor
 version they intend to use with the request (maybe in a header) so
 that the server can validate and send an error if it is not supported.

 Furthermore, would it make sense to have the full version of the API in
 the header?

 Thanks & Regards
 Jo


 [1] http://wso2.com/whitepapers/wso2-rest-apis-design-guidelines/




Re: [Architecture] REST API versioning - which version to mandate in the URI scheme?

2016-11-17 Thread Uvindra Dias Jayasinha
Are we actually going to be supporting multiple minor version
implementations?

It seems like the rationale for doing this was so that we don't need to have
multiple implementations to support minor versions. Since a minor version
change is fully backward compatible with the major version, the end user
should not need to worry about specifying a minor version when calling the
API, right? Any parameters that are added via minor version changes will
always be optional and have default values when not set.

I can't think of a use case that might require us to look up a minor version
in a header and take a decision. This can be handled by the presence of the
optional parameters that were added in the minor version change.
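
As a concrete illustration (a hedged sketch only; the resource and
parameter names below are made up and are not the actual publisher API):
an optional query parameter added in a minor version carries a default
value, so older clients that never send it keep working unchanged.

import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

// Hypothetical JAX-RS resource: /apis existed in v1.0; the optional
// "limit" parameter is imagined as a v1.1 addition. Its default value
// means v1.0 clients are unaffected, and the server decides behaviour
// from the parameter itself rather than from a minor version header.
@Path("/apis")
public class ApiListResource {

    @GET
    @Produces("application/json")
    public String listApis(@DefaultValue("25") @QueryParam("limit") int limit) {
        return "{\"count\": " + limit + "}";
    }
}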


On 17 November 2016 at 14:38, Malintha Amarasinghe 
wrote:

> Hi,
>
> +1 for the approach since it gives many benefits and simplifications.
>
> However, if we remove the version, shouldn't there be some way for the
> client to know the version of the API they are going to call?
> For example: in v1.0 there is a /sample resource that does some work. And in
> v1.1, there is a /sample-2 resource that does the same work but in a better
> way. So someone wants to write code that calls the correct resource as
> per the version of the API they are calling.
> As Nuwan/Jo mentioned, if the client sends the minor version as a header,
> shouldn't the server also send the minor version of the API to the client
> (maybe as a header or some other way) to support the above case?
>
> Thanks!
> Malintha
>
>
>
>
>
>
>
> On Thu, Nov 17, 2016 at 2:24 PM, Joseph Fonseka  wrote:
>
>> Hi Nuwan
>>
>> +1 for the approach. I think this will simplify how we support
>> multiple versions on the server side. If we adopt it we may only have to ship
>> an implementation for each major version.
>>
>> If we can ensure forward compatibility I guess there will be no issue
>> with this approach; otherwise we will see clients breaking if they try
>> to access a previous minor version of the same API.
>>
>> To solve the latter case we can mandate that clients send the minor
>> version they intend to use with the request (maybe in a header) so
>> that the server can validate and send an error if it is not supported.
>>
>> Furthermore, would it make sense to have the full version of the API in
>> the header?
>>
>> Thanks & Regards
>> Jo
>>
>>
>> [1] http://wso2.com/whitepapers/wso2-rest-apis-design-guidelines/
>>
>>
>>
>>
>>
>> On Thu, Nov 17, 2016 at 1:50 PM, Nuwan Dias  wrote:
>>
>>> Hi,
>>>
>>> The API Manager REST API [1], [2] follows the semantic versioning
>>> strategy. It currently requires you to have the Major.Minor versions in the
>>> URI scheme (/api/am/publisher/v*0.10*). This however is problematic
>>> because practically, as we add features to the product we need to add new
>>> resources to the API (backwards compatible API changes) and hence have to
>>> change the .Minor version of it on every new release.
>>>
>>> This results in complications because we have to keep supporting at
>>> least a few .Minor versions backward on a given product version (support
>>> for v1.0, v1.1, v1.2), which means that we have to ship and maintain
>>> several versions of the JAX-RS application (or Microservice) at any given time.
>>>
>>> Shall we adopt a strategy where we only mandate the .Major version in
>>> the URI scheme (/api/am/publisher/v*1*/) and request for the .Minor
>>> version to be sent as a Header? This will ensure that we don't have to
>>> maintain several versions of the JAX-RS on a given server runtime and if we
>>> need the .Minor version for some functionality we look it up from the
>>> Header.
>>>
>>> [1] - https://docs.wso2.com/display/AM200/apidocs/publisher/
>>> [2] - https://docs.wso2.com/display/AM200/apidocs/store/
>>>
>>> Thanks,
>>> NuwanD.
>>>
>>> --
>>> Nuwan Dias
>>>
>>> Software Architect - WSO2, Inc. http://wso2.com
>>> email : nuw...@wso2.com
>>> Phone : +94 777 775 729
>>>
>>
>>
>>
>> --
>>
>> --
>> *Joseph Fonseka*
>> WSO2 Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> mobile: +94 772 512 430
>> skype: jpfonseka
>>
>> * *
>>
>>
>
>
> --
> Malintha Amarasinghe
> Software Engineer
> *WSO2, Inc. - lean | enterprise | middleware*
> http://wso2.com/
>
> Mobile : +94 712383306
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - Pagination

2016-11-16 Thread Uvindra Dias Jayasinha
Had a chat with Jo and NuwanD,

Jo elaborated more on his point of having an additional numeric column to
determine the latest version. How this will work is,

1. Before inserting a new API version we use the existing API Version
comparator class to sort all available versions of the API into a
collection.

2. Now the collection has the APIs sorted from least to greatest version
and we can use the index of each respective object in the collection to
populate the numeric column field in the DB table.

3. We now have a numeric column at DB level which indicates the ascending
order of the APIs by version.

We will need to update the corresponding numeric column for all versions of
a given API when a new version is inserted. When retrieving data we can now
simply get the API with the max value of the numeric column to find its
latest version.

So there is a performance gain when doing things this way as opposed to
figuring out the latest version every single time using the version
comparator class if we were to do this purely at Java level.
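
A minimal sketch of the idea (this is not the actual APIM comparator; the
class, table and column names are illustrative): versions are sorted
numerically segment by segment, and each one's index in the sorted list
becomes the value of the numeric rank column.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class VersionRankSketch {

    // Orders dotted version strings numerically, falling back to string
    // comparison for non-numeric segments (e.g. "1.0-beta").
    static final Comparator<String> VERSION_ORDER = (v1, v2) -> {
        String[] a = v1.split("\\.");
        String[] b = v2.split("\\.");
        int len = Math.max(a.length, b.length);
        for (int i = 0; i < len; i++) {
            String x = i < a.length ? a[i] : "0";
            String y = i < b.length ? b[i] : "0";
            int cmp;
            try {
                cmp = Integer.compare(Integer.parseInt(x), Integer.parseInt(y));
            } catch (NumberFormatException e) {
                cmp = x.compareTo(y);
            }
            if (cmp != 0) {
                return cmp;
            }
        }
        return 0;
    };

    public static void main(String[] args) {
        List<String> versions = new ArrayList<>(List.of("2.0", "1.10", "1.2", "1.9"));
        versions.sort(VERSION_ORDER);
        // Each sorted index would be persisted with an (illustrative) statement
        // like: UPDATE AM_API SET VERSION_RANK = ? WHERE API_NAME = ? AND VERSION = ?
        for (int rank = 0; rank < versions.size(); rank++) {
            System.out.println(versions.get(rank) + " -> rank " + rank);
        }
        // The latest version is then simply the row with MAX(VERSION_RANK).
    }
}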


There are Pros and Cons with both DB and Java level pagination. At the
moment we thought it is better to delay the decision of committing to a
specific implementation. We can decide once we integrate with the user core.

Thanks all for the feedback



On 17 November 2016 at 10:05, Uvindra Dias Jayasinha <uvin...@wso2.com>
wrote:

> We are not doing this to avoid vendor specific queries.
>
> The real reason why we want to go down this path: we have no
> straightforward way of paginating at DB level while also taking into
> account the latest version of a given API. To support this at DB level
> properly means we need to really complicate things, and even then we are
> not sure we can actually meet the need. Being able to avoid vendor-specific
> queries is just a nice side effect.
>
> Having thought about the concern Jo raised, I think we can simply avoid
> using a cache for searching. The cache will only be used for things like
> API listing. Search results will always be served from the DB and paginated
> at Java level.
>
> We can dispense with the cache altogether for now and add it back later if
> performance is not enough. The main thing is that we want to move the
> pagination logic to the Java level.
>
>
> On 16 November 2016 at 21:14, Lahiru Cooray <lahi...@wso2.com> wrote:
>
>> +1 for Jo's opinion on using RDBMS-based pagination.
>>
>> I think we are trying here to make the code unnecessarily complex by
>> introducing caches. As discussed this may become more complex when we do
>> sorting/searching operations.
>> Also regarding the performance, I don't see a significant performance
>> gain by introducing our own cache as RDBMS's normally maintain query caches.
>>
>> Only benefit I see here is avoiding the vendor specific queries.
>> How about addressing this concern properly by introducing a factory
>> method and giving the flexibility to plug a new vendor easily when needed.
>>
>> On Wed, Nov 16, 2016 at 3:23 PM, Uvindra Dias Jayasinha <uvin...@wso2.com
>> > wrote:
>>
>>> Hi Jo,
>>>
>>> On 16 November 2016 at 14:47, Joseph Fonseka <jos...@wso2.com> wrote:
>>>
>>>> Hi Uvindra
>>>>
>>>> On Wed, Nov 16, 2016 at 1:48 PM, Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>>
>>>>>
>>>>> Searching and sorting will be done at DB level via SQL. It is only the
>>>>> pagination of the returned result set that will be done at Java level.
>>>>>
>>>>
>>>> So does this mean that for different search requests you will be caching
>>>> separate sets of data? If so, will it exhaust the memory, and how
>>>> will API search perform under load? And since the servers will be running
>>>> on containers, available memory would be even less.
>>>>
>>>
>>> That's a good point; we definitely should not be caching different search
>>> request results. Will discuss this with the team and get back to you.
>>>
>>>>
>>>> Why would this be more efficient than Java? We achieve a similar thing
>>>>> by using a local cache, so there is no need to have an intermediate
>>>>> in-memory DB. What was proposed makes sense when you consider that we are
>>>>> dealing with only a very small amount of data at DB level now.
>>>>>
>>>>
>>>> Considering pagination is done using a DBMS, an in-memory DB will help us
>>>> work with only one data set, as well as needing to maintain only one set of
>>>> pagination queries.
>>>>
>>>

Re: [Architecture] APIM C5 - Pagination

2016-11-16 Thread Uvindra Dias Jayasinha
We are not doing this to avoid vendor specific queries.

The real reason why we want to go down this path: we have no
straightforward way of paginating at DB level while also taking into
account the latest version of a given API. To support this at DB level
properly means we need to really complicate things, and even then we are
not sure we can actually meet the need. Being able to avoid vendor-specific
queries is just a nice side effect.

Having thought about the concern Jo raised, I think we can simply avoid
using a cache for searching. The cache will only be used for things like
API listing. Search results will always be served from the DB and paginated
at Java level.

We can dispense with the cache altogether for now and add it back later if
performance is not enough. The main thing is that we want to move the
pagination logic to the Java level.


On 16 November 2016 at 21:14, Lahiru Cooray <lahi...@wso2.com> wrote:

> +1 for Jo's opinion on using RDBMS-based pagination.
>
> I think we are trying here to make the code unnecessarily complex by
> introducing caches. As discussed this may become more complex when we do
> sorting/searching operations.
> Also regarding the performance, I don't see a significant performance gain
> by introducing our own cache as RDBMS's normally maintain query caches.
>
> Only benefit I see here is avoiding the vendor specific queries.
> How about addressing this concern properly by introducing a factory method
> and giving the flexibility to plug a new vendor easily when needed.
>
> On Wed, Nov 16, 2016 at 3:23 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Hi Jo,
>>
>> On 16 November 2016 at 14:47, Joseph Fonseka <jos...@wso2.com> wrote:
>>
>>> Hi Uvindra
>>>
>>> On Wed, Nov 16, 2016 at 1:48 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>>
>>>>
>>>> Searching and sorting will be done at DB level via SQL. It is only the
>>>> pagination of the returned result set that will be done at Java level.
>>>>
>>>
>>> So does this mean that for different search requests you will be caching
>>> separate sets of data? If so, will it exhaust the memory, and how
>>> will API search perform under load? And since the servers will be running
>>> on containers, available memory would be even less.
>>>
>>
>> That's a good point; we definitely should not be caching different search
>> request results. Will discuss this with the team and get back to you.
>>
>>>
>>> Why would this be more efficient than Java? We achieve a similar thing
>>>> by using a local cache, so there is no need to have an intermediate
>>>> in-memory DB. What was proposed makes sense when you consider that we are
>>>> dealing with only a very small amount of data at DB level now.
>>>>
>>>
>>> Considering pagination is done using a DBMS, an in-memory DB will help us
>>> work with only one data set, as well as needing to maintain only one set of
>>> pagination queries.
>>>
>>
>> Hmm, I still don't understand the difference between an in-memory DB and
>> an in-memory cache. Technically, in both cases we are working with an
>> in-memory dataset.
>>
>>>
>>>
>>>> In the future if we foresee a requirement for DB level pagination we
>>>> can implement that within the DAO layer, but for now this is more than
>>>> enough to meet our needs. This way we can avoid introducing unwanted
>>>> complexity upfront since there is no rationale for doing so.
>>>>
>>>
>>> Agreed, but if the complexity is in supporting pagination for all the DBMSs,
>>> we can at least focus on adding DB-level pagination for commonly used DBMSs
>>> (i.e. MySQL, Oracle).
>>>
>>
>> It's not that supporting multiple DBs is too complex; we can do that, but
>> it's an additional effort we can avoid by going down this path.
>>
>>>
>>> Regards
>>> Jo
>>>
>>>
>>>
>>>
>>>
>>>
>>>>
>>>>
>>>>> Thanks & Regards
>>>>> Jo
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>>> With regards,
>>>>>>> *Manu*ranga Perera.
>>>>>>>
>>>>>>> phone : 071 7 70 20 50
>>>>>>> mail : m...@wso2.com
>>>>>>>

Re: [Architecture] APIM C5 - Pagination

2016-11-16 Thread Uvindra Dias Jayasinha
+NuwanD, Sanjeewa

On 16 November 2016 at 15:23, Uvindra Dias Jayasinha <uvin...@wso2.com>
wrote:

> Hi Jo,
>
> On 16 November 2016 at 14:47, Joseph Fonseka <jos...@wso2.com> wrote:
>
>> Hi Uvindra
>>
>> On Wed, Nov 16, 2016 at 1:48 PM, Uvindra Dias Jayasinha <uvin...@wso2.com
>> > wrote:
>>>
>>>
>>> Searching and sorting will be done at DB level via SQL. It is only the
>>> pagination of the returned result set that will be done at Java level.
>>>
>>
>> So does this mean that for different search requests you will be caching
>> separate sets of data? If so, will it exhaust the memory, and how
>> will API search perform under load? And since the servers will be running
>> on containers, available memory would be even less.
>>
>
> That's a good point; we definitely should not be caching different search
> request results. Will discuss this with the team and get back to you.
>
>>
>> Why would this be more efficient than Java? We achieve a similar thing by
>>> using a local cache, so there is no need to have an intermediate in-memory
>>> DB. What was proposed makes sense when you consider that we are dealing with
>>> only a very small amount of data at DB level now.
>>>
>>
>> Considering pagination is done using a DBMS, an in-memory DB will help us
>> work with only one data set, as well as needing to maintain only one set of
>> pagination queries.
>>
>
> Hmm, I still don't understand the difference between an in-memory DB and
> an in-memory cache. Technically, in both cases we are working with an
> in-memory dataset.
>
>>
>>
>>> In the future if we foresee a requirement for DB level pagination we can
>>> implement that within the DAO layer, but for now this is more than enough
>>> to meet our needs. This way we can avoid introducing unwanted complexity
>>> upfront since there is no rationale for doing so.
>>>
>>
>> Agreed, but if the complexity is in supporting pagination for all the DBMSs,
>> we can at least focus on adding DB-level pagination for commonly used DBMSs
>> (i.e. MySQL, Oracle).
>>
>
> It's not that supporting multiple DBs is too complex; we can do that, but
> it's an additional effort we can avoid by going down this path.
>
>>
>> Regards
>> Jo
>>
>>
>>
>>
>>
>>
>>>
>>>
>>>> Thanks & Regards
>>>> Jo
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>>> With regards,
>>>>>> *Manu*ranga Perera.
>>>>>>
>>>>>> phone : 071 7 70 20 50
>>>>>> mail : m...@wso2.com
>>>>>>
>>>>>> ___
>>>>>> Architecture mailing list
>>>>>> Architecture@wso2.org
>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Uvindra
>>>>>
>>>>> Mobile: 33962
>>>>>
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> --
>>>> *Joseph Fonseka*
>>>> WSO2 Inc.; http://wso2.com
>>>> lean.enterprise.middleware
>>>>
>>>> mobile: +94 772 512 430
>>>> skype: jpfonseka
>>>>
>>>> * <http://lk.linkedin.com/in/rumeshbandara>*
>>>>
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Uvindra
>>>
>>> Mobile: 33962
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>>
>> --
>> *Joseph Fonseka*
>> WSO2 Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> mobile: +94 772 512 430
>> skype: jpfonseka
>>
>> * <http://lk.linkedin.com/in/rumeshbandara>*
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Regards,
> Uvindra
>
> Mobile: 33962
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - Pagination

2016-11-16 Thread Uvindra Dias Jayasinha
Hi Jo,

On 16 November 2016 at 14:47, Joseph Fonseka <jos...@wso2.com> wrote:

> Hi Uvindra
>
> On Wed, Nov 16, 2016 at 1:48 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>>
>>
>> Searching and sorting will be done at DB level via SQL. It is only the
>> pagination of the returned result set that will be done at Java level.
>>
>
> So does this mean that for different search requests you will be caching
> separate sets of data? If so, will it exhaust the memory, and how
> will API search perform under load? And since the servers will be running
> on containers, available memory would be even less.
>

That's a good point; we definitely should not be caching different search
request results. Will discuss this with the team and get back to you.

>
> Why would this be more efficient than Java? We achieve a similar thing by
>> using a local cache, so there is no need to have an intermediate in-memory
>> DB. What was proposed makes sense when you consider that we are dealing with
>> only a very small amount of data at DB level now.
>>
>
> Considering pagination is done using a DBMS, an in-memory DB will help us
> work with only one data set, as well as needing to maintain only one set of
> pagination queries.
>

Hmm, I still don't understand the difference between an in-memory DB and an
in-memory cache. Technically, in both cases we are working with an in-memory
dataset.

>
>
>> In the future if we foresee a requirement for DB level pagination we can
>> implement that within the DAO layer, but for now this is more than enough
>> to meet our needs. This way we can avoid introducing unwanted complexity
>> upfront since there is no rationale for doing so.
>>
>
> Agreed, but if the complexity is in supporting pagination for all the DBMSs,
> we can at least focus on adding DB-level pagination for commonly used DBMSs
> (i.e. MySQL, Oracle).
>

It's not that supporting multiple DBs is too complex; we can do that, but
it's an additional effort we can avoid by going down this path.

>
> Regards
> Jo
>
>
>
>
>
>
>>
>>
>>> Thanks & Regards
>>> Jo
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>>>> With regards,
>>>>> *Manu*ranga Perera.
>>>>>
>>>>> phone : 071 7 70 20 50
>>>>> mail : m...@wso2.com
>>>>>
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> --
>>> *Joseph Fonseka*
>>> WSO2 Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> mobile: +94 772 512 430
>>> skype: jpfonseka
>>>
>>> * <http://lk.linkedin.com/in/rumeshbandara>*
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> --
> *Joseph Fonseka*
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 772 512 430
> skype: jpfonseka
>
> * <http://lk.linkedin.com/in/rumeshbandara>*
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - Pagination

2016-11-16 Thread Uvindra Dias Jayasinha
Hi Jo, my responses are inline.

On 16 November 2016 at 13:03, Joseph Fonseka <jos...@wso2.com> wrote:

> Hi Uvindra
>
> How do we paginate searched and sorted data? Searching and sorting will be
> hard without a DBMS's support. And I think we can come up with
> simpler solutions for the points you have mentioned and still stick with a
> DBMS.
>

Searching and sorting will be done at DB level via SQL. It is only the
pagination of the returned result set that will be done at Java level.

>
> On Wed, Nov 16, 2016 at 10:34 AM, Uvindra Dias Jayasinha <uvin...@wso2.com
> > wrote:
>
>>
>>>> 1. Display only the latest version of a given published API in the API
>>>> Store - Since the version is a VARCHAR column we cannot assume it is
>>>> numerical, so it's not straightforward to find the latest version of a given
>>>> API via SQL.
>>>>
>>>
> As a solution we can introduce a numeric version column which will be
> updated when the API is saved based on the version string given. With that
> update cost is high retrieve cost is low.
>

Here we are adding complexity just to be able to support this from the DB.
We don't know what versioning strategy the end user will use, so it's going
to be difficult to decide how we are going to populate this numeric version
column.

>
>
>>
>>>> 2. Validate an API's roles against a given user's roles to determine
>>>> eligibility to view the API - In situations where users have a large number
>>>> of roles assigned to them, we will need to pass all these roles in the WHERE
>>>> clause of the SELECT statement when filtering the result set, further
>>>> complicating the statement.
>>>>
>>>
If the visibility only depends on roles, I guess it is straightforward
to implement with SQL. When you mention a large number of roles, what
would be a rough amount to expect, and in which scenarios?
>

Agreed, it can be done at SQL level, but it's a lot easier to read the entire
result set without the WHERE clause for roles and then validate it. Think of
a user that has 1000 roles (a real scenario that was encountered): the WHERE
clause will need to check for all 1000 roles, which is a great way to slow
down the query. So this can get pretty heavy.
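
A rough sketch of what the Java-level check could look like (the class and
method names are illustrative, and the user's roles are assumed to come
from the user core):

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class VisibilityFilter {

    // Illustrative API summary: the allowed role names are stored
    // alongside the artifact in the APIM DB.
    public static class ApiSummary {
        final String name;
        final Set<String> allowedRoles;

        public ApiSummary(String name, Set<String> allowedRoles) {
            this.name = name;
            this.allowedRoles = allowedRoles;
        }
    }

    // Keep an API if it has no role restriction or if the user holds at
    // least one of its allowed roles. Probing a set of 1000 user roles in
    // memory is cheap compared to a 1000-term WHERE clause.
    public static List<ApiSummary> visibleTo(List<ApiSummary> apis, Set<String> userRoles) {
        return apis.stream()
                .filter(api -> api.allowedRoles.isEmpty()
                        || api.allowedRoles.stream().anyMatch(userRoles::contains))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        ApiSummary restricted = new ApiSummary("PizzaShack", Set.of("admin"));
        ApiSummary open = new ApiSummary("Weather", Set.of());
        // The user lacks "admin", so only "Weather" survives the filter.
        System.out.println(visibleTo(List.of(restricted, open), Set.of("subscriber")).get(0).name);
    }
}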


>
>>
>>>> Given that in the C5 tenancy model each tenant will have their own DB
>>>> instance, data volumes will be quite low. For example it will be unusual to
>>>> have 1000 APIs in the DB. So in this scenario it's a lot easier to read
>>>> the entire result set into memory at the back end and then paginate what is
>>>> returned to the caller at Java level, along with doing permission checks.
>>>> The advantages of this are,
>>>>
>>>
> Even though it's unusual, there are existing users with 700+ APIs, so we
> should be able to support it.
>

Looping over 10,000 results in Java and paginating can be done fairly
quickly, so this can even scale up to 10,000 APIs.


>
>> 1. Minimise the need to maintain vendor specific SQL for pagination
>>>> 2. Simplifies SQL statements
>>>>
>>>
> If the above is a big problem we can look at the possibility of using an
> in-memory DB (e.g. H2). Then we only have to maintain SQL queries for
> that. I guess that will be a lot more efficient than processing raw data
> using Java. We can load the data into the in-memory DB on demand.
>
Why would this be more efficient than Java? We achieve a similar thing by
using a local cache, so there is no need for an intermediate in-memory
DB. What was proposed makes sense when you consider that we are dealing with
only a very small amount of data at DB level now.

In the future if we foresee a requirement for DB level pagination we can
implement that within the DAO layer, but for now this is more than enough
to meet our needs. This way we can avoid introducing unwanted complexity
upfront since there is no rationale for doing so.


> Thanks & Regards
> Jo
>
>
>
>
>
>
>
>
> --
>>> With regards,
>>> *Manu*ranga Perera.
>>>
>>> phone : 071 7 70 20 50
>>> mail : m...@wso2.com
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> --
> *Joseph Fonseka*
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 772 512 430
> skype: jpfonseka
>
> * <http://lk.linkedin.com/in/rumeshbandara>*
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - Pagination

2016-11-15 Thread Uvindra Dias Jayasinha
By permissions I meant the role checks against API artifacts that are
stored in the APIM DB. We store the allowed roles (the role name as a
string) along with the artifact in the DB. These are APIM specific. For
example, if a user is doing a search for APIs by name and does not have the
required role assigned to view a given API in the APIM DB, then that
particular API should not be returned in the search results.

Sorry if I caused confusion by using the word permissions; we are only
working with roles in this case. Of course the permissions are defined
within the role, but we will not be going to that level since that is
handled by the user core. And the given user's roles will also be obtained
from the user core.

On 15 November 2016 at 18:44, Manuranga Perera <m...@wso2.com> wrote:

> Hi Jayanga,
> Doesn't this functionality overlaps with permission checking implemented
> for C5?
>
> On Tue, Nov 15, 2016 at 1:10 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Pagination is required whenever we return a collection of items to the
>> requester, and we use it widely as part of the REST API.
>>
>> Initially we thought of supporting pagination at DB level. This ensures
>> that only a limited dataset is read at a given time based on the offset and
>> limit values provided by the caller. To support this we need to maintain DB
>> vendor-specific SQL statements since pagination at DB level is
>> proprietary.
>>
>> But there are a few complications with some of our use cases when relying
>> on DB pagination.
>>
>> 1. Display only the latest version of a given published API in the API
>> Store - Since the version is a VARCHAR column we cannot assume it is
>> numerical, so it's not straightforward to find the latest version of a given
>> API via SQL. We can overcome this at Java level by implementing a
>> sophisticated String comparator.
>>
>> 2. Validate an API's roles against a given user's roles to determine
>> eligibility to view the API - In situations where users have a large number
>> of roles assigned to them, we will need to pass all these roles in the WHERE
>> clause of the SELECT statement when filtering the result set, further
>> complicating the statement.
>>
>> Given that in the C5 tenancy model each tenant will have their own DB
>> instance, data volumes will be quite low. For example it will be unusual to
>> have 1000 APIs in the DB. So in this scenario it's a lot easier to read
>> the entire result set into memory at the back end and then paginate what is
>> returned to the caller at Java level, along with doing permission checks.
>> The advantages of this are,
>>
>> 1. Minimise the need to maintain vendor specific SQL for pagination
>>
>> 2. Simplifies SQL statements
>>
>> 3. Easier to implement the above use cases in Java
>>
>> We can minimise the database access needed to read all the information by
>> having a local cache, with a timeout of about 30 seconds, which we can use
>> to serve requests. When a new API is added it can be added to the cache.
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> With regards,
> *Manu*ranga Perera.
>
> phone : 071 7 70 20 50
> mail : m...@wso2.com
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] APIM C5 - Pagination

2016-11-15 Thread Uvindra Dias Jayasinha
Pagination is required whenever we return a collection of items to the
requester, and we use it widely as part of the REST API.

Initially we thought of supporting pagination at DB level. This ensures
that only a limited dataset is read at a given time based on the offset and
limit values provided by the caller. To support this we need to maintain DB
vendor-specific SQL statements since pagination at DB level is
proprietary.

But there are a few complications with some of our use cases when relying
on DB pagination.

1. Display only the latest version of a given published API in the API
Store - Since the version is a VARCHAR column we cannot assume it is
numerical, so it's not straightforward to find the latest version of a given
API via SQL. We can overcome this at Java level by implementing a
sophisticated String comparator.

2. Validate an API's roles against a given user's roles to determine
eligibility to view the API - In situations where users have a large number
of roles assigned to them, we will need to pass all these roles in the WHERE
clause of the SELECT statement when filtering the result set, further
complicating the statement.

Given that in the C5 tenancy model each tenant will have their own DB
instance, data volumes will be quite low. For example it will be unusual to
have 1000 APIs in the DB. So in this scenario it's a lot easier to read
the entire result set into memory at the back end and then paginate what is
returned to the caller at Java level, along with doing permission checks.
The advantages of this are,

1. Minimise the need to maintain vendor specific SQL for pagination

2. Simplifies SQL statements

3. Easier to implement the above use cases in Java

We can minimise the database access needed to read all the information by
having a local cache, with a timeout of about 30 seconds, which we can use to
serve requests. When a new API is added it can be added to the cache.
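
A minimal sketch of the Java-level pagination described above (the helper
below is illustrative, not the actual DAO; it simply applies offset/limit
to an already-filtered, in-memory result set):

import java.util.Collections;
import java.util.List;

public class PaginationSketch {

    // Return the requested window of an in-memory result set, clamping
    // the offset and limit to the list bounds.
    static <T> List<T> paginate(List<T> results, int offset, int limit) {
        if (offset < 0 || limit <= 0 || offset >= results.size()) {
            return Collections.emptyList();
        }
        int end = Math.min(offset + limit, results.size());
        return results.subList(offset, end);
    }

    public static void main(String[] args) {
        List<String> apis = List.of("A", "B", "C", "D", "E");
        System.out.println(paginate(apis, 2, 2)); // prints [C, D]
        // The 30-second cache mentioned above would sit in front of the DB
        // read that produces the "apis" list; pagination itself is unchanged.
    }
}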


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - API Environments vs API Endpoints

2016-11-15 Thread Uvindra Dias Jayasinha
On 15 November 2016 at 16:39, Rukshan Premathunga <ruks...@wso2.com> wrote:

> Hi Uvindra,
>
> We can introduce a per-resource endpoint instead of a per-API endpoint with
> the new APIM. Do we need to support this as well?
>
> Thanks and Regards
>

Yes, that was a requirement with Synapse, but I don't know how this will be
represented in the new Integration Server.

>
> On Tue, Nov 15, 2016 at 4:27 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>>
>>
>> On 15 November 2016 at 16:20, Abimaran Kugathasan <abima...@wso2.com>
>> wrote:
>>
>>> Hi Uvindra,
>>>
>>> What could be the possible values for ENVIRONMENT_CATEGORY column?
>>>
>>
>> This could be Production or Sandbox
>>
>>
>>> Also, at a time only one environment will be supported by the APIM, so do
>>> we really need the AM_API_ENVIRONMENTS table?
>>>
>>
>> Yes in this case there would be no use for it. Also had a chat with
>> NuwanD and he mentioned that we may not have a requirement for storing env
>> information separately in C5, since we will be supporting a CLI to help
>> exchange APIs created between different deployment environments.
>>
>>
>>> On Tue, Nov 15, 2016 at 3:46 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> As part of the C5 effort we need to evaluate how we are to persist API
>>>> Endpoint and API Environment information. Here  is a brief introduction of
>>>> what each of these are.
>>>>
>>>> *API Endpoint*
>>>> The actual backend endpoint that an API created in APIM fronts. In C4
>>>> these were defined as Production and Sandbox endpoints. So at DB table
>>>> level we could represent this as follows,
>>>>
>>>> CREATE TABLE `AM_API_ENDPOINTS` (
>>>>   `API_ID` INTEGER,
>>>>   `ENVIRONMENT_CATEGORY` VARCHAR(30),
>>>>   `ENDPOINT_TYPE` VARCHAR(30),
>>>>   `IS_ENDPOINT_SECURED` BOOLEAN,
>>>>   `TPS` INTEGER,
>>>>   `AUTH_DIGEST` VARCHAR(30),
>>>>   `USERNAME` VARCHAR(255),
>>>>   `PASSWORD` VARCHAR(255)
>>>> );
>>>>
>>>> This naturally maps to our current concepts that already exist in C4.
>>>>
>>>> *API Environment*
>>>> This represents the different gateway environments across which a given API
>>>> can be deployed, such as Dev, QA, Production. So at DB table level we
>>>> could represent this as follows,
>>>>
>>>> CREATE TABLE `AM_API_ENVIRONMENTS` (
>>>>   `API_ID` INTEGER,
>>>>   `ENV_NAME` VARCHAR(255),
>>>>   `HTTP_URL` VARCHAR(255),
>>>>   `HTTPS_URL` VARCHAR(255),
>>>>   `APPEND_CONTEXT` BOOLEAN
>>>> );
>>>>
>>>>
>>>> Is there an overlap between these two concepts in the way we are
>>>> representing them here?
>>>>
>>>> Please give your feedback.
>>>>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>
>>>
>>>
>>> --
>>> Thanks
>>> Abimaran Kugathasan
>>> Senior Software Engineer - API Technologies
>>>
>>> Email : abima...@wso2.com
>>> Mobile : +94 773922820
>>>
>>> <http://stackoverflow.com/users/515034>
>>> <http://lk.linkedin.com/in/abimaran>
>>> <http://www.lkabimaran.blogspot.com/>  <https://github.com/abimarank>
>>> <https://twitter.com/abimaran>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - API Environments vs API Endpoints

2016-11-15 Thread Uvindra Dias Jayasinha
The ENDPOINT_TYPE refers to things such as HTTP Endpoint or WSDL Endpoint,
etc., which we supported in C4. But I don't know how this applies to C5.

On 15 November 2016 at 16:37, Abimaran Kugathasan <abima...@wso2.com> wrote:

> HI Uvindra
>
> On Tue, Nov 15, 2016 at 4:27 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>>
>>
>> On 15 November 2016 at 16:20, Abimaran Kugathasan <abima...@wso2.com>
>> wrote:
>>
>>> Hi Uvindra,
>>>
>>> What could be the possible values for ENVIRONMENT_CATEGORY column?
>>>
>>
>> This could be Production or Sandbox
>>
>
> I hope the values for ENDPOINT_TYPE should be Production or Sandbox. IMO,
> Production and Sandbox aren't Environments but Endpoint types or categories.
>
>
>>
>>
>>> Also, at a time only one environment will be supported by the APIM, so do
>>> we really need the AM_API_ENVIRONMENTS table?
>>>
>>
>> Yes in this case there would be no use for it. Also had a chat with
>> NuwanD and he mentioned that we may not have a requirement for storing env
>> information separately in C5, since we will be supporting a CLI to help
>> exchange APIs created between different deployment environments.
>>
>>
>>> On Tue, Nov 15, 2016 at 3:46 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> As part of the C5 effort we need to evaluate how we are to persist API
>>>> Endpoint and API Environment information. Here  is a brief introduction of
>>>> what each of these are.
>>>>
>>>> *API Endpoint*
>>>> The actual backend endpoint that an API created in APIM fronts. In C4
>>>> these were defined as Production and Sandbox endpoints. So at DB table
>>>> level we could represent this as follows,
>>>>
>>>> CREATE TABLE `AM_API_ENDPOINTS` (
>>>>   `API_ID` INTEGER,
>>>>   `ENVIRONMENT_CATEGORY` VARCHAR(30),
>>>>   `ENDPOINT_TYPE` VARCHAR(30),
>>>>   `IS_ENDPOINT_SECURED` BOOLEAN,
>>>>   `TPS` INTEGER,
>>>>   `AUTH_DIGEST` VARCHAR(30),
>>>>   `USERNAME` VARCHAR(255),
>>>>   `PASSWORD` VARCHAR(255)
>>>> );
>>>>
>>>> This naturally maps to our current concepts that already exist in C4.
>>>>
>>>> *API Environment*
>>>> This represents the different gateway environments across which a given API
>>>> can be deployed, such as Dev, QA, Production. So at DB table level we
>>>> could represent this as follows,
>>>>
>>>> CREATE TABLE `AM_API_ENVIRONMENTS` (
>>>>   `API_ID` INTEGER,
>>>>   `ENV_NAME` VARCHAR(255),
>>>>   `HTTP_URL` VARCHAR(255),
>>>>   `HTTPS_URL` VARCHAR(255),
>>>>   `APPEND_CONTEXT` BOOLEAN
>>>> );
>>>>
>>>>
>>>> Is there an overlap between these two concepts in the way we are
>>>> representing them here?
>>>>
>>>> Please give your feedback.
>>>>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>
>>>
>>>
>>> --
>>> Thanks
>>> Abimaran Kugathasan
>>> Senior Software Engineer - API Technologies
>>>
>>> Email : abima...@wso2.com
>>> Mobile : +94 773922820
>>>
>>> <http://stackoverflow.com/users/515034>
>>> <http://lk.linkedin.com/in/abimaran>
>>> <http://www.lkabimaran.blogspot.com/>  <https://github.com/abimarank>
>>> <https://twitter.com/abimaran>
>>>
>>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>
>
>
> --
> Thanks
> Abimaran Kugathasan
> Senior Software Engineer - API Technologies
>
> Email : abima...@wso2.com
> Mobile : +94 773922820
>
> <http://stackoverflow.com/users/515034>
> <http://lk.linkedin.com/in/abimaran>
> <http://www.lkabimaran.blogspot.com/>  <https://github.com/abimarank>
> <https://twitter.com/abimaran>
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - API Environments vs API Endpoints

2016-11-15 Thread Uvindra Dias Jayasinha
On 15 November 2016 at 16:20, Abimaran Kugathasan <abima...@wso2.com> wrote:

> Hi Uvindra,
>
> What could be the possible values for ENVIRONMENT_CATEGORY column?
>

This could be Production or Sandbox


> Also, at a time only one environment will be supported by the APIM, so do we
> really need the AM_API_ENVIRONMENTS table?
>

Yes in this case there would be no use for it. Also had a chat with NuwanD
and he mentioned that we may not have a requirement for storing env
information separately in C5, since we will be supporting a CLI to help
exchange APIs created between different deployment environments.


> On Tue, Nov 15, 2016 at 3:46 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> As part of the C5 effort we need to evaluate how we are to persist API
>> Endpoint and API Environment information. Here  is a brief introduction of
>> what each of these are.
>>
>> *API Endpoint*
>> The actual backend endpoint that an API created in APIM fronts. In C4
>> these were defined as Production and Sandbox endpoints. So at DB table
>> level we could represent this as follows,
>>
>> CREATE TABLE `AM_API_ENDPOINTS` (
>>   `API_ID` INTEGER,
>>   `ENVIRONMENT_CATEGORY` VARCHAR(30),
>>   `ENDPOINT_TYPE` VARCHAR(30),
>>   `IS_ENDPOINT_SECURED` BOOLEAN,
>>   `TPS` INTEGER,
>>   `AUTH_DIGEST` VARCHAR(30),
>>   `USERNAME` VARCHAR(255),
>>   `PASSWORD` VARCHAR(255)
>> );
>>
>> This naturally maps to our current concepts that already exist in C4.
>>
>> *API Environment*
>> This represents the different gateway environments across which a given API
>> can be deployed, such as Dev, QA, Production. So at DB table level we
>> could represent this as follows,
>>
>> CREATE TABLE `AM_API_ENVIRONMENTS` (
>>   `API_ID` INTEGER,
>>   `ENV_NAME` VARCHAR(255),
>>   `HTTP_URL` VARCHAR(255),
>>   `HTTPS_URL` VARCHAR(255),
>>   `APPEND_CONTEXT` BOOLEAN
>> );
>>
>>
>> Is there an overlap between these two concepts in the way we are
>> representing them here?
>>
>> Please give your feedback.
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>
>
>
> --
> Thanks
> Abimaran Kugathasan
> Senior Software Engineer - API Technologies
>
> Email : abima...@wso2.com
> Mobile : +94 773922820
>
> <http://stackoverflow.com/users/515034>
> <http://lk.linkedin.com/in/abimaran>
> <http://www.lkabimaran.blogspot.com/>  <https://github.com/abimarank>
> <https://twitter.com/abimaran>
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] APIM C5 - API Environments vs API Endpoints

2016-11-15 Thread Uvindra Dias Jayasinha
As part of the C5 effort we need to evaluate how we are to persist API
Endpoint and API Environment information. Here  is a brief introduction of
what each of these are.

*API Endpoint*
The actual backend endpoint that an API created in APIM fronts. In C4 these
were defined as Production and Sandbox endpoints. So at DB table level we
could represent this as follows,

CREATE TABLE `AM_API_ENDPOINTS` (
  `API_ID` INTEGER,
  `ENVIRONMENT_CATEGORY` VARCHAR(30),
  `ENDPOINT_TYPE` VARCHAR(30),
  `IS_ENDPOINT_SECURED` BOOLEAN,
  `TPS` INTEGER,
  `AUTH_DIGEST` VARCHAR(30),
  `USERNAME` VARCHAR(255),
  `PASSWORD` VARCHAR(255)
);

This naturally maps to our current concepts that already exist in C4.

*API Environment*
This represents the different gateway environments across which a given API can
be deployed, such as Dev, QA, Production. So at DB table level we could
represent this as follows,

CREATE TABLE `AM_API_ENVIRONMENTS` (
  `API_ID` INTEGER,
  `ENV_NAME` VARCHAR(255),
  `HTTP_URL` VARCHAR(255),
  `HTTPS_URL` VARCHAR(255),
  `APPEND_CONTEXT` BOOLEAN
);


Is there an overlap between these two concepts in the way we are
representing them here?

Please give your feedback.

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - Schema for handling resources stored as blobs

2016-11-09 Thread Uvindra Dias Jayasinha
On 9 November 2016 at 12:36, Lahiru Cooray <lahi...@wso2.com> wrote:

>
>
> On Tue, Nov 8, 2016 at 1:23 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>>
>>
>> On 8 November 2016 at 11:10, Lahiru Cooray <lahi...@wso2.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 3, 2016 at 4:01 PM, Uvindra Dias Jayasinha <uvin...@wso2.com
>>> > wrote:
>>>
>>>> Hi All,
>>>>
>>>> Currently APIs have a few resources such as the Swagger File, Optional WSDL
>>>> file, Related Documents file and an Optional Thumbnail image that need to
>>>> be stored as blobs in the DB.
>>>>
>>>> Initially we thought of having separate tables to store these
>>>> resources, but what if we have a single generic resource table to store all
>>>> these?
>>>>
>>>> We could have schema such as below for the generic resource table
>>>>
>>>>
>>>> Since we previously stored our resources in the registry, a similar
>>>> generic schema was used to store all such resources by the registry itself.
>>>> So anything that is not a text data type can be considered as a BLOB.
>>>>
>>>> The advantages of doing this are,
>>>>
>>>> 1. Can manage all API related resources from a central table without
>>>> having to define custom tables for each resource.
>>>>
>>>  +1
>>>
>>>> 2. When an API is deleted it's very easy to locate and remove all the
>>>> resources related to it
>>>>
>>>  +1
>>>
>>>> 3. When a new version of an API is created it's very easy to copy over
>>>> the resources associated with the previous version to the new one.
>>>>
>>> Do we have a new API_ID for each version, or do we have multiple version
>>> numbers against an API_ID? In the latter case we'd need to maintain the
>>> Version as another column as well.
>>>
>>
>> A new API version will have a new API_ID.
>>
>>>
>>>
>>>> WDYT?
>>>> ​
>>>>
>>> +1 for the idea.
>>> Also, does DATA_TYPE mean the file extension? If so, I suggest renaming
>>> it and also keeping the file name as another column.
>>>
>>
In some cases we are saving URLs with the DATA_TYPE column as TEXT, so those
aren't files. To be generic it's better to keep this as DATA_TYPE.
>>
>
> In that case I have a different suggestion. I'm not sure if maintaining
> unrelated data in the same column is a good practice.
> Can't we maintain the file type (eg: binary or inline) in the resource
> types table which Akalanka suggested?
> And in this transaction table, if the file type is inline we could store
> the URL in a separate column, and if the file type is binary we could
> store the binary value/file name/extension in separate columns.
>
As mentioned earlier, we don't store the data in the same column. There is a
VARCHAR column (RESOURCE_TEXT_VALUE) and a BLOB column
(RESOURCE_BINARY_VALUE). We are using a separate table to store the
type as Akalanka suggested; below are the tables,

CREATE TABLE `AM_RESOURCE_TYPES` (
  `RESOURCE_TYPE_ID` INTEGER AUTO_INCREMENT,
  `RESOURCE_TYPE_NAME` VARCHAR(255),
  PRIMARY KEY (`RESOURCE_TYPE_ID`)
);

CREATE TABLE `AM_API_RESOURCES` (
  `RESOURCE_ID` INTEGER AUTO_INCREMENT,
  `API_ID` INTEGER,
  `RESOURCE_TYPE_ID` INTEGER,
  `DATA_TYPE` VARCHAR(255),
  `RESOURCE_TEXT_VALUE` VARCHAR(1024),
  `RESOURCE_BINARY_VALUE` BLOB,
  PRIMARY KEY (`RESOURCE_ID`),
  FOREIGN KEY (`API_ID`) REFERENCES `AM_API`(`API_ID`) ON UPDATE CASCADE ON
DELETE CASCADE,
  FOREIGN KEY (`RESOURCE_TYPE_ID`) REFERENCES
`AM_RESOURCE_TYPES`(`RESOURCE_TYPE_ID`) ON UPDATE CASCADE ON DELETE CASCADE
);
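
As a usage illustration, here is a hedged JDBC sketch against the tables
above (the class and method names, the resource-type seeding and the
connection handling are assumptions, not part of the proposal):

import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ApiResourceDao {

    // Assumes AM_RESOURCE_TYPES has been seeded with rows such as
    // 'SWAGGER' and 'THUMBNAIL', whose IDs are passed in as typeId.

    // Store a text resource (e.g. a Swagger definition or a URL).
    public void addTextResource(Connection conn, int apiId, int typeId,
                                String text) throws SQLException {
        String sql = "INSERT INTO AM_API_RESOURCES "
                + "(API_ID, RESOURCE_TYPE_ID, RESOURCE_TEXT_VALUE) VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, apiId);
            ps.setInt(2, typeId);
            ps.setString(3, text);
            ps.executeUpdate();
        }
    }

    // Store a binary resource (e.g. a thumbnail image) as a BLOB.
    public void addBinaryResource(Connection conn, int apiId, int typeId,
                                  InputStream blob) throws SQLException {
        String sql = "INSERT INTO AM_API_RESOURCES "
                + "(API_ID, RESOURCE_TYPE_ID, RESOURCE_BINARY_VALUE) VALUES (?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, apiId);
            ps.setInt(2, typeId);
            ps.setBinaryStream(3, blob);
            ps.executeUpdate();
        }
    }
}
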

>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> *Lahiru Cooray*
>>> Software Engineer
>>> WSO2, Inc.;http://wso2.com/
>>> lean.enterprise.middleware
>>>
>>> Mobile: +94 715 654154
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Lahiru Cooray*
> Software Engineer
> WSO2, Inc.;http://wso2.com/
> lean.enterprise.middleware
>
> Mobile: +94 715 654154
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM C5 - Schema for handling resources stored as blobs

2016-11-07 Thread Uvindra Dias Jayasinha
On 8 November 2016 at 11:10, Lahiru Cooray <lahi...@wso2.com> wrote:

>
>
> On Thu, Nov 3, 2016 at 4:01 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Hi All,
>>
>> Currently APIs have a few resources such as the Swagger File, Optional WSDL
>> file, Related Documents file and an Optional Thumbnail image that need to
>> be stored as blobs in the DB.
>>
>> Initially we thought of having separate tables to store these resources,
>> but what if we have a single generic resource table to store all these?
>>
>> We could have schema such as below for the generic resource table
>>
>>
>> Since we previously stored our resources in the registry, a similar
>> generic schema was used to store all such resources by the registry itself.
>> So anything that is not a text data type can be considered as a BLOB.
>>
>> The advantages of doing this are,
>>
>> 1. Can manage all API related resources from a central table without
>> having to define custom tables for each resource.
>>
>  +1
>
>> 2. When an API is deleted it's very easy to locate and remove all the
>> resources related to it
>>
>  +1
>
>> 3. When a new version of an API is created it's very easy to copy over the
>> resources associated with the previous version to the new one.
>>
> Do we have a new API_ID for each version, or do we have multiple version
> numbers against an API_ID? In the latter case we'd need to maintain the
> Version as another column as well.
>

A new API version will have a new API_ID.

>
>
>> WDYT?
>> ​
>>
> +1 for the idea.
> Also, does DATA_TYPE mean the file extension? If so, I suggest renaming
> it and also keeping the file name as another column.
>

In some cases we are saving URLs with the DATA_TYPE column as TEXT, so those
aren't files. To be generic it's better to keep this as DATA_TYPE.

>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Lahiru Cooray*
> Software Engineer
> WSO2, Inc.;http://wso2.com/
> lean.enterprise.middleware
>
> Mobile: +94 715 654154
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] APIM C5 - Schema for handling resources stored as blobs

2016-11-03 Thread Uvindra Dias Jayasinha
Hi All,

Currently APIs have a few resources such as the Swagger File, Optional WSDL
file, Related Documents file and an Optional Thumbnail image that need to
be stored as blobs in the DB.

Initially we thought of having separate tables to store these resources,
but what if we have a single generic resource table to store all these?

We could have schema such as below for the generic resource table


Since we previously stored our resources in the registry, a similar
generic schema was used to store all such resources by the registry itself.
So anything that is not a text data type can be considered as a BLOB.

The advantages of doing this are,

1. Can manage all API related resources from a central table without having
to define custom tables for each resource.

2. When an API is deleted it's very easy to locate and remove all the
resources related to it.

3. When a new version of an API is created it's very easy to copy over the
resources associated with the previous version to the new one.

WDYT?

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM DAO Interface usage for C5

2016-11-03 Thread Uvindra Dias Jayasinha
HI Ruwan,

The source you mentioned above supports what I'm proposing: the main reason
to use DI is to make testing practically possible. We suffered previously
because we couldn't do this.

I did do some research on both Spring and Guice, and as you said there are
some practical issues with using these sometimes (not unlike issues we face
with OSGi). So maybe there is an argument for not using a particular
framework/library for this purpose, due to the cons that might be introduced.

But to be clear, using this as a design pattern is a must. It doesn't matter
what you call it, DI or something else. Designing classes to have hard
dependencies is the way to ensure that unit testing is not a viable option.
All DI proposes is that you don't do this. I have personally used this
pattern in other languages without the help of any framework and have
benefited greatly in being able to test code more thoroughly.
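
For illustration, a minimal hand-wired sketch of the pattern (no framework;
the interface and class names below are made up): the component receives
its DAO through the constructor, so a test can pass in a stub instead of a
real DB.

interface ApiDAO {
    String getApiName(int apiId);
}

class ApiPublisher {
    private final ApiDAO dao;

    // The dependency is injected rather than constructed internally,
    // so unit tests can supply an in-memory test double.
    ApiPublisher(ApiDAO dao) {
        this.dao = dao;
    }

    String describe(int apiId) {
        return "API: " + dao.getApiName(apiId);
    }
}

public class DiSketch {
    public static void main(String[] args) {
        ApiDAO stub = apiId -> "PizzaShack";       // test double, no DB needed
        ApiPublisher publisher = new ApiPublisher(stub);
        System.out.println(publisher.describe(1)); // prints: API: PizzaShack
    }
}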


On 3 November 2016 at 13:01, Ruwan Abeykoon <ruw...@wso2.com> wrote:

> Hi Uvindra,
> DI is good for rapid development of specific applications, by taking
> various ready-made components. IMO the DI pattern was popularized by the
> .NET way of doing things.
> There are many practical issues with DI, and as a platform provider, we
> should not use DI in our platform. We can be a DI container for our
> extensions, but that is a different story.
>
> Please see [1], [2] for not using DI. Having used Spring extensively
> couple of years back, I would recommend steer away from DI while building
> our platforms.
>
> [1] http://www.tonymarston.net/php-mysql/dependency-injection-is-evil.html
> [2] http://www.yegor256.com/2014/10/03/di-containers-are-evil.html
>
> Cheers,
> Ruwan
>
> On Thu, Nov 3, 2016 at 12:01 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> I think there is a misunderstanding about what I meant by DI in this
>> case.
>>
>> OSGi addresses runtime wiring of components. What I mean by DI here is
>> its application as a design pattern [1]. This is not about unit testing the
>> DAO itself. It's about unit testing other components that use the DAO.
>>
>> You don't actually need any frameworks for doing DI because it's a design
>> pattern. We are currently doing it by hand. A framework would just make it
>> slightly easier to manage, but it's not essential. If we are serious about
>> testing we need to take DI into consideration when designing our classes.
>>
>> [1] http://martinfowler.com/articles/dipInTheWild.html
>>
>> On 1 November 2016 at 18:42, Ruwan Abeykoon <ruw...@wso2.com> wrote:
>>
>>> Hi All,
>>> If this is about unit testing on DAO, we are also experimenting with
>>> simple JDBC approach[1]. No Dependency Injection.
>>> We could test the DAO, the SQL correctness etc. because there is no Mock
>>> objects.
>>>
>>> I am not in favor of DI on developing a platform as we have no control
>>> over which get wired at the runtime. OSGI is already a kind of "dependency
>>> injection" and we have more than enough issues with that.
>>>
>>> [1] https://github.com/wso2/carbon-identity-providers/blob/m
>>> aster/components/identity-provider/org.wso2.carbon.identity.
>>> provider/src/test/java/org/wso2/carbon/identity/provider/
>>> dao/IdentityProviderDAOTest.java
>>>
>>> Cheers,
>>> Ruwan
>>>
>>>
>>> On Tue, Nov 1, 2016 at 4:10 PM, Manuranga Perera <m...@wso2.com> wrote:
>>>
>>>> I like Guice as well, but since we already have OSGi, (if we do have
>>>> OSGi) shouldn't we leverage that ?
>>>>
>>>> On Tue, Nov 1, 2016 at 5:37 AM, Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>
>>>>>
>>>>>
>>>>>> Spring too supports dependency injection (since its first release in
>>>>>> early 2000s), anyone knows the differences between Spring and Guice?
>>>>>>
>>>>>>
>>>>>> I checked up on this, mainly Spring was the alternative to the
>>>>> bloated JavaEE. So its a fully featured framework with lots of stuff
>>>>> including the ability to do dependency injection.
>>>>>
>>>>> Guice is a light weight library that only focuses on DI. So if you
>>>>> want just DI, Guice is nice and simple as opposed to a heavier framework
>>>>> like Spring that has a lot more in it.
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Uvindra
>>>>>
>>>>> Mobile: 33962

Re: [Architecture] APIM DAO Interface usage for C5

2016-11-03 Thread Uvindra Dias Jayasinha
I think there is a misunderstanding about what I meant by DI in this case.

OSGi addresses runtime wiring of components. What I mean by DI here is its
application as a design pattern[1]. This is not about unit testing the DAO
itself. It's about unit testing other components that use the DAO.

You don't need any frameworks for doing DI, because it's a design pattern. We
are currently doing it by hand. A framework would just make it slightly easier
to manage, but it's not essential. If we are serious about testing we need to
take DI into consideration when designing our classes.

[1] http://martinfowler.com/articles/dipInTheWild.html
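
To make this concrete, here is a minimal sketch of the kind of unit test this
enables, using Mockito (the DAO method names and the API constructor used here
are assumed purely for illustration):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class APIConsumerImplTest {

    @Test
    public void returnsApiFromDAO() throws Exception {
        // Mock the heavyweight DAO dependencies; no real database needed
        ApiDAO apiDAO = mock(ApiDAO.class);
        ApplicationDAO appDAO = mock(ApplicationDAO.class);
        APISubscriptionDAO subscriptionDAO = mock(APISubscriptionDAO.class);

        API expected = new API("PizzaAPI", "1.0.0"); // hypothetical constructor
        when(apiDAO.getAPI("api-id")).thenReturn(expected);

        // Inject the mocks through the constructor instead of a DAOFactory call
        APIConsumerImpl consumer =
                new APIConsumerImpl(apiDAO, appDAO, subscriptionDAO);

        assertEquals(expected, consumer.getAPI("api-id"));
    }
}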

On 1 November 2016 at 18:42, Ruwan Abeykoon <ruw...@wso2.com> wrote:

> Hi All,
> If this is about unit testing on DAO, we are also experimenting with
> simple JDBC approach[1]. No Dependency Injection.
> We could test the DAO, the SQL correctness etc. because there is no Mock
> objects.
>
> I am not in favor of DI on developing a platform as we have no control
> over which get wired at the runtime. OSGI is already a kind of "dependency
> injection" and we have more than enough issues with that.
>
> [1] https://github.com/wso2/carbon-identity-providers/
> blob/master/components/identity-provider/org.wso2.
> carbon.identity.provider/src/test/java/org/wso2/carbon/
> identity/provider/dao/IdentityProviderDAOTest.java
>
> Cheers,
> Ruwan
>
>
> On Tue, Nov 1, 2016 at 4:10 PM, Manuranga Perera <m...@wso2.com> wrote:
>
>> I like Guice as well, but since we already have OSGi, (if we do have
>> OSGi) shouldn't we leverage that ?
>>
>> On Tue, Nov 1, 2016 at 5:37 AM, Uvindra Dias Jayasinha <uvin...@wso2.com>
>> wrote:
>>
>>>
>>>
>>>> Spring too supports dependency injection (since its first release in
>>>> early 2000s), anyone knows the differences between Spring and Guice?
>>>>
>>>>
>>>> I checked up on this, mainly Spring was the alternative to the bloated
>>> JavaEE. So its a fully featured framework with lots of stuff including the
>>> ability to do dependency injection.
>>>
>>> Guice is a light weight library that only focuses on DI. So if you want
>>> just DI, Guice is nice and simple as opposed to a heavier framework like
>>> Spring that has a lot more in it.
>>>
>>> --
>>> Regards,
>>> Uvindra
>>>
>>> Mobile: 33962
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> With regards,
>> *Manu*ranga Perera.
>>
>> phone : 071 7 70 20 50
>> mail : m...@wso2.com
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM DAO Interface usage for C5

2016-10-31 Thread Uvindra Dias Jayasinha
>
> Spring too supports dependency injection (since its first release in early
> 2000s), anyone knows the differences between Spring and Guice?
>
>
I checked up on this: mainly, Spring was the alternative to the bloated
JavaEE. So it's a fully featured framework with lots of stuff, including the
ability to do dependency injection.

Guice is a lightweight library that only focuses on DI. So if you want
just DI, Guice is nice and simple as opposed to a heavier framework like
Spring that has a lot more in it.

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM DAO Interface usage for C5

2016-10-31 Thread Uvindra Dias Jayasinha
Hi Bhathiya,

The issue you have highlighted (avoiding a fat constructor) is addressed by
Google Guice, so you don't need to have multiple constructors[1] since Guice
will take care of building the object for you. Having an alternative
constructor like the above introduces 3 lines of code that are not unit
testable :)

Please watch[2] for a demonstration of practical usage.

[1] https://github.com/google/guice/wiki/GettingStarted
[2] https://www.youtube.com/watch?v=hBVJbzAagfs
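
Roughly, the wiring would look like this (a minimal sketch assuming the DAO
interfaces discussed in this thread; the *Impl bindings are placeholders):

// APIMModule.java
import com.google.inject.AbstractModule;

public class APIMModule extends AbstractModule {
    @Override
    protected void configure() {
        // Tell Guice which concrete implementation backs each DAO interface
        bind(ApiDAO.class).to(ApiDAOImpl.class);
        bind(ApplicationDAO.class).to(ApplicationDAOImpl.class);
        bind(APISubscriptionDAO.class).to(APISubscriptionDAOImpl.class);
    }
}

// APIConsumerImpl.java - the single constructor is annotated and Guice
// supplies the dependencies
import com.google.inject.Inject;

public class APIConsumerImpl {
    private final ApiDAO apiDAO;
    private final ApplicationDAO appDAO;
    private final APISubscriptionDAO apiSubscriptionDAO;

    @Inject
    public APIConsumerImpl(ApiDAO apiDAO, ApplicationDAO appDAO,
                           APISubscriptionDAO apiSubscriptionDAO) {
        this.apiDAO = apiDAO;
        this.appDAO = appDAO;
        this.apiSubscriptionDAO = apiSubscriptionDAO;
    }
}

// At the single composition root (import com.google.inject.Guice/Injector):
Injector injector = Guice.createInjector(new APIMModule());
APIConsumerImpl consumer = injector.getInstance(APIConsumerImpl.class);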

On 31 October 2016 at 14:56, Bhathiya Jayasekara <bhath...@wso2.com> wrote:

> Hi Uvindra,
>
> On Mon, Oct 31, 2016 at 1:54 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> +architecture
>>
>> This initially began as an internal APIM discussion for avoiding hard
>> coded dependencies within the code we are writing for C5 to make them more
>> unit testable(Being able to do dependency injection[1]). But as suggested
>> by Akila I think this is a good thing to talk about publicly as all
>> products can benefit.
>>
>> Akila suggested using something like Google Juice[2] for this purpose. Im
>> +1 for using this. Would like to here your thoughts on this.
>>
>> [1] https://www.youtube.com/watch?v=IKD2-MAkXyQ
>> [2] https://github.com/google/guice
>>
>>
>> On 31 October 2016 at 13:06, Uvindra Dias Jayasinha <uvin...@wso2.com>
>> wrote:
>>
>>>
>>>
>>>> *The solution*
>>>>> Pass an instance of the interface to the heavy weight resource as a
>>>>> constructor parameter to the class that needs to use it. So in this case 
>>>>> it
>>>>> should be,
>>>>>
>>>>> public APIConsumerImpl(ApiDAO apiDAO, ApplicationDAO appDAO,
>>>>> APISubscriptionDAO apiSubscriptionDAO) {
>>>>> this.apiDAO = apiDAO;
>>>>> this.appDAO = appDAO;
>>>>> this.apiSubscriptionDAO = apiSubscriptionDAO;
>>>>> }
>>>>>
>>>>> Passing it this way means we can now provide a mock implementation of
>>>>> the DOA interfaces when constructing the Consumer and Provider classes
>>>>> using a library such as mockito[1] and easily write unit tests.
>>>>>
>>>>
> +1 for above constructor to be used in unit tests. But I don't think we
> can use the same for real use. For that, we can have a default constructor
> as well, so that clients don't need to worry about injecting DAO
> dependencies.
>
> public APIConsumerImpl() {
> this.apiDAO = DAOFactory.getApiDAO();
> this.appDAO = DAOFactory.getAppDAO();
> this.apiSubscriptionDAO = DAOFactory.getSubscriptionDAO();
> }
>
> Thanks,
>
> --
> *Bhathiya Jayasekara*
> *Senior Software Engineer,*
> *WSO2 inc., http://wso2.com <http://wso2.com>*
>
> *Phone: +94715478185 <%2B94715478185>*
> *LinkedIn: http://www.linkedin.com/in/bhathiyaj
> <http://www.linkedin.com/in/bhathiyaj>*
> *Twitter: https://twitter.com/bhathiyax <https://twitter.com/bhathiyax>*
> *Blog: http://movingaheadblog.blogspot.com
> <http://movingaheadblog.blogspot.com/>*
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM DAO Interface usage for C5

2016-10-31 Thread Uvindra Dias Jayasinha
+architecture

This initially began as an internal APIM discussion about avoiding hard coded
dependencies within the code we are writing for C5, to make it more unit
testable (being able to do dependency injection[1]). But as suggested by
Akila I think this is a good thing to talk about publicly as all products
can benefit.

Akila suggested using something like Google Guice[2] for this purpose. I'm
+1 for using this. Would like to hear your thoughts on this.

[1] https://www.youtube.com/watch?v=IKD2-MAkXyQ
[2] https://github.com/google/guice

On 31 October 2016 at 13:06, Uvindra Dias Jayasinha <uvin...@wso2.com>
wrote:

>
> Hi Sajith,
>
> On 31 October 2016 at 12:34, Sajith Kariyawasam <saj...@wso2.com> wrote:
>
>> Hi Uvindra,
>>
>> On Mon, Oct 31, 2016 at 10:37 AM, Uvindra Dias Jayasinha <
>> uvin...@wso2.com> wrote:
>>
>>> Hi All,
>>>
>>> I'm addressing the DAO interface usage for C5 in this mail, but this
>>> applies to all new code we write in C5 so it's a good thing for us all to
>>> note down.
>>>
>>> Those of you working on the APIConsumerImpl and APIProviderImpl for C5
>>> were asking what is the preferable way to use the DAO interface in that
>>> code. At the time I said you can use it however you want to, but thinking
>>> about this I think we should all conform to the following usage.
>>>
>>> You should not have any hard coded dependency on the DAO interface
>>> within the code of APIConsumerImpl and APIProviderImpl. That means doing,
>>>
>>> ApiDAO apiDAO = DAOFactory.getApiDAO();
>>>
>>> inside the Consumer or Provider classes is a *no no*.
>>>
>>> Ok now let me explain why,
>>>
>>> *The problem*
>>> If an external dependency is hard coded inside a given class you cannot
>>> unit test that class without having that external dependency present. In
>>> this case the external dependency is the DAO, which is an interface to the
>>> database. If it's hard coded in the Consumer or Provider classes we cannot
>>> unit test the Consumer/Provider classes without configuring a real
>>> database. Heavyweight resources such as databases or file systems make
>>> unit testing difficult to do and discourage it.
>>>
>>>
>> can't we just *mock(DAOFactory.class)* ?
>>
>
> We can mock it provided we haven't already constructed an instance in our
> code. But calling DAOFactory.getApiDAO() in our code means we have already
> committed to the implementation provided by the factory and cannot use a
> mocked implementation.
>
>
>> At some point of time of the code, we *have* to invoke
>> DAOFactory.getApiDAO() methods to do the object creation, meaning the class
>> we write that code is not unit testable ?
>>
>
> Yes, that code will not be unit testable. The goal is to reduce the amount
> of non unit testable code as much as possible.
>
>
>>
>> Shouldn't this solution introduce a fat constructor?
>>
>
> It does but the advantage we get from doing things this way far outweighs
> the constructor being fat. The consumer and Provider classes will only be
> constructed in one place I believe so that will not be an issue. We can
> also provide a helper method that calls that fat constructor internally to
> make it easier for the caller.
>
>
>> *The solution*
>>> Pass an instance of the interface to the heavy weight resource as a
>>> constructor parameter to the class that needs to use it. So in this case it
>>> should be,
>>>
>>> public APIConsumerImpl(ApiDAO apiDAO, ApplicationDAO appDAO,
>>> APISubscriptionDAO apiSubscriptionDAO) {
>>> this.apiDAO = apiDAO;
>>> this.appDAO = appDAO;
>>> this.apiSubscriptionDAO = apiSubscriptionDAO;
>>> }
>>>
>>> Passing it this way means we can now provide a mock implementation of
>>> the DAO interfaces when constructing the Consumer and Provider classes
>>> using a library such as Mockito[1] and easily write unit tests.
>>>
>>> The reason why we had to rely on integration tests heavily earlier even
>>> to do the simplest form of testing was because we did not follow this
>>> practice of not having hard coded dependencies. Please keep this principle
>>> in mind when designing your classes for C5 or our testing strategy will not
>>> work.
>>>
>>> I'm sure some of you will have questions, so please ask.
>>>
>>> [1] http://site.mockito.org/
>>>
>>> --
>>> Regards,
>>> Uvindra
>>>
>>> Mobile: 33962
>>>
>>
>>
>>
>> --
>> Sajith Kariyawasam
>> *Associate Tech Lead*
>> *WSO2 Inc.; http://wso2.com <http://wso2.com/>*
>> *Committer and PMC member, Apache Stratos *
>> *AMIE (SL)*
>> *Mobile: 0772269575*
>>
>
>
>
> --
> Regards,
> Uvindra
>
> Mobile: 33962
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Review DB table audit fields for API Manager C5

2016-10-24 Thread Uvindra Dias Jayasinha
The flat file approach is also an option for us, thanks for suggesting it
Akila. At the moment we don't have a feature that makes use of audit
information and it is not a priority for us to implement. So we will
revisit our options later on if we do decide to implement this.



On 23 October 2016 at 09:51, Akila Ravihansa Perera <raviha...@wso2.com>
wrote:

> Hi,
>
> What exactly is the purpose of Audit table or tables? Will those be used
> to query the history and display it to the user through the system? Or is
> it only for auditing purposes in which APIM will never directly query the
> data but a separate system or tool will use.
>
> If it is the latter case then why not simply use an auditing log file? We
> define a well structured format and keep appending audit events. It should
> be a flat structure. In production, logs are usually centralized (ELK,
> Splunk etc) so file-system errors won't matter.
>
> Thanks.
>
> On Sat, Oct 22, 2016 at 11:21 AM, Abimaran Kugathasan <abima...@wso2.com>
> wrote:
>
>>
>>
>> On Fri, Oct 21, 2016 at 12:10 PM, Bhathiya Jayasekara <bhath...@wso2.com>
>> wrote:
>>
>>>
>>> On Wed, Oct 12, 2016 at 12:30 PM, Inosh Goonewardena <in...@wso2.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Oct 11, 2016 at 2:40 PM, Uvindra Dias Jayasinha <
>>>> uvin...@wso2.com> wrote:
>>>>
>>>>> Thanks for the feedback, some interesting points were brought up
>>>>>
>>>>> @Abimaran, the problem with maintaining a rigid structure like old/new
>>>>> column is that if a user changes the value of 5 columns at a given time
>>>>> that would mean 5 different inserts to the table, when in actual fact it
>>>>> was a single transaction that took place when the user did the change and
>>>>> saved. So it's better to use an implementation like
>>>>> google-diff-match-patch[1] to record the string diff between the values of
>>>>> the columns before the change took place and after the update. Though we
>>>>> don't need to worry about this implementation detail for now. The idea of
>>>>> using a single table to store the history of all tables that will require
>>>>> auditing sounds good.
>>>>>
>>>>> @Sanjeewa, yes this would improve performance when trying to retreive
>>>>> the LAST_UPDATED_TIME for a given entity.
>>>>>
>>>>> Let me elaborate a bit on Sanjeewa's point. So there can be only one
>>>>> CREATED_BY and CREATED_TIME for a given entity so that can remain as part
>>>>> of the original entities schema. Having the LAST_UPDATED_TIME as part of
>>>>> the original entities schema gives a performance advantage on checking if 
>>>>> a
>>>>> given entity has been modified since it was last checked. This is vital 
>>>>> for
>>>>> features such as ETags support for the REST API. So CREATED_BY,
>>>>> CREATED_TIME, LAST_UPDATED_TIME can remain with the original entities
>>>>> schema.
>>>>>
>>>>> We can still use the master audit table(building on Abimarans idea) to
>>>>> actually keep track of change history of a given entity, so that table
>>>>> could look like this,
>>>>>
>>>>> ENTRY_ID      PK
>>>>> TABLE_NAME    VARCHAR
>>>>> ENTITY_ID     FK
>>>>> DIFF          BLOB
>>>>> ACTION        VARCHAR
>>>>> ACTION_BY     VARCHAR
>>>>> ACTION_TIME   TIMESTAMP
>>>>>
>>>>>
>>>> +1 for having single audit table and recording a diff value. Using a
>>>> structure with old/new column could become unmanageable when the changes
>>>> happen to multiple columns. Also we are planning to do audit table updates
>>>> from our code right? Database level triggers can be used in such cases but
>>>> IMO we should avoid using triggers since it could affect the performance.
>>>>
>>>
>>> Another reason to avoid database triggers is that if we use triggers, we
>>> complemetely depend on trigger support in each database. So we may not be
>>> able to support all databases, and we'll have to spend a lot of time and
>>> effort to support auditing for different databases.
>>>
>>> Another important thing is that when we implement a

Re: [Architecture] Defining specific custom exceptions for API Manager C5

2016-10-21 Thread Uvindra Dias Jayasinha
If you really want to handle all exceptions being thrown in the same way,
you can use a multi-catch block[1] (supported from Java 7) to do this. That
way the component developer doesn't have to worry about defining exception
hierarchies, which is really not the concern of the component developer.
Though I still think that this requirement is an excuse for being lazy and
will lead to problems for others who have to maintain the code. It's always
better to be explicit whenever possible for the sake of clarity.

[1]

try {
    // ...
} catch (IOException | SQLException ex) {
    // handle either exception type the same way; note that the alternatives
    // in a multi-catch must be disjoint types, so something like
    // IOException | FileNotFoundException would not even compile since one
    // is a subclass of the other
}


On 21 October 2016 at 12:37, Susinda Perera <susi...@wso2.com> wrote:

> +1 for having custom exceptions. However, as Malintha mentioned I believe
> it will help us having exception hierarchy. There may be cases where
> catching top level exceptions may be sufficient. If we have a hierarchy,
> programmer can decide which one to use depending on the context.
>
> Thanks
> Susinda
>
> On Fri, Oct 21, 2016 at 12:26 PM, Isuru Perera <isu...@wso2.com> wrote:
>
>> +1 for having separate exception classes. It's also good if you want to
>> define different hierarchies of exceptions, but we need to think carefully
>> and properly group those exceptions.
>>
>>
>>
>> On Fri, Oct 21, 2016 at 12:08 PM, Harsha Kumara <hars...@wso2.com> wrote:
>>
>>>
>>>
>>> On Fri, Oct 21, 2016 at 11:14 AM, Malith Jayasinghe <mali...@wso2.com>
>>> wrote:
>>>
>>>> Hi Harsha,
>>>>
>>>> It makes sense to define specific exceptions. However, I am wondering
>>>> (in most of these cases) whether the caller can do anything specially to
>>>> handle these exception (I guess this will depend on how you define the
>>>> exception (and when it will be thrown)?
>>>>
>>>> For example, if you throw APIMDAOException when you get a SQL exception
>>>> within a DAO method, how would the client code handle it?  Can you give an
>>>> example?
>>>>
>>> Basically if we find a exception in DAO layer, let's say if we going to
>>> search users by name and it returns a SQL exception, we need catch it and
>>> throw defined exception with error message saying it's failed when
>>> searching users. In upper layers it will be useful. Also DAO layer may not
>>> only throw SQL exceptions, there might me different types of exceptions
>>> thrown from it. In service layer it might not need to aware all about it.
>>> We shouldn't expose DAO layer exceptions for end users as well. Because DAO
>>> layer exceptions might contains information regarding queries, schema and
>>> etc. Also with having defined exception we can send response back to the
>>> client accordingly. If we encounter server error we may return 404 while if
>>> we encounter security exception we may send response with other error
>>> accordingly.
>>>
>>> [1] http://tutorials.jenkov.com/java-exception-handling/exce
>>> ption-wrapping.html
>>>
>>>>
>>>> Thanks
>>>>
>>>> Malith
>>>>
>>>>
>>>>
>>>> On Thu, Oct 20, 2016 at 3:31 PM, Harsha Kumara <hars...@wso2.com>
>>>> wrote:
>>>>
>>>>> Definitely we need to have well defined set of exceptions instead
>>>>> using APIManagementException in all places. As Uvindra mentioned, for API
>>>>> Manager database access, we can have APIMDAOExcepton. For security related
>>>>> exceptions we can have APIMSecurityException and so on. In jaggary level
>>>>> also we check the instance type of a thrown exception and return responses
>>>>> accordingly.
>>>>>
>>>>> [1] http://howtodoinjava.com/best-practices/java-exception-h
>>>>> andling-best-practices/
>>>>>
>>>>> On Thu, Oct 20, 2016 at 2:27 PM, Uvindra Dias Jayasinha <
>>>>> uvin...@wso2.com> wrote:
>>>>>
>>>>>> I would like to know your thoughts on $subject.
>>>>>>
>>>>>> Previously we have a single custom exception class in APIM called
>>>>>> 'APIManagementException' and this was used when throwing exceptions
>>>>>> specific to the APIM product. Pros and Cons of doing it this way are,
>>>>>>
>>>>>> *Pros* - Only one APIM specific exception needs to be handled in the
>>>>>> entire APIM code base
>>>>>>
>>>>>> *Cons* - When an exception is thrown it

Re: [Architecture] Defining specific custom exceptions for API Manager C5

2016-10-21 Thread Uvindra Dias Jayasinha
On 21 October 2016 at 10:09, Chamila Adhikarinayake <chami...@wso2.com>
wrote:

>
>
> On Thu, Oct 20, 2016 at 2:51 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Hi Malintha,
>>
>> What do we gain by defining an exception hierarchy? As long as we can
>> differentiate between exceptions that should be enough.
>>
>> Hi
>
> I think one gain we get from Malintha's suggestion is that we can throw
> a single type of exception from a method instead of throwing multiple
> types. This is one of the suggestions provided by Sonar as well [1]. By
> using Malintha's method we can throw the super class. If not, we have to
> catch the exception, wrap it and rethrow it.
>
> [1] https://sonar.spring.io/rules/show/squid:S1160
>

We are not talking about throwing multiple exceptions from the same method.
It's about throwing a specific exception that highlights clearly what has
happened.

>
>
>> On 20 October 2016 at 14:46, Malintha Amarasinghe <malint...@wso2.com>
>> wrote:
>>
>>> Hi Uvindra,
>>>
>>> +1 for defining specific exceptions.
>>> How about when defining a new APIMDAOException, extending it from a
>>> general APIManagementException? When there's another specific DAO
>>> related exception we can extend it from APIMDAOException.
>>> In that way, we can group specific types of exceptions in a hierarchical
>>> manner.
>>> WDYT?
>>>
>>> Thanks!
>>>
>>> On Thu, Oct 20, 2016 at 2:27 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> I would like to know your thoughts on $subject.
>>>>
>>>> Previously we have a single custom exception class in APIM called
>>>> 'APIManagementException' and this was used when throwing exceptions
>>>> specific to the APIM product. Pros and Cons of doing it this way are,
>>>>
>>>> *Pros* - Only one APIM specific exception needs to be handled in the
>>>> entire APIM code base
>>>>
>>>> *Cons* - When an exception is thrown its very difficult to pin point
>>>> the reason for the exception to be thrown. This results in,
>>>>
>>>> 1. Having to solely depend on the error message of the exception to
>>>> determine what has happened(Provided that the error message is clear)
>>>>
>>>> 2. Difficult to handle exception appropriately at code level because we
>>>> are not aware of the specifics of the exception
>>>>
>>>>
>>>> In light of this doesn't it make sense to define specific custom
>>>> exceptions? For example when interacting with the DAO component of APIM if
>>>> any data access errors are encountered we could define an
>>>> 'APIMDAOException' and throw that. This allows users of the DAO component
>>>> to clearly identify and handle that specific exception appropriately.
>>>>
>>>>
>>>> This obviously needs to be applied in a way that makes sense at
>>>> component level(DAO, Gateway, etc.) so its clear which specific component
>>>> has run into an issue.
>>>>
>>>> WDYT?
>>>>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>
>>>
>>>
>>> --
>>> Malintha Amarasinghe
>>> Software Engineer
>>> *WSO2, Inc. - lean | enterprise | middleware*
>>> http://wso2.com/
>>>
>>> Mobile : +94 712383306
>>>
>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Regards,
> Chamila Adhikarinayake
> Software Engineer
> WSO2, Inc.
> Mobile - +94712346437
> Email  - chami...@wso2.com
> Blog  -  http://helpfromadhi.blogspot.com/
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Defining specific custom exceptions for API Manager C5

2016-10-20 Thread Uvindra Dias Jayasinha
Hi Harsha,

Defining error codes for the specific exception at component level is fine.
I only said that its difficult to define generic exceptions that can be
used by all components in the platform generically. This should be defined
by the individual component developers.

On 20 October 2016 at 15:34, Harsha Thirimanna <hars...@wso2.com> wrote:

> Yes, my concern was that even though we identified the exception clearly,
> that exception can also be thrown for different reasons at different
> levels of the same component. So to make a decision at a higher level, we
> may have to identify that clearly, and to do that we can have error codes
> within a component at least. Am I wrong here?
>
> *Harsha Thirimanna*
> Associate Tech Lead | WSO2
>
> Email: hars...@wso2.com
> Mob: +94715186770
> Blog: http://harshathirimanna.blogspot.com/
> Twitter: http://twitter.com/harshathirimann
> Linked-In: linked-in: http://www.linkedin.com/pub/
> harsha-thirimanna/10/ab8/122
> <http://wso2.com/signature>
>
> On Thu, Oct 20, 2016 at 3:31 PM, Harsha Kumara <hars...@wso2.com> wrote:
>
>> Definitely we need to have well defined set of exceptions instead using
>> APIManagementException in all places. As Uvindra mentioned, for API Manager
>> database access, we can have APIMDAOExcepton. For security related
>> exceptions we can have APIMSecurityException and so on. In jaggary level
>> also we check the instance type of a thrown exception and return responses
>> accordingly.
>>
>> [1] http://howtodoinjava.com/best-practices/java-exception-h
>> andling-best-practices/
>>
>> On Thu, Oct 20, 2016 at 2:27 PM, Uvindra Dias Jayasinha <uvin...@wso2.com
>> > wrote:
>>
>>> I would like to know your thoughts on $subject.
>>>
>>> Previously we have a single custom exception class in APIM called
>>> 'APIManagementException' and this was used when throwing exceptions
>>> specific to the APIM product. Pros and Cons of doing it this way are,
>>>
>>> *Pros* - Only one APIM specific exception needs to be handled in the
>>> entire APIM code base
>>>
>>> *Cons* - When an exception is thrown its very difficult to pin point
>>> the reason for the exception to be thrown. This results in,
>>>
>>> 1. Having to solely depend on the error message of the exception to
>>> determine what has happened(Provided that the error message is clear)
>>>
>>> 2. Difficult to handle exception appropriately at code level because we
>>> are not aware of the specifics of the exception
>>>
>>>
>>> In light of this doesn't it make sense to define specific custom
>>> exceptions? For example when interacting with the DAO component of APIM if
>>> any data access errors are encountered we could define an
>>> 'APIMDAOException' and throw that. This allows users of the DAO component
>>> to clearly identify and handle that specific exception appropriately.
>>>
>>>
>>> This obviously needs to be applied in a way that makes sense at
>>> component level(DAO, Gateway, etc.) so its clear which specific component
>>> has run into an issue.
>>>
>>> WDYT?
>>>
>>> --
>>> Regards,
>>> Uvindra
>>>
>>> Mobile: 33962
>>>
>>
>>
>>
>> --
>> Harsha Kumara
>> Software Engineer, WSO2 Inc.
>> Mobile: +94775505618
>> Blog:harshcreationz.blogspot.com
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Defining specific custom exceptions for API Manager C5

2016-10-20 Thread Uvindra Dias Jayasinha
Hi Malintha,

On 20 October 2016 at 15:33, Malintha Amarasinghe <malint...@wso2.com>
wrote:

> Hi Uvindra,
>
> Would we be able to get the benefits of both "Keeping a single 
> APIManagementException"
> and "Having Specific Exceptions" when we have a hierarchy isn't it?
> The Pro you mentioned, "*Only one APIM specific exception needs to be
> handled in the entire APIM code base" *when we have single exception class*.
> *When we catch an exception, there might be situations that, we do not
> need to handle specific DAO exceptions, but handling a general 
> APIMDAOException
> would be enough. If we do not have a hierarchy we might need to have
> multiple catch blocks catching specific exceptions but doing the same
> thing. But if all specific DAO exceptions are extended from APIMDAOException,
> we can have a single catch block with catching APIMDAOException?
>

When I mentioned this point as a pro, the only advantage is less code (it
helps us to be lazy). But it actually is a disadvantage because of the cons I
mentioned. We should have multiple catch blocks if that is the requirement.
But we can minimise this as well by writing more localised code instead of
one huge try block that has everything in it (this makes the code more
readable as well).

We discourage throwing the generic Java 'Exception' for a reason, and
defining and using our very own custom generic exception amounts to the
same thing. Generic exceptions have no meaning by themselves. So if you are
using the DAO component you have to handle the DAO exception; you can't opt
out of doing that (if you do, the whole point is lost).
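
For instance (a sketch only; the gateway call and exception names here are
hypothetical), instead of one huge try block:

try {
    API api = apiDAO.getAPI(id);     // can throw APIMDAOException
    publishToGateway(api);           // can throw APIMGatewayException
} catch (Exception e) {              // swallows everything indiscriminately
    log.error("Something failed", e);
}

we keep the handling local and specific:

API api;
try {
    api = apiDAO.getAPI(id);
} catch (APIMDAOException e) {
    // react to the data access failure specifically
    throw new APIManagementException("Failed to load API " + id, e);
}

try {
    publishToGateway(api);
} catch (APIMGatewayException e) {
    // react to the gateway failure specifically
}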


> Thanks,
> Malintha
>
> On Thu, Oct 20, 2016 at 2:51 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Hi Malintha,
>>
>> What do we gain by defining an exception hierarchy? As long as we can
>> differentiate between exceptions that should be enough.
>>
>>
>> On 20 October 2016 at 14:46, Malintha Amarasinghe <malint...@wso2.com>
>> wrote:
>>
>>> Hi Uvindra,
>>>
>>> +1 for defining specific exceptions.
>>> How about when defining a new APIMDAOException, extending it from a
>>> general APIManagementException? When there's another specific DAO
>>> related exception we can extend it from APIMDAOException.
>>> In that way, we can group specific types of exceptions in a hierarchical
>>> manner.
>>> WDYT?
>>>
>>> Thanks!
>>>
>>> On Thu, Oct 20, 2016 at 2:27 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> I would like to know your thoughts on $subject.
>>>>
>>>> Previously we have a single custom exception class in APIM called
>>>> 'APIManagementException' and this was used when throwing exceptions
>>>> specific to the APIM product. Pros and Cons of doing it this way are,
>>>>
>>>> *Pros* - Only one APIM specific exception needs to be handled in the
>>>> entire APIM code base
>>>>
>>>> *Cons* - When an exception is thrown its very difficult to pin point
>>>> the reason for the exception to be thrown. This results in,
>>>>
>>>> 1. Having to solely depend on the error message of the exception to
>>>> determine what has happened(Provided that the error message is clear)
>>>>
>>>> 2. Difficult to handle exception appropriately at code level because we
>>>> are not aware of the specifics of the exception
>>>>
>>>>
>>>> In light of this doesn't it make sense to define specific custom
>>>> exceptions? For example when interacting with the DAO component of APIM if
>>>> any data access errors are encountered we could define an
>>>> 'APIMDAOException' and throw that. This allows users of the DAO component
>>>> to clearly identify and handle that specific exception appropriately.
>>>>
>>>>
>>>> This obviously needs to be applied in a way that makes sense at
>>>> component level(DAO, Gateway, etc.) so its clear which specific component
>>>> has run into an issue.
>>>>
>>>> WDYT?
>>>>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>
>>>
>>>
>>> --
>>> Malintha Amarasinghe
>>> Software Engineer
>>> *WSO2, Inc. - lean | enterprise | middleware*
>>> http://wso2.com/
>>>
>>> Mobile : +94 712383306
>>>
>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>
>
>
> --
> Malintha Amarasinghe
> Software Engineer
> *WSO2, Inc. - lean | enterprise | middleware*
> http://wso2.com/
>
> Mobile : +94 712383306
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Defining specific custom exceptions for API Manager C5

2016-10-20 Thread Uvindra Dias Jayasinha
Hi Harsha,

I think it's difficult to make this generic across the entire platform; for
one, it would take us forever to come to an agreement :)

The important thing is that the users of a given component can clearly
identify what has gone wrong based on the exception thrown. So I think it's
fine for each component to define its own exceptions independently.

On 20 October 2016 at 15:15, Harsha Thirimanna <hars...@wso2.com> wrote:

> +1 for that approach to have specific exceptions for each component to
> clearly identify the exception and where it come from.
> And when we catch some exception, we may have to decide what we should do
> based on the exception. May be exception type or may be the based on
> content. It would be easy if we can maintain error code in each component
> level and introduce error code in the exception class as well. Then we can
> catch and specifically check that if we want. This would be consistent and
> can be introduce for each component define its own error code or can be
> defined across the platform in more generic way. WDYT ?
>
>
> *Harsha Thirimanna*
> Associate Tech Lead | WSO2
>
> Email: hars...@wso2.com
> Mob: +94715186770
> Blog: http://harshathirimanna.blogspot.com/
> Twitter: http://twitter.com/harshathirimann
> Linked-In: linked-in: http://www.linkedin.com/pub/ha
> rsha-thirimanna/10/ab8/122
> <http://wso2.com/signature>
>
> On Thu, Oct 20, 2016 at 2:51 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> Hi Malintha,
>>
>> What do we gain by defining an exception hierarchy? As long as we can
>> differentiate between exceptions that should be enough.
>>
>>
>> On 20 October 2016 at 14:46, Malintha Amarasinghe <malint...@wso2.com>
>> wrote:
>>
>>> Hi Uvindra,
>>>
>>> +1 for defining specific exceptions.
>>> How about when defining a new APIMDAOException, extending it from a
>>> general APIManagementException? When there's another specific DAO
>>> related exception we can extend it from APIMDAOException.
>>> In that way, we can group specific types of exceptions in a hierarchical
>>> manner.
>>> WDYT?
>>>
>>> Thanks!
>>>
>>> On Thu, Oct 20, 2016 at 2:27 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> I would like to know your thoughts on $subject.
>>>>
>>>> Previously we have a single custom exception class in APIM called
>>>> 'APIManagementException' and this was used when throwing exceptions
>>>> specific to the APIM product. Pros and Cons of doing it this way are,
>>>>
>>>> *Pros* - Only one APIM specific exception needs to be handled in the
>>>> entire APIM code base
>>>>
>>>> *Cons* - When an exception is thrown its very difficult to pin point
>>>> the reason for the exception to be thrown. This results in,
>>>>
>>>> 1. Having to solely depend on the error message of the exception to
>>>> determine what has happened(Provided that the error message is clear)
>>>>
>>>> 2. Difficult to handle exception appropriately at code level because we
>>>> are not aware of the specifics of the exception
>>>>
>>>>
>>>> In light of this doesn't it make sense to define specific custom
>>>> exceptions? For example when interacting with the DAO component of APIM if
>>>> any data access errors are encountered we could define an
>>>> 'APIMDAOException' and throw that. This allows users of the DAO component
>>>> to clearly identify and handle that specific exception appropriately.
>>>>
>>>>
>>>> This obviously needs to be applied in a way that makes sense at
>>>> component level(DAO, Gateway, etc.) so its clear which specific component
>>>> has run into an issue.
>>>>
>>>> WDYT?
>>>>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>
>>>
>>>
>>> --
>>> Malintha Amarasinghe
>>> Software Engineer
>>> *WSO2, Inc. - lean | enterprise | middleware*
>>> http://wso2.com/
>>>
>>> Mobile : +94 712383306
>>>
>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Defining specific custom exceptions for API Manager C5

2016-10-20 Thread Uvindra Dias Jayasinha
Hi Malintha,

What do we gain by defining an exception hierarchy? As long as we can
differentiate between exceptions, that should be enough.


On 20 October 2016 at 14:46, Malintha Amarasinghe <malint...@wso2.com>
wrote:

> Hi Uvindra,
>
> +1 for defining specific exceptions.
> How about when defining a new APIMDAOException, extending it from a
> general APIManagementException? When there's another specific DAO related
> exception we can extend it from APIMDAOException.
> In that way, we can group specific types of exceptions in a hierarchical
> manner.
> WDYT?
>
> Thanks!
>
> On Thu, Oct 20, 2016 at 2:27 PM, Uvindra Dias Jayasinha <uvin...@wso2.com>
> wrote:
>
>> I would like to know your thoughts on $subject.
>>
>> Previously we have a single custom exception class in APIM called
>> 'APIManagementException' and this was used when throwing exceptions
>> specific to the APIM product. Pros and Cons of doing it this way are,
>>
>> *Pros* - Only one APIM specific exception needs to be handled in the
>> entire APIM code base
>>
>> *Cons* - When an exception is thrown its very difficult to pin point the
>> reason for the exception to be thrown. This results in,
>>
>> 1. Having to solely depend on the error message of the exception to
>> determine what has happened(Provided that the error message is clear)
>>
>> 2. Difficult to handle exception appropriately at code level because we
>> are not aware of the specifics of the exception
>>
>>
>> In light of this doesn't it make sense to define specific custom
>> exceptions? For example when interacting with the DAO component of APIM if
>> any data access errors are encountered we could define an
>> 'APIMDAOException' and throw that. This allows users of the DAO component
>> to clearly identify and handle that specific exception appropriately.
>>
>>
>> This obviously needs to be applied in a way that makes sense at component
>> level(DAO, Gateway, etc.) so its clear which specific component has run
>> into an issue.
>>
>> WDYT?
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>
>
>
> --
> Malintha Amarasinghe
> Software Engineer
> *WSO2, Inc. - lean | enterprise | middleware*
> http://wso2.com/
>
> Mobile : +94 712383306
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Defining specific custom exceptions for API Manager C5

2016-10-20 Thread Uvindra Dias Jayasinha
I would like to know your thoughts on $subject.

Previously we had a single custom exception class in APIM called
'APIManagementException' and this was used when throwing exceptions
specific to the APIM product. The pros and cons of doing it this way are:

*Pros* - Only one APIM specific exception needs to be handled in the entire
APIM code base

*Cons* - When an exception is thrown it's very difficult to pinpoint the
reason for the exception being thrown. This results in:

1. Having to depend solely on the error message of the exception to
determine what has happened (provided that the error message is clear)

2. Difficulty handling the exception appropriately at code level, because we
are not aware of the specifics of the exception


In light of this, doesn't it make sense to define specific custom
exceptions? For example, when interacting with the DAO component of APIM, if
any data access errors are encountered we could define an
'APIMDAOException' and throw that. This allows users of the DAO component
to clearly identify and handle that specific exception appropriately.
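
A rough sketch of that idea (assuming the APIMDAOException name used above;
the wrapped query fragment is purely illustrative):

public class APIMDAOException extends Exception {
    public APIMDAOException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Inside the DAO component: wrap the low-level SQLException so callers see a
// DAO-specific failure instead of a leaked database detail
try {
    // ... run the query for apiId ...
} catch (SQLException e) {
    throw new APIMDAOException("Error while retrieving API " + apiId, e);
}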


This obviously needs to be applied in a way that makes sense at component
level (DAO, Gateway, etc.) so it's clear which specific component has run
into an issue.

WDYT?

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Storing configs in database for C5

2016-10-14 Thread Uvindra Dias Jayasinha
I think we should not be in the mindset that all configs are changed by
system admins. That kind of thinking will not help us in our cloud-first
approach. Some configs govern how a particular feature behaves and it's
the end user who will need to set those. These are exactly the kind of
configs we need to make more accessible, because it makes our products more
user friendly.

I've seen distributed systems that stored their configs in DBs and
functioned without any issue. But I understand that this may not be the
best fit for our products.

I'm fine with configs being at file level as long as the UX does not suffer.
We should strive to improve usability more than ever before.

On 14 October 2016 at 12:34, Sagara Gunathunga <sag...@wso2.com> wrote:

>
>
> On Fri, Oct 14, 2016 at 10:55 AM, Uvindra Dias Jayasinha <uvin...@wso2.com
> > wrote:
>
>> Let me summarize this thread so far.
>>
>> The functionality provided by a DB makes it easy to implement reading,
>> updating and sharing configs, but there have been some legitimate concerns
>> that have been raised in this thread regarding that.
>>
>
> First, we need to separate out service configs from end user specific
> configs.
>
> For end user specific configs such as Publisher/Store we should evaluate
> them case by case through a user story driven approach, in such cases if
> storing config values in a DB and populating  through a UI is the right
> design/solution then we should go for that but it should not be a rule.
>
> Generally storing server configs in a DB is an anti-pattern for me.
>
> - Storing server configs in a DB involves a great amount of development work
> compared to maintaining a simple configuration file: designing and
> implementing UIs, writing JDBC/ORM code, maintaining the DB schema for each
> vendor, writing UI tests etc. When you change or introduce a new config
> you have to modify all of the above layers again.
>
> - People who suppose to change these server configs are system admins,
> they usually love config files over UIs.
>
> - Compare to number of people who will use these UIs are very limited so
> it's OK to let them to change config files while we can focus on Store or
> Publisher where user experience is more critical.
>
> - As stated this is not Container friendly
>
> - Previously we had very bad experience storing config in Registry DB
>
> - This seems like re-inventing another Registry; if we really want to go
> down this path, we should use an existing key-value store instead of writing
> one from scratch
>
>
> Sharing configs among several nodes is a valid use case for a DB, but it's
> kind of a lazy workaround to bring in a DB instead of properly engineering
> the problem of sharing configs among nodes :)
>
> Thanks !
>
>
>
>
>>
>> As has already been outlined deployment based configs should be part of
>> the container and so should be file based. We also have limitations with
>> certain components such as the Gateway which needs to be deployed in the
>> DMZ and so cannot connect with a DB in that case.
>>
>> At the end of the day a file is a persistent storage just like a DB,
>> albeit much less sophisticated and localized. So maybe we can achieve some
>> of the advantages I outlined in my initial mail using file configs. The
>> underlying storage really does not matter(just an implementation detail) as
>> long as we can achieve the benefits that we need.
>>
>>
>> On 14 October 2016 at 00:11, Lahiru Sandaruwan <lahi...@wso2.com> wrote:
>>
>>> Hi All,
>>>
>>> If we try to categorize the configurations we have in C4, using the way
>>> we manage,
>>>
>>>1. *Runtime configurations*
>>>   - Any config you login to an UI and do
>>>   - E.g. API creation/ subscription in APIM, Mediation flow in
>>>   Integration Server, Service providers/ secondary user stores in IS, 
>>> etc.)
>>>   - We are using file system, direct databases, and registry as per
>>>   the requirements for this type
>>>   2. *One time configurations*
>>>   - Any one time config you configure using the files
>>>   - E.g. Configs we have in carbon.xml, axis2.xml, user-mgt.xml
>>>   - We only use file system for this type
>>>
>>>
>>> With C5 tenant approach, tenants will get to configure a considerable
>>> set of configs out of #2, but not all. IMO we should use a database for
>>> that set only, keeping the other one time configurations in the file system
>>> from #2.
>>>
>>> Even hot deployable files are an option for suit

Re: [Architecture] Storing configs in database for C5

2016-10-13 Thread Uvindra Dias Jayasinha
>>>>>>>
>>>>>>> 2. Gateways don't have access to the DB. So say you're enabling
>>>>>>> analytics (data publishing). You have to propagate that change to the
>>>>>>> Gateway nodes using some mechanism. And with no clustering on C5, this 
>>>>>>> is a
>>>>>>> challenge.
>>>>>>>
>>>>>>> If the objective of this is to make the Cloud (tenant) experience
>>>>>>> better, I think we should just restart the tenant's containers with the
>>>>>>> relevant configs in place.
>>>>>>>
>>>>>>
>>>>>> Still we have a problem with regard to how we are going to allow the
>>>>>> tenants to do the configuration changes. Currently we do it through the
>>>>>> registry which will not work for C5.
>>>>>>
>>>>>
>>>>> Yes, so my idea is to provide a UI to do the configs. Those configs we
>>>>> can store anywhere (maybe in a table) just for the sake of rendering that
>>>>> UI. The product code will still read from the config files. When you apply
>>>>> those configs through the press of a button, the container should get
>>>>> rebuilt and restarted with the necessary modification to the config files.
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Lakmali
>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>> NuwanD.
>>>>>>>
>>>>>>> On Wed, Oct 12, 2016 at 2:12 PM, Lakmali Baminiwatta <
>>>>>>> lakm...@wso2.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On 12 October 2016 at 13:47, Uvindra Dias Jayasinha <
>>>>>>>> uvin...@wso2.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Sajith,
>>>>>>>>>
>>>>>>>>> Yes even though the boot up time is not an issue in C5 the other
>>>>>>>>> advantages I have outlined are still there to be gained. There is a 
>>>>>>>>> huge
>>>>>>>>> effort we have to do on dev ops side to maintain those images you are
>>>>>>>>> talking about because of having everything at file level.
>>>>>>>>>
>>>>>>>>> Some examples from API Manager I can think of are turning
>>>>>>>>> notifications on/off, enable monetization, enable/disable stats, 
>>>>>>>>> configure
>>>>>>>>> work flows, Enable/Disable JWT token header.
>>>>>>>>>
>>>>>>>>
>>>>>>>> +1 to move feature related configurations to the database and make
>>>>>>>> them configurable through the UI.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Lakmali
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 12 October 2016 at 12:58, Sajith Kariyawasam <saj...@wso2.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Uvindra,
>>>>>>>>>>
>>>>>>>>>> With cloud deployment in mind, the idea is to boot up the nodes
>>>>>>>>>> in quick time, therefore the docker images are pre-configured with 
>>>>>>>>>> all the
>>>>>>>>>> configuration values, which will speed up the node start up. A 
>>>>>>>>>> change of
>>>>>>>>>> configuration means a new docker image will be created with the new
>>>>>>>>>> configs, and re-spawn the cluster.
>>>>>>>>>>
>>>>>>>>>> Therefore, IMO a node restart for a config change is not
>>>>>>>>>> relevant, also no need of a periodic config checks.
>>>>>>>>>>
>>>>>>>>>> Btw, can you give me some example configuration you were thinking
>>>>>>>>>> of?
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>

Re: [Architecture] Storing configs in database for C5

2016-10-12 Thread Uvindra Dias Jayasinha
Hi Sajith,

Yes, even though the boot-up time is not an issue in C5, the other advantages
I have outlined are still there to be gained. There is a huge effort required
on the dev ops side to maintain those images you are talking about, because
everything is kept at file level.

Some examples from API Manager I can think of are turning notifications
on/off, enabling monetization, enabling/disabling stats, configuring
workflows, and enabling/disabling the JWT token header.



On 12 October 2016 at 12:58, Sajith Kariyawasam <saj...@wso2.com> wrote:

> Hi Uvindra,
>
> With cloud deployment in mind, the idea is to boot up the nodes in quick
> time, therefore the docker images are pre-configured with all the
> configuration values, which will speed up the node start up. A change of
> configuration means a new docker image will be created with the new
> configs, and re-spawn the cluster.
>
> Therefore, IMO a node restart for a config change is not relevant, also no
> need of a periodic config checks.
>
> Btw, can you give me some example configuration you were thinking of?
>
> Thanks,
> Sajith
>
> On Wed, Oct 12, 2016 at 11:53 AM, Uvindra Dias Jayasinha <uvin...@wso2.com
> > wrote:
>
>> Was wondering about $subject
>>
>> Traditionally we have stored our product configs, be it carbon.xml,
>> api-manager.xml, identity.xml, etc. at file level. Some configs, such as
>> "port offset" are inherently bound to the server startup so it makes sense
>> for them to be at file level, since they come into affect during the
>> startup. But certain runtime configs actually get engaged only when a given
>> feature is used. But having those configs at file level require a restart
>> for the changes to take affect. In C4 API Manager avoided doing restarts
>> for certain config changes, like adding mediation extensions, by storing
>> them in the registry.
>>
>> For C5 a reusable implementation can exist at each node which
>> periodically reads the table(say once a minute) and updates the config
>> values in memory. Products communicate with this config library to get the
>> values of a given config. So eventually they will read the updated value in
>> a short time. If we were to store at least certain configs at DB level
>> there are several advantages.
>>
>> 1. Eliminate need for a restart for changes to take affect. I realize in
>> C5 a restart is relatively cheap so this might not be a big deal, but you
>> still need someone to initiate the restart after the config change.
>>
>> 2. Since the config DB table has a known structure a UI can be easily
>> developed to do CRUD operations for config changes and used by all
>> products. This is a lot more user friendly than asking users to change
>> files.
>>
>> 3. We can provide a REST API to allow config changes to be done on the DB
>> table alternatively.
>>
>> 4. Simplify dev ops by eliminating complicated puppet config templates
>> that need to constantly maintained with new releases.
>>
>> 5. Since configs are in a central DB its easy to manage them since all
>> nodes will read from the same table.
>>
>> 6. Configs can be backed up by simply backing up the table
>>
>>
>> Doing this makes sense for certain use cases of API Manger, I'm sure
>> there maybe similar benefits for other products as well. It may not make
>> sense for all configs but at least for some that govern feature
>> functionality its great to have. WDYT?
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Sajith Kariyawasam
> *Associate Tech Lead*
> *WSO2 Inc.; http://wso2.com <http://wso2.com/>*
> *Committer and PMC member, Apache Stratos *
> *AMIE (SL)*
> *Mobile: 0772269575*
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Storing configs in database for C5

2016-10-12 Thread Uvindra Dias Jayasinha
Was wondering about $subject

Traditionally we have stored our product configs, be it carbon.xml,
api-manager.xml, identity.xml, etc., at file level. Some configs, such as
"port offset", are inherently bound to the server startup so it makes sense
for them to be at file level, since they come into effect during the
startup. But certain runtime configs actually get engaged only when a given
feature is used, and having those configs at file level requires a restart
for the changes to take effect. In C4 API Manager avoided doing restarts
for certain config changes, like adding mediation extensions, by storing
them in the registry.

For C5 a reusable implementation can exist at each node which periodically
reads the table (say once a minute) and updates the config values in memory.
Products communicate with this config library to get the value of a given
config, so they will eventually read the updated value within a short time
(a rough sketch of such a reader follows the list below). If we were to store
at least certain configs at DB level there are several advantages.

1. Eliminate the need for a restart for changes to take effect. I realize in
C5 a restart is relatively cheap so this might not be a big deal, but you
still need someone to initiate the restart after the config change.

2. Since the config DB table has a known structure a UI can be easily
developed to do CRUD operations for config changes and used by all
products. This is a lot more user friendly than asking users to change
files.

3. We can provide a REST API to allow config changes to be done on the DB
table alternatively.

4. Simplify dev ops by eliminating complicated puppet config templates that
need to be constantly maintained with new releases.

5. Since configs are in a central DB its easy to manage them since all
nodes will read from the same table.

6. Configs can be backed up by simply backing up the table
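
A rough sketch of the per-node reader mentioned above (the class, table and
column names are only placeholders, not an actual Carbon API):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConfigCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final String jdbcUrl;

    public ConfigCache(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
        // Re-read the config table once a minute, as suggested above
        scheduler.scheduleAtFixedRate(this::refresh, 0, 1, TimeUnit.MINUTES);
    }

    private void refresh() {
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT CONFIG_KEY, CONFIG_VALUE FROM AM_CONFIG")) {
            while (rs.next()) {
                cache.put(rs.getString(1), rs.getString(2));
            }
        } catch (SQLException e) {
            // Keep serving the last known values if the DB is unreachable
        }
    }

    public String get(String key) {
        return cache.get(key);
    }
}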


Doing this makes sense for certain use cases of API Manager, and I'm sure
there may be similar benefits for other products as well. It may not make
sense for all configs, but at least for some that govern feature
functionality it's great to have. WDYT?

-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Review DB table audit fields for API Manager C5

2016-10-11 Thread Uvindra Dias Jayasinha
On 12 October 2016 at 10:54, Lahiru Cooray  wrote:

>
>
> On Tue, Oct 11, 2016 at 1:44 PM, Sanjeewa Malalgoda 
> wrote:
>
>> I think we can manage an audit table while still having CREATED_BY,
>> CREATED_TIME, UPDATED_BY, UPDATED_TIME in the same tables. With that
>> approach we may never need to do a table scan of the audit table while
>> fetching updates. Each update will be recorded in the separate table while
>> the original table holds all relevant information. WDYT?
>>
>
> +1
> Also we may need the highlighted separate audit table to keep track of
> deleted rows.
>

Hi Lahiru, yes, I highlighted the need for auditing entity deletions in my
initial mail.

>
>> Thanks,
>> sanjeewa.
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Lahiru Cooray*
> Software Engineer
> WSO2, Inc.;http://wso2.com/
> lean.enterprise.middleware
>
> Mobile: +94 715 654154
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Review DB table audit fields for API Manager C5

2016-10-11 Thread Uvindra Dias Jayasinha
Thanks for the feedback

On 12 October 2016 at 09:33, Abimaran Kugathasan <abima...@wso2.com> wrote:

>
>
> On Tue, Oct 11, 2016 at 10:34 PM, Lakmali Baminiwatta <lakm...@wso2.com>
> wrote:
>
>>
>>
>> On 11 October 2016 at 14:40, Uvindra Dias Jayasinha <uvin...@wso2.com>
>> wrote:
>>
>>> Thanks for the feedback, some interesting points were brought up
>>>
>>> @Abimaran, the problem with maintaining a rigid structure like an old/new
>>> column is that if a user changes the values of 5 columns at a given time,
>>> that would mean 5 different inserts to the table, when in actual fact it
>>> was a single transaction that took place when the user made the change and
>>> saved. So it's better to use an implementation like
>>> google-diff-match-patch[1] to record the string diff between the values of
>>> the columns before the change took place and after the update. Though we
>>> don't need to worry about this implementation detail for now. The idea of
>>> using a single table to store the history of all tables that will require
>>> auditing sounds good.
>>>
>>
>> IMO we have to think a bit more on whether we really need to store the diff
>> corresponding to the changes, since it may result in a considerable growth
>> of the storage, mainly during the development phase. Also the implementation
>> could be complicated in terms of which columns should be considered, since
>> the changes may take place in more than a single table for an artifact
>> update.
>>
>
We don't need to worry too much about how we are going to implement
auditing at the moment, since that is not a core requirement for the initial
DB design. We could provide a config for users to turn on auditing when the
system goes into production, to avoid the growth during the development
stages (ideally an auditing feature is a requirement for a production
system). The actual implementation may be complicated or simple depending
on what we want to do. But any auditing implementation is better than no
auditing at all.

>
> Yes. Also, one major problem with this approach is getting the diff between
> two DB updates and later using the diff to check the history.
>
> And if we are going to have a separate audit table for each major table,
> how are we going to update that audit table when there is an update? If we
> copy the entire row when there is an update to a column, we later don't
> know which column the update was for. If we copy only the old value of the
> column which was updated, that will increase the audit table's number of
> entries.
>
> Having one audit table with the table name, the column name and timestamps,
> like below, will solve these problems.
>
> ENTRY_ID     PK
> TABLE_NAME   VARCHAR
> FIELD_NAME   VARCHAR
> OLD_VALUE    VARCHAR
> NEW_VALUE    VARCHAR
> ACTION_BY    VARCHAR
> ACTION_TIME  VARCHAR
>

What is important is that we have the provision to implement this when we
actually want to, which is our initial goal. So there will be no actual
diff auditing to begin with. We will just record who did the update on a
given entity along with the timestamp, like we do now. But to answer your
questions,

1. When we update an entity we replace the existing row with the updated
values.

2. Most of the values wouldn't have changed, but since we have a single
update statement for APIs (we don't have separate update statements for
updating individual columns in the API table) we don't actually know which
columns have changed.

3. So anyway we need to do a diff between the existing row and the new data
sent in the update statement to figure out which columns have actually
changed.

We won't be having a separate audit table per entity, just one. We don't
need to copy the entire row over either. If we get the diff we can store
only that, which should be smaller. We are not trying to replace DB
backups :)
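
A minimal sketch of that column comparison, assuming the existing row and
the incoming update are both available as column-name-to-value maps:

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

public class RowDiff {

    // Given the existing row and the values sent in the single update
    // statement, work out which columns actually changed
    public static Set<String> changedColumns(Map<String, String> existing,
                                             Map<String, String> updated) {
        Set<String> changed = new HashSet<>();
        for (Map.Entry<String, String> e : updated.entrySet()) {
            if (!Objects.equals(e.getValue(), existing.get(e.getKey()))) {
                changed.add(e.getKey());
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        Map<String, String> existing = new HashMap<>();
        existing.put("NAME", "Weather API");
        existing.put("TIER", "Gold");

        Map<String, String> updated = new HashMap<>();
        updated.put("NAME", "Weather API");
        updated.put("TIER", "Silver");

        System.out.println(changedColumns(existing, updated)); // [TIER]
    }
}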

>
>
>> Thanks,
>> Lakmali
>>
>>>
>>> @Sanjeewa, yes this would improve performance when trying to retrieve
>>> the LAST_UPDATED_TIME for a given entity.
>>>
>>> Let me elaborate a bit on Sanjeewa's point. There can be only one
>>> CREATED_BY and CREATED_TIME for a given entity, so those can remain as part
>>> of the original entity's schema. Having the LAST_UPDATED_TIME as part of
>>> the original entity's schema gives a performance advantage on checking if a
>>> given entity has been modified since it was last checked. This is vital for
>>> features such as ETags support for the REST API. So CREATED_BY,
>>> CREATED_TIME, LAST_UPDATED_TIME can remain with the original entity's
>>> schema.
>>>
>>> We can still use 

Re: [Architecture] Review DB table audit fields for API Manager C5

2016-10-11 Thread Uvindra Dias Jayasinha
Thanks for the feedback, some interesting points were brought up

@Abimaran, the problem with maintaining a rigid structure like an old/new
column is that if a user changes the values of 5 columns at a given time,
that would mean 5 different inserts to the table, when in actual fact it
was a single transaction that took place when the user made the change and
saved. So it's better to use an implementation like
google-diff-match-patch[1] to record the string diff between the values of
the columns before the change took place and after the update. Though we
don't need to worry about this implementation detail for now. The idea of
using a single table to store the history of all tables that will require
auditing sounds good.
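
As a rough illustration of the approach (not a settled design), the Java
port of google-diff-match-patch [1] can turn the before/after state of a row
into one compact patch; the pipe-delimited serialization of the column
values below is just an assumption for the example:

import java.util.LinkedList;
import name.fraser.neil.plaintext.diff_match_patch;

public class ColumnDiffExample {

    public static void main(String[] args) {
        diff_match_patch dmp = new diff_match_patch();

        // Concatenated column values before and after the user saved
        String before = "Weather API|A public weather API|Gold";
        String after = "Weather API|A public weather API, updated hourly|Silver";

        // One compact patch records the whole transaction, instead of one
        // audit row per changed column
        LinkedList<diff_match_patch.Patch> patches = dmp.patch_make(before, after);
        String storedDiff = dmp.patch_toText(patches);
        System.out.println(storedDiff);

        // The new value can be reconstructed later by applying the stored
        // patch to the old one, so the history can be replayed if needed
        Object[] result = dmp.patch_apply(patches, before);
        System.out.println(result[0]); // equals the "after" value
    }
}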

@Sanjeewa, yes this would improve performance when trying to retrieve the
LAST_UPDATED_TIME for a given entity.

Let me elaborate a bit on Sanjeewa's point. There can be only one
CREATED_BY and CREATED_TIME for a given entity, so those can remain as part
of the original entity's schema. Having the LAST_UPDATED_TIME as part of
the original entity's schema gives a performance advantage on checking if a
given entity has been modified since it was last checked. This is vital for
features such as ETags support for the REST API. So CREATED_BY,
CREATED_TIME, LAST_UPDATED_TIME can remain with the original entity's
schema.
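
For example, an ETag check can be answered off that one column without
touching the entity body or any audit data. This JAX-RS sketch is
illustrative only; the resource path and the two loader methods are
assumptions:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.EntityTag;
import javax.ws.rs.core.Request;
import javax.ws.rs.core.Response;

@Path("/apis")
public class ApiResource {

    @GET
    @Path("/{id}")
    public Response getApi(@PathParam("id") String id, @Context Request request) {
        // A single indexed read of LAST_UPDATED_TIME from the entity's own row
        long lastUpdated = loadLastUpdatedTime(id);
        EntityTag etag = new EntityTag(Long.toString(lastUpdated));

        // Non-null means the client's If-None-Match still matches, so a 304
        // can be returned without loading the full entity
        Response.ResponseBuilder notModified = request.evaluatePreconditions(etag);
        if (notModified != null) {
            return notModified.build();
        }
        return Response.ok(loadApi(id)).tag(etag).build();
    }

    private long loadLastUpdatedTime(String id) {
        // Assumed lookup against the AM_API row
        return 0L;
    }

    private String loadApi(String id) {
        // Assumed full entity load
        return "";
    }
}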

We can still use the master audit table (building on Abimaran's idea) to
actually keep track of the change history of a given entity, so that table
could look like this,

ENTRY_ID     PK
TABLE_NAME   VARCHAR
ENTITY_ID    FK
DIFF         BLOB
ACTION       VARCHAR
ACTION_BY    VARCHAR
ACTION_TIME  TIMESTAMP



[1] https://code.google.com/p/google-diff-match-patch/
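
A hedged JDBC sketch of recording an entry in this table follows. The
AM_AUDIT table name is an assumption (only the columns are proposed above),
and the DIFF value is bound as text for simplicity even though the column
would be a BLOB:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class AuditRecorder {

    private static final String INSERT_AUDIT =
            "INSERT INTO AM_AUDIT (TABLE_NAME, ENTITY_ID, DIFF, ACTION, "
            + "ACTION_BY, ACTION_TIME) VALUES (?, ?, ?, ?, ?, ?)";

    public void record(Connection con, String tableName, int entityId,
                       String diff, String action, String actionBy)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(INSERT_AUDIT)) {
            ps.setString(1, tableName);
            ps.setInt(2, entityId);
            ps.setString(3, diff);   // null until diff auditing is turned on
            ps.setString(4, action); // e.g. CREATE, UPDATE, DELETE
            ps.setString(5, actionBy);
            ps.setTimestamp(6, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
        }
    }
}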

On 11 October 2016 at 13:44, Sanjeewa Malalgoda  wrote:

> I think we can manage an audit table while still having CREATED_BY,
> CREATED_TIME, UPDATED_BY, UPDATED_TIME in the same tables. With that
> approach we may never need to do a table scan of the audit table while
> fetching updates. Each update will be recorded in the separate table while
> the original table holds all relevant information. WDYT?
>
> Thanks,
> sanjeewa.
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Review DB table audit fields for API Manager C5

2016-10-11 Thread Uvindra Dias Jayasinha
*Context*
We have started to look into API Manager's DB design for C5 and want to
evaluate what was done in the past and see if there is room for improvement.

This is specifically to talk about the below audit columns,

CREATED_BY    VARCHAR
CREATED_TIME  TIMESTAMP
UPDATED_BY    VARCHAR
UPDATED_TIME  TIMESTAMP


that are currently present in many of the tables such as AM_API,
AM_APPLICATION, AM_SUBSCRIBER, AM_SUBSCRIPTION, etc.

*Cons of current approach*

1. Since the columns are part of the entity's definition, we can only store
a single value for UPDATED_BY and UPDATED_TIME for a given entity.
Currently we store only the latest values, so we cannot actually keep track
of history.

2. Can't keep track of when an entity is deleted (maybe we need this, maybe
we don't).

3. In order to truly be able to audit, we need to keep track of the changes
that were done to a given entity along with who changed it and when, but
this is not possible at the moment with our current design.

*Pros of current approach*

1. Easy to manage data since audit columns are part of the entity itself (no
need for a separate table to store history and no need to deal with growing
historical audit data).


*Possible alternative*

So things are much simpler now and easier to manage, but does it really
achieve what it sets out to do? To keep track of auditing information we
ideally need to keep track of every single action that was performed on a
given entity (n number of actions for a given entity). So this requires a
separate table definition. I'm *NOT* proposing that we define a separate
audit table for every single entity in the DB, just for a few important
ones such as AM_API. So for example for APIs we could have an audit table
with columns like below,

ENTRY_ID     INTEGER PRIMARY KEY
API_ID       INTEGER
ACTION       VARCHAR
ACTION_BY    VARCHAR
ACTION_TIME  TIMESTAMP

*Cons of new approach*

1. Data could grow for a given entity and may need to be managed separately

*Pros of new approach*

1. The above design does not cater for keeping track of the actual change
that took place. I left that out on purpose because we don't support this
currently either, but having a separate table means we have the provision to
add a diff column and implement this feature in the future.

2. We can keep track of all the changes that the entity went through, who
did it and when (including deletions, if we want to).

I would like to hear your thoughts on this. Is this worth exploring or
should we just do things the way we have always done?


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] API Manager - API Product feature

2016-10-05 Thread Uvindra Dias Jayasinha
Currently for APIs we have the permissions *creator* and *publisher*.

Regarding the permissions for API Products, it makes sense to allow those
with API publish permissions to be able to create and publish products,
since the API Publisher is usually the business user. But after chatting
with NuwanD, we thought it was better to have a separate permission for the
product, to give users a more fine-grained permission model.

On 5 October 2016 at 15:15, Uvindra Dias Jayasinha <uvin...@wso2.com> wrote:

> *Brief*
>
> The API Product feature being developed for API Manager allows API
> Publishers to bundle a set of APIs and expose them as a product, which will
> have its own set of policies that will apply to all APIs that are bundled
> within the product. API Consumers will have the option of subscribing to
> the exposed product, which will mean that they are subscribed to all APIs
> within the same product.
>
> *Rationale*
>
> This will enable considering the API as a technical artifact which will
> not specify any usage policies, and the API Product as a business
> artifact which will define which policies will be applied. This allows
> for,
>
>1. Grouping of related APIs together by API Publishers
>    2. Packaging the same set of APIs paired with different policies in
>    different products
>3. API consumers can easily discover and subscribe to all related APIs
>at once
>
>
>  *Conditions*
>
>    1. This feature will be implemented without relying on the Governance
>    Registry. Alternative means will be found for the specific features that
>    currently rely on the registry
>    2. The current way that APIs are created and published will remain
>    unchanged, so this feature will ensure backward compatibility.
>
>
> --
> Regards,
> Uvindra
>
> Mobile: 33962
>



-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] API Manager - API Product feature

2016-10-05 Thread Uvindra Dias Jayasinha
*Brief*

The API Product feature being developed for API Manager allows API
Publishers to bundle a set of APIs and expose them as a product, which will
have its own set of policies that will apply to all APIs that are bundled
within the product. API Consumers will have the option of subscribing to
the exposed product, which will mean that they are subscribed to all APIs
within the same product.

*Rationale*

This will enable considering the API as a technical artifact which will not
specify any usage policies, and the API Product as a business artifact
which will define which policies will be applied. This allows
for,

   1. Grouping of related APIs together by API Publishers
   2. Packaging the same set of APIs paired with different policies in
   different products
   3. API consumers can easily discover and subscribe to all related APIs
   at once


 *Conditions*

   1. This feature will be implemented without relying on the Governance
   Registry. Alternative means will be found for the specific features that
   currently rely on the registry
   2. The current way that APIs are created and published will remain
   unchanged, so this feature will ensure backward compatibility.


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM REST API DCR support for removing registered clients

2016-09-15 Thread Uvindra Dias Jayasinha
Our own apps will not run into this problem, but those who deploy APIM as a
PaaS or cloud offering may be faced with this problem from 3rd parties.
Apps would also want to do this because it provides additional security:
the associated tokens that were being used are revoked when the
registration is deleted.


For those who want to enforce a policy that requires dynamic clients to be
temporary, without depending on the clients themselves to remove their own
registrations, they will need to remove the corresponding SP from their end.
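
For clarity, the proposed removal could look something like the call below.
No such operation exists today; the endpoint path, version and credentials
are all hypothetical, shown only to illustrate the shape of the feature:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class DcrCleanupExample {

    public static void main(String[] args) throws IOException {
        // Hypothetical DELETE on the DCR endpoint for a previously
        // registered client; today only registration is supported
        String clientId = "given_client_id";
        URL url = new URL(
                "https://localhost:9443/client-registration/v0.10/register/"
                + clientId);

        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("DELETE");
        String credentials = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes("UTF-8"));
        con.setRequestProperty("Authorization", "Basic " + credentials);

        // A 200/204 here would mean the registration entry is gone and its
        // associated tokens have been revoked
        System.out.println("Response code: " + con.getResponseCode());
    }
}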

On 15 September 2016 at 12:36, Nuwan Dias <nuw...@wso2.com> wrote:

>
>
> On Thu, Sep 15, 2016 at 12:26 PM, Uvindra Dias Jayasinha <uvin...@wso2.com
> > wrote:
>
>> In the Cloud it could be API Creators, Publisher or Subscribers,
>> basically anyone who can call the REST API
>>
>
> So it's the Store/Publisher webapps, right? In that case it just creates 1
> client per app (2 in total) to be used by all who're logging into it. Can
> we really remove those clients? I think not. Because once you create them,
> they have to be there for future use.
>
> I'm +1 to implement this as a feature, but I don't recall an instance where
> our own apps will end up creating 100s of clients through DCR. So at least
> for our own apps, they won't need to use this feature. For third-party apps
> which may create many clients, unless those apps specifically call the
> removal operation, we'll still end up with lots of clients in the DB.
>
>>
>> On 15 September 2016 at 12:21, Nuwan Dias <nuw...@wso2.com> wrote:
>>
>>>
>>>
>>> On Thu, Sep 15, 2016 at 12:14 PM, Uvindra Dias Jayasinha <
>>> uvin...@wso2.com> wrote:
>>>
>>>> While using APIM's REST API Dynamic Client Registration (DCR) I noticed
>>>> that after registering a client there is no way for the client to remove
>>>> the registration entry. You need to provide a unique clientName in order
>>>> to register with the DCR [1].
>>>>
>>>> I think it might be good to provide this ability via the REST API.
>>>> Usually an app might create an entry for itself upon registration and
>>>> simply reuse that same entry. But there may be scenarios where the
>>>> registration should be temporary, only while using the API. At that point,
>>>> if we don't provide a way of removing the registration it will continue to
>>>> remain in the database. If we think of the cloud scenario, we could end up
>>>> with a large number of registration entries.
>>>>
>>>
>>> In the cloud scenario, who is the consumer of the DCR endpoint?
>>>
>>>>
>>>> Also, currently if you want to change any of the information that you
>>>> send in the DCR request (callBackURL, tokenScope, grantType, etc.) you
>>>> have no way of doing it if you have already registered. Providing an
>>>> option of removing the existing entry will solve this issue.
>>>>
>>>> WDYT?
>>>>
>>>>
>>>> [1] https://docs.wso2.com/display/AM200/apidocs/publisher/index.html#guide,
>>>> https://docs.wso2.com/display/AM200/apidocs/store/index.html#guide
>>>>
>>>> --
>>>> Regards,
>>>> Uvindra
>>>>
>>>> Mobile: 33962
>>>>
>>>
>>>
>>>
>>> --
>>> Nuwan Dias
>>>
>>> Software Architect - WSO2, Inc. http://wso2.com
>>> email : nuw...@wso2.com
>>> Phone : +94 777 775 729
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Regards,
>> Uvindra
>>
>> Mobile: 33962
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Regards,
Uvindra

Mobile: 33962
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture

