Re: [Architecture] [RRT] XACML based scope validator (during OAuth2 token validation)

2018-01-23 Thread Johann Nallathamby
On Tue, Jan 23, 2018 at 9:49 AM, Senthalan Kanagalingam 
wrote:

> Hi all,
>
> I have completed the scope validation implementation. But in this
> implementation, the entitlement engine has to run for every token
> validation request even when there is no policy defined by the user for a
> particular service provider. The PDP has to go through all existing policies
> to select the applicable ones. It's an overhead in token validation
> time.
>
> To avoid this we can introduce an "Enable XACML based scope validator"
> checkbox under the Local & Outbound Authentication Configuration.
>

This should be under the OAuth2 section because it's OAuth2-specific. We
can't have "scope" under "Local & Outbound Authentication Configuration".

Regards,
Johann.


> Then users can enable or disable scope validation for that particular
> service provider. Checking whether the user has enabled scope validation
> is then a simple select query, and we can use caching before continuing.
>
> Any suggestions or improvements are highly appreciated.
>
> Thanks and Regards,
> Senthalan
>
> On Fri, Jan 19, 2018 at 6:42 PM, Senthalan Kanagalingam <
> sentha...@wso2.com> wrote:
>
>> Hi,
>>
>> Here is the architecture of the XACML based scope validator.
>>
>>
>> After checking whether the access token has expired, the scope of the
>> token will be validated using JDBCScopeValidator and XACMLScopeValidator.
>> The JDBCScopeValidator was already implemented. The XACMLScopeValidator
>> will create an XACML request from access token and validate using
>> EntitlementService.
>>
>>
>> Thanks and Regards,
>> Senthalan
>>
>> On Tue, Jan 16, 2018 at 8:59 PM, Dimuthu Leelarathne 
>> wrote:
>>
>>> Hi Johann,
>>>
>>> On Tue, Jan 16, 2018 at 8:49 PM, Johann Nallathamby 
>>> wrote:
>>>
 Hi Senthalan,

 On Tue, Jan 16, 2018 at 12:05 PM, Senthalan Kanagalingam <
 sentha...@wso2.com> wrote:

> Hi Johann,
>
> Thanks for the feedback. Currently, I am checking that feature.
>
> According to my understanding, this feature will be useful to validate
> the token scopes against resource scopes. As this validation is done by
> JDBCScopeValidator and my implementation will be parallel to it (IS allows
> multiple scope validators), do I have to implement validation of the token
> scopes against the resource scopes as well?
>

 -1 to having two implementations. There should be only one implementation,
 based on XACML. Otherwise it will create configuration overhead and won't
 work well with the tenant model.

>>>
 The current scope-role based validation we introduced in IS 5.4.0 will
 need to be implemented using XACML and be the default policy. The other
 policies you were planning could be additional template policies we ship
 with the product. In addition users can have any new policies they want
 (per tenant).


>
> Because I have checked with identity-application-authz-xacml[1
> ]
> and planned to implement validating scopes against the role-based and
> time-based policies only.
>

 Yes, you can use this code and implement a XACML PEP to send a XACML
 request. But the validation has to happen on the XACML PDP side.

 What is the difference between the role-based policy you are talking
 about and the role-based scope validation we implemented in IS 5.4.0?

>>>
>>> XACML based scope validation would give fine-grained control and
>>> flexibility. I don't have experience with the JDBC scope validator, but from
>>> what I know, it is hard to get a generic implementation out of it.
>>>
>>> The added advantage is flexibility. You can write your custom XACML
>>> policies and control how authorization happens.
>>>
>>> Be it XACML or JavaScript, we need detailed control to cater to
>>> different requirements.
>>>
>>> thanks,
>>> Dimuthu
>>>
>>>
 Time based policies can be one of the additional policy templates we
 ship.

 Regards,
 Johann.


>
> [1] - https://github.com/wso2-extensions/identity-application-au
> thz-xacml
>
> Regards,
> Senthalan
>
> On Mon, Jan 15, 2018 at 8:13 PM, Johann Nallathamby 
> wrote:
>
>> *[-IAM, RRT]*
>>
>> On Mon, Jan 15, 2018 at 8:13 PM, Johann Nallathamby 
>> wrote:
>>
>>> Hi Senthalan,
>>>
>>> Did you check [1]? In this feature *@Isuranga* implemented a XACML
>>> policy to evaluate the permission tree. For this he had to come up with
>>> a policy that defined a custom function.
>>>
>>> In the above feature, if you replace permission with OAuth2 scopes
>>> (which are also a representation of permissions in the OAuth2 world, and
>>> can be assigned to roles from IS 5.4.0 onwards IINM) you will get what you 

Re: [Architecture] [RRT] XACML based scope validator (during OAuth2 token validation)

2018-01-23 Thread Senthalan Kanagalingam
Hi all,

As discussed with Ruwan about the checkbox implementation, the extension
itself has to take care of keeping its configuration and UI implementation
without changing the core implementation. So rather than adding a column to
an already existing table, the extension has to create a table and maintain
its configuration there. For integrating UI elements we need a UI advisory,
which can take advice from extensions and render the UI.
[image: Inline image 1]
Here we have to finalize:

   1. How to create the table when the extension is added for the first
   time?
   2. UI advisory in IS core.
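A minimal sketch of the extension-owned table and the cached per-SP lookup described above, using SQLite for illustration. The table and column names here are hypothetical, not the actual IS schema.

```python
import sqlite3

def ensure_table(conn):
    # Create the extension-owned table on first use.
    # Table/column names are hypothetical, not the IS schema.
    conn.execute("""
        CREATE TABLE IF NOT EXISTS IDN_XACML_SCOPE_VALIDATOR_CONFIG (
            APP_ID     INTEGER PRIMARY KEY,
            IS_ENABLED INTEGER NOT NULL DEFAULT 0
        )""")

_cache = {}

def is_validation_enabled(conn, app_id):
    # Read-through cache: one SELECT per service provider,
    # then served from memory on every token validation.
    if app_id not in _cache:
        row = conn.execute(
            "SELECT IS_ENABLED FROM IDN_XACML_SCOPE_VALIDATOR_CONFIG"
            " WHERE APP_ID = ?", (app_id,)).fetchone()
        _cache[app_id] = bool(row and row[0])
    return _cache[app_id]
```

With this shape, the token-validation path only consults the entitlement engine when the flag for the calling SP is set.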

Looking forward to any suggestions or comments.

Thanks,
Senthalan

On Tue, Jan 23, 2018 at 10:40 AM, Dimuthu Leelarathne 
wrote:

>
>
> On Tue, Jan 23, 2018 at 9:49 AM, Senthalan Kanagalingam <
> sentha...@wso2.com> wrote:
>
>> Hi all,
>>
>> I have completed the scope validation implementation. But in this
>> implementation, the entitlement engine has to run for every token
>> validation request even when there is no policy defined by the user for a
>> particular service provider. The PDP has to go through all existing policies
>> to select the applicable ones. It's an overhead in token validation
>> time.
>>
>> To avoid this we can introduce an "Enable XACML based scope validator"
>> checkbox under the Local & Outbound Authentication Configuration. Then users
>> can enable or disable scope validation for that particular service
>> provider. Checking whether the user has enabled scope validation is then a
>> simple select query, and we can use caching before continuing.
>>
>
> +1 due to the following reasons:
>
> 1. Performance improvement at the SP.
> 2. After the DB read is cached, it can serve millions of auth requests
> until the cache expires.
> 3. This is how the existing XACML policy authorization happens during
> token issuance time.
>
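Reason 2 above relies on a time-bounded cache in front of the DB read; a minimal sketch of that pattern follows. The class and names are illustrative, not WSO2's actual caching API.

```python
import time

class TtlCache:
    """Minimal TTL cache sketch: one DB read, then in-memory hits until expiry."""

    def __init__(self, ttl_seconds, loader):
        self.ttl = ttl_seconds
        self.loader = loader          # called only on a miss or after expiry
        self.store = {}               # key -> (value, expiry timestamp)

    def get(self, key):
        hit = self.store.get(key)
        if hit and hit[1] > time.monotonic():
            return hit[0]             # served from cache, no DB round trip
        value = self.loader(key)      # e.g. the SELECT against the config table
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value
```

Between expiries, every auth request for the same service provider is answered from memory; only the first request (or the first after expiry) touches the database.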
> thanks,
> Dimuthu
>
>
>> Any suggestions or improvements are highly appreciated.
>>
>> Thanks and Regards,
>> Senthalan
>>
>> On Fri, Jan 19, 2018 at 6:42 PM, Senthalan Kanagalingam <
>> sentha...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> Here is the architecture of the XACML based scope validator.
>>>
>>>
>>> After checking whether the access token has expired, the scope of the
>>> token will be validated using JDBCScopeValidator and XACMLScopeValidator.
>>> The JDBCScopeValidator was already implemented. The XACMLScopeValidator
>>> will create an XACML request from access token and validate using
>>> EntitlementService.
>>>
>>>
>>> Thanks and Regards,
>>> Senthalan
>>>
>>> On Tue, Jan 16, 2018 at 8:59 PM, Dimuthu Leelarathne 
>>> wrote:
>>>
 Hi Johann,

 On Tue, Jan 16, 2018 at 8:49 PM, Johann Nallathamby 
 wrote:

> Hi Senthalan,
>
> On Tue, Jan 16, 2018 at 12:05 PM, Senthalan Kanagalingam <
> sentha...@wso2.com> wrote:
>
>> Hi Johann,
>>
>> Thanks for the feedback. Currently, I am checking that feature.
>>
>> According to my understanding, this feature will be useful to
>> validate the token scopes against resource scopes. As this validation is
>> done by JDBCScopeValidator and my implementation will be parallel to it 
>> (IS
>> allows multiple scope validators), do I have to implement validation of 
>> the
>> token scopes against the resource scopes as well?
>>
>
> -1 to having two implementations. There should be only one implementation,
> based on XACML. Otherwise it will create configuration overhead and won't
> work well with the tenant model.
>

> The current scope-role based validation we introduced in IS 5.4.0 will
> need to be implemented using XACML and be the default policy. The other
> policies you were planning could be additional template policies we ship
> with the product. In addition users can have any new policies they want
> (per tenant).
>
>
>>
>> Because I have checked with identity-application-authz-xacml[1
>> ]
>> and planned to implement validating scopes against the role-based and
>> time-based policies only.
>>
>
> Yes, you can use this code and implement a XACML PEP to send a XACML
> request. But the validation has to happen on the XACML PDP side.
>
> What is the difference between the role-based policy you are talking
> about and the role-based scope validation we implemented in IS 5.4.0?
>

 XACML based scope validation would give fine-grained control and
 flexibility. I don't have experience with the JDBC scope validator, but from
 what I know, it is hard to get a generic implementation out of it.

 The added advantage is flexibility. You can write your custom XACML
 policies and control how authorization happens.

 Be it XACML or JavaScript, we need detailed control to cater to
 different requirements.

 thanks,
 Dimuthu

Re: [Architecture] [RRT] CMIS Connector

2018-01-23 Thread Nirthika Rajendran
Hi All,

I was able to test the CMIS Connector with an Alfresco server.

When I tried to test with *Sharepoint*, I was unable to access the CMIS API
of Sharepoint. So for the initial version we plan to release the CMIS
Connector tested only with *Alfresco*.

Can I proceed with this?

Regards,
*Nirthika Rajendran*
*Associate Software Engineer*
WSO2 Inc : http://wso2.org
Mobile   : +94 77 719 8368
LinkedIn: https://www.linkedin.com/in/nirthika/
Blog  :
*http://nirthika-tech-stuff.blogspot.com/
*

On Fri, Jan 12, 2018 at 3:17 PM, Nirthika Rajendran 
wrote:

> Hi All,
>
> *CMIS* (Content Management Interoperability Services) is the OASIS
> specification for content management interoperability. It allows clients and
> servers to talk to each other over HTTP (REST with JSON, AtomPub or SOAP)
> using a unified domain model.
>
> We have planned to implement a CMIS connector for EI by referring to the
> *OASIS CMIS API* documentation [1], version 1.1.
>
> For testing purposes, I am planning to use *Alfresco* [2] which offers an
> open source CMIS product.
>
> We are now planning to cover the following methods [3] in the initial version.
>
> [1] http://docs.oasis-open.org/cmis/CMIS/v1.1/errata01/os/CMIS-
> v1.1-errata01-os-complete.html
> [2] https://www.alfresco.com/cmis
> [3]
> *Category*     *Operations*
> Repository     getRepositories
> Object         getAllowableActions, getProperties
>
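For reference, the CMIS 1.1 browser binding serves getRepositories as a JSON document keyed by repository id. A small sketch of consuming such a response follows; the sample payload is abridged and illustrative, and the connector itself would obtain it over HTTP from the server's service URL.

```python
import json

# Abridged sample of the JSON a CMIS 1.1 browser-binding service URL
# returns for getRepositories (see the OASIS spec for the full shape).
SAMPLE = """{
  "-default-": {
    "repositoryId": "-default-",
    "repositoryName": "Main Repository",
    "cmisVersionSupported": "1.1"
  }
}"""

def list_repositories(payload):
    """Extract (id, name) pairs from a getRepositories response."""
    doc = json.loads(payload)
    return [(info["repositoryId"], info["repositoryName"])
            for info in doc.values()]
```

The other planned operations (getAllowableActions, getProperties) are served from per-object URLs in the same binding and return JSON of a similar, spec-defined shape.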
>
> Please let us know if you have any concerns.
>
> Regards,
> *Nirthika Rajendran*
> *Associate Software Engineer*
> WSO2 Inc : http://wso2.org
> Mobile   : +94 77 719 8368 <+94%2077%20719%208368>
> LinkedIn: https://www.linkedin.com/in/nirthika/
> Blog  :
> *http://nirthika-tech-stuff.blogspot.com/
> *
>
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [RRT] Improving caching based on cache-control and ETag headers

2018-01-23 Thread Keerthika Mahendralingam
>
>
> What will happen in the following case?
>
>-  Cache Expiry < Max-age and the cache entry is evicted?
>
> I believe in that case we have to fetch it from BE?
>
Yes, if the cache expiry time is less than the Max-age, the cached response
will be invalidated at the expiry time. So we need to get the
response from BE.
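The decision discussed above (serve from cache, revalidate with the ETag, or go back to the BE) can be sketched as a small decision function. This is an illustration of the logic only, not the cache mediator's actual code.

```python
def cache_action(entry):
    """Decide what the mediator does for an incoming request (sketch).

    `entry` is None when the cached response was evicted or never stored;
    otherwise it's a dict with 'fresh' (still within both the cache expiry
    and the Max-age limit) and 'etag' (the backend's ETag, if any).
    """
    if entry is None:
        return "fetch-from-backend"      # evicted before Max-age: go to the BE
    if entry["fresh"]:
        return "serve-from-cache"        # still within both limits
    if entry.get("etag"):
        return "revalidate-with-etag"    # conditional GET with If-None-Match
    return "fetch-from-backend"          # stale and no validator: refetch
```

A "revalidate-with-etag" outcome means sending the request with an If-None-Match header; a 304 Not Modified from the BE lets the mediator serve the stored body without transferring it again.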

>
> thanks,
> Dimuthu
>
>
> On Wed, Jan 24, 2018 at 8:02 AM, Riyafa Abdul Hameed 
> wrote:
>
>> Hi,
>>
>> It was required to support native JSON in the cache mediator and hence we
>> had to use the JsonStreamBuilder. At the time of releasing it was mentioned
>> that APIM still uses JsonBuilder and I created an issue[1] to address this
>> if required.
>>
>> [1] https://github.com/wso2/product-ei/issues/916
>>
>> Thanks,
>> Riyafa
>>
>> On Wed, Jan 24, 2018 at 3:40 AM, Dushan Abeyruwan 
>> wrote:
>>
>>> Hi Keerthika,
>>>   Yes, this is a long-pending initiative that is required under the
>>> cache mediator. Anyway, I believe this may be more meaningful if you draw a
>>> flow diagram + sequence diagram so the audience on this list is able to fully
>>> understand the picture and the interaction of the middleman (i.e. the
>>> integration layer); that may also be helpful when writing documentation.
>>>
>> Will send those ASAP Dushan.

Thanks,
Keerthika.

>
>>> Cheers,
>>> Dushan
>>>
>>> On Fri, Jan 12, 2018 at 1:37 AM, Keerthika Mahendralingam <
>>> keerth...@wso2.com> wrote:
>>>
 +1. Thanks Riyafa for the suggestion.


 Thanks,
 Keerthika.

 On Fri, Jan 12, 2018 at 3:05 PM, Riyafa Abdul Hameed 
 wrote:

> Hi Keerthika,
>
> We should have an option for disregarding the cache-control headers
> and the default value should be that the cache-control headers are
> disregarded. This is because the current cache mediator is written so that
> it is fully backward compatible with the older versions of the cache
> mediator. Anyone using the cache mediator in a synapse configuration in an
> older version can use the same synapse configuration in the new version and
> can expect the same behavior. If he/she wants to make use of the new
> features he/she may do so by editing the synapse configurations.
>
> Thanks,
> Riyafa
>
>
> On Fri, Jan 12, 2018 at 12:24 PM, Keerthika Mahendralingam <
> keerth...@wso2.com> wrote:
>
>> Thanks Isuru. Will check the existing functionality.
>>
>> @Vijitha,
>> +1 for providing the configuration option for omitting the
>> cache-control headers.
>>
>> @Sanjeewa
>> Will check with the latest cache mediator.
>>
>> Thanks,
>> Keerthika.
>>
>> On Fri, Jan 12, 2018 at 12:16 PM, Vijitha Ekanayake <
>> vijit...@wso2.com> wrote:
>>
>>> Hi Sanjeewa,
>>>
>>>
>>> On Fri, Jan 12, 2018 at 12:01 PM, Sanjeewa Malalgoda <
>>> sanje...@wso2.com> wrote:
>>>
 So I think we can add the latest cache mediator dependency to the API
 Manager 2.2.0 branch and test this feature.
 If there are any gaps in documents or implementation we will be
 able to fix them and officially support this feature from 2.2.0 onward.
 WDYT?

>>>
>>> +1 for this approach.
>>>

 @Vijitha, the cache mediator can be engaged per API. So if someone is
 not interested in caching, they can simply remove the cache mediator for
 that particular mediation flow.

>>>
>>> I intended to state just an option of disregarding the HTTP caching,
>>> not the response caching. Wouldn't it be valuable to have a design
>>> alternative to disregard the HTTP caching but not the default response
>>> caching?
>>>
>>> Thanks.
>>>

 Thanks,
 Sanjeewa.

 On Fri, Jan 12, 2018 at 11:07 AM, Isuru Udana 
 wrote:

> Hi Keerthika,
>
> ETag caching support is already implemented at the http transport
> level.
> This feature was introduced long time ago but still the
> documentation is not added to the wiki.
> Please refer to following jiras for more information.
>
>
> https://wso2.org/jira/browse/ESBJAVA-3504
>
> https://wso2.org/jira/browse/DOCUMENTATION-1435
>
>
> Thanks.
>
>
> On Fri, Jan 12, 2018 at 10:51 AM, Keerthika Mahendralingam <
> keerth...@wso2.com> wrote:
>
>> Hi Shazni,
>>
>> Please find the answers inline.
>>
>>>
>>> 1. Does the user specify whether the ETag header should be present
>>> in the response or not? Or is it always available if the cache
>>> mediator is used?
>>>
>> If the backend returns the response with an ETag header, the cache

Re: [Architecture] [IS] SAML SSO Agent for .NET

2018-01-23 Thread Chiran Wijesekara
Hi Dushan and Ruwan,

I am currently working on preparing comprehensive documentation to support
a quick setup.

Unlike *Componentspace*, this project is done with a focus on improving
usability as well. Further, .NET developers should be able to plug this
into their applications with minimal effort. Thus, SAML authentication
should be available *just by adding the .dll and doing the configuration in
web.config*.

Thank you for your constructive feedback.

On Wed, Jan 24, 2018 at 7:48 AM, Ruwan Abeykoon  wrote:

> Hi Dushan,
> Thanks for sharing "Componentspace". It seems to be a complete and
> comprehensive solution.
>
> The purpose of this "agent" (we need to rename it, as it is not an
> agent but a library) is to be included in a VS solution. We have no plan
> to install this library in IIS.
>
> +1 on comprehensive documentation.
> I think we need to include,
> 1. The architecture of the library and the rest of the app, + WSO2 IS.
> 2. What a developer has to do on VS( step by step)
> 3. How to change the values in production.
>
> Cheers,
> Ruwan
>
> On Wed, Jan 24, 2018 at 4:02 AM, Dushan Abeyruwan  wrote:
>
>> Hi Chiran,
>>  Interesting work. Please do come up with documentation for the
>> implementation you have done (i.e. a working sample illustration with images,
>> and a README.txt for the git project). I need to visualize the complete agent
>> integration stepwise. I have looked at the repo. I believe once the agent is
>> installed in the .NET web application we need to add the agent.dll and
>> then complete the following [1].
>>   I used to work with [2] for some demos; however, I just need to
>> understand the differences between Componentspace [2] and the agent
>> feature that we are offering.
>>
>> [1]
>>
>> 
>> 
>> 
>> http://localhost:49763/
>> sample/callback"/>
>> 
>> 
>> 
>> 
>> 
>>
>> [2] https://www.componentspace.com/
>>
>> Cheers,
>> Dushan
>>
>> On Sun, Jan 21, 2018 at 10:22 PM, Chiran Wijesekara 
>> wrote:
>>
>>> Architecture diagram is attached below. It's not showing up in the
>>> original Email due to an issue with the format.
>>>
>>>
>>> On Mon, Jan 22, 2018 at 10:56 AM, Chiran Wijesekara 
>>> wrote:
>>>

 *Introduction:*

 Suppose someone has an ASP.NET web application, or else he/she is going
 to create a new one. One of the major concerns would be providing a
 secure mechanism for handling user authentication and authorization.

 With the introduction of this SAML agent, you can easily incorporate
 it into your ASP.NET web application and it will take care of
 everything related to the SAML authentication mechanism.

 *Solution Architecture:*


 *Note: steps 2, 7, 8 and 3 of the above diagram denote the resolution of
 the current request of interest.*

 The above diagram depicts the architecture of the .NET SAML agent. The
 agent is designed in such a way that all requests coming to
 the ASP.NET web application are directed to the
 *FilteringHttpModule*. This *FilteringHttpModule* is a class that
 implements the *IHttpModule* interface (i.e. a custom HTTP handler),
 and it is responsible for handling the SAML
 authentication related requests. It calls the relevant method of the
 *SAMLManager* class to process each request.

 *How to incorporate Agent into a given ASP.NET  web
 application:*

 This agent is developed in a way such that it has the minimum possible
 dependencies on the ASP.NET web application. Hence, when someone wants
 to incorporate SAML authentication into his/her ASP.NET web app, that can
 be done with minimal effort.

 Following is the list of items to configure SAML Agent for a given
 ASP.NET web application.

 The process of incorporating *SAML authentication with the WSO2 Identity
 Server* via the SAML agent can be explained in a few steps as follows.

1.

    *Add* - the agent.dll reference to your ASP.NET web application (you
    can get this via the NuGet package manager or else from the git repo)
2.

    *Configure* - the mandatory properties in your ASP.NET web
    application’s web.config file. Furthermore, you have to get the .jks from
    the WSO2 Identity Server you are using and convert it to a *PKCS12*
    using keytool (or else use your own PKCS12). Add the .pfx / .p12 to the
    Local Machine certificate store.
3.

    *Register* - the “FilteringHttpModule” in your ASP.NET web
    application to handle the requests related to the SAML authentication
    mechanism.
4.

    *Set* - your application’s login controls to refer to SAML-intensive
    segments. That is, suppose you have a login link in your web 
 
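The module registration in step 3 above follows the standard ASP.NET web.config pattern for the IIS integrated pipeline; a sketch, where the assembly-qualified type name is hypothetical:

```xml
<configuration>
  <system.webServer>
    <modules>
      <!-- Route every request through the agent's SAML filter.
           The type name below is hypothetical; use the actual
           namespace and assembly of the agent.dll. -->
      <add name="FilteringHttpModule"
           type="Wso2.Saml.Agent.FilteringHttpModule, Agent" />
    </modules>
  </system.webServer>
</configuration>
```

With this in place, IIS invokes the module for each request, letting it intercept SAML responses on the configured callback path and pass everything else through.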

Re: [Architecture] [RRT] Improving caching based on cache-control and ETag headers

2018-01-23 Thread Riyafa Abdul Hameed
Hi,

Sorry, ignore my previous mail. I meant to send it as a reply to another
mail and mistakenly sent it here.

Thanks,
Riyafa

On Wed, Jan 24, 2018 at 9:05 AM, Dimuthu Leelarathne 
wrote:

> Hi Keerthika,
>
> What will happen in the following case?
>
>-  Cache Expiry < Max-age and the cache entry is evicted?
>
> I believe in that case we have to fetch it from BE?
>
> thanks,
> Dimuthu
>
>
> On Wed, Jan 24, 2018 at 8:02 AM, Riyafa Abdul Hameed 
> wrote:
>
>> Hi,
>>
>> It was required to support native JSON in the cache mediator and hence we
>> had to use the JsonStreamBuilder. At the time of releasing it was mentioned
>> that APIM still uses JsonBuilder and I created an issue[1] to address this
>> if required.
>>
>> [1] https://github.com/wso2/product-ei/issues/916
>>
>> Thanks,
>> Riyafa
>>
>> On Wed, Jan 24, 2018 at 3:40 AM, Dushan Abeyruwan 
>> wrote:
>>
>>> Hi Keerthika,
>>>   Yes, this is a long-pending initiative that is required under the
>>> cache mediator. Anyway, I believe this may be more meaningful if you draw a
>>> flow diagram + sequence diagram so the audience on this list is able to fully
>>> understand the picture and the interaction of the middleman (i.e. the
>>> integration layer); that may also be helpful when writing documentation.
>>>
>>> Cheers,
>>> Dushan
>>>
>>> On Fri, Jan 12, 2018 at 1:37 AM, Keerthika Mahendralingam <
>>> keerth...@wso2.com> wrote:
>>>
 +1. Thanks Riyafa for the suggestion.


 Thanks,
 Keerthika.

 On Fri, Jan 12, 2018 at 3:05 PM, Riyafa Abdul Hameed 
 wrote:

> Hi Keerthika,
>
> We should have an option for disregarding the cache-control headers
> and the default value should be that the cache-control headers are
> disregarded. This is because the current cache mediator is written so that
> it is fully backward compatible with the older versions of the cache
> mediator. Anyone using the cache mediator in a synapse configuration in an
> older version can use the same synapse configuration in the new version and
> can expect the same behavior. If he/she wants to make use of the new
> features he/she may do so by editing the synapse configurations.
>
> Thanks,
> Riyafa
>
>
> On Fri, Jan 12, 2018 at 12:24 PM, Keerthika Mahendralingam <
> keerth...@wso2.com> wrote:
>
>> Thanks Isuru. Will check the existing functionality.
>>
>> @Vijitha,
>> +1 for providing the configuration option for omitting the
>> cache-control headers.
>>
>> @Sanjeewa
>> Will check with the latest cache mediator.
>>
>> Thanks,
>> Keerthika.
>>
>> On Fri, Jan 12, 2018 at 12:16 PM, Vijitha Ekanayake <
>> vijit...@wso2.com> wrote:
>>
>>> Hi Sanjeewa,
>>>
>>>
>>> On Fri, Jan 12, 2018 at 12:01 PM, Sanjeewa Malalgoda <
>>> sanje...@wso2.com> wrote:
>>>
 So I think we can add the latest cache mediator dependency to the API
 Manager 2.2.0 branch and test this feature.
 If there are any gaps in documents or implementation we will be
 able to fix them and officially support this feature from 2.2.0 onward.
 WDYT?

>>>
>>> +1 for this approach.
>>>

 @Vijitha, the cache mediator can be engaged per API. So if someone is
 not interested in caching, they can simply remove the cache mediator for
 that particular mediation flow.

>>>
>>> I intended to state just an option of disregarding the HTTP caching,
>>> not the response caching. Wouldn't it be valuable to have a design
>>> alternative to disregard the HTTP caching but not the default response
>>> caching?
>>>
>>> Thanks.
>>>

 Thanks,
 Sanjeewa.

 On Fri, Jan 12, 2018 at 11:07 AM, Isuru Udana 
 wrote:

> Hi Keerthika,
>
> ETag caching support is already implemented at the http transport
> level.
> This feature was introduced long time ago but still the
> documentation is not added to the wiki.
> Please refer to following jiras for more information.
>
>
> https://wso2.org/jira/browse/ESBJAVA-3504
>
> https://wso2.org/jira/browse/DOCUMENTATION-1435
>
>
> Thanks.
>
>
> On Fri, Jan 12, 2018 at 10:51 AM, Keerthika Mahendralingam <
> keerth...@wso2.com> wrote:
>
>> Hi Shazni,
>>
>> Please find the answers inline.
>>
>>>
>>> 1. Does the user specify whether the ETag header should be present
>>> in the response or not? Or is it always available if the cache
>>> mediator is used?
>>>
>> If the backend returns the response with an ETag header, the cache

Re: [Architecture] [RRT] Improving caching based on cache-control and ETag headers

2018-01-23 Thread Dimuthu Leelarathne
Hi Keerthika,

What will happen in the following case?

   -  Cache Expiry < Max-age and the cache entry is evicted?

I believe in that case we have to fetch it from BE?

thanks,
Dimuthu


On Wed, Jan 24, 2018 at 8:02 AM, Riyafa Abdul Hameed 
wrote:

> Hi,
>
> It was required to support native JSON in the cache mediator and hence we
> had to use the JsonStreamBuilder. At the time of releasing it was mentioned
> that APIM still uses JsonBuilder and I created an issue[1] to address this
> if required.
>
> [1] https://github.com/wso2/product-ei/issues/916
>
> Thanks,
> Riyafa
>
> On Wed, Jan 24, 2018 at 3:40 AM, Dushan Abeyruwan  wrote:
>
>> Hi Keerthika,
>>   Yes, this is a long-pending initiative that is required under the cache
>> mediator. Anyway, I believe this may be more meaningful if you draw a flow
>> diagram + sequence diagram so the audience on this list is able to fully
>> understand the picture and the interaction of the middleman (i.e. the
>> integration layer); that may also be helpful when writing documentation.
>>
>> Cheers,
>> Dushan
>>
>> On Fri, Jan 12, 2018 at 1:37 AM, Keerthika Mahendralingam <
>> keerth...@wso2.com> wrote:
>>
>>> +1. Thanks Riyafa for the suggestion.
>>>
>>>
>>> Thanks,
>>> Keerthika.
>>>
>>> On Fri, Jan 12, 2018 at 3:05 PM, Riyafa Abdul Hameed 
>>> wrote:
>>>
 Hi Keerthika,

 We should have an option for disregarding the cache-control headers and
 the default value should be that the cache-control headers are disregarded.
 This is because the current cache mediator is written so that it is fully
 backward compatible with the older versions of the cache mediator. Anyone
 using the cache mediator in a synapse configuration in an older version can
 use the same synapse configuration in the new version and can expect the same
 behavior. If he/she wants to make use of the new features he/she may do so
 by editing the synapse configurations.

 Thanks,
 Riyafa


 On Fri, Jan 12, 2018 at 12:24 PM, Keerthika Mahendralingam <
 keerth...@wso2.com> wrote:

> Thanks Isuru. Will check the existing functionality.
>
> @Vijitha,
> +1 for providing the configuration option for omitting the
> cache-control headers.
>
> @Sanjeewa
> Will check with the latest cache mediator.
>
> Thanks,
> Keerthika.
>
> On Fri, Jan 12, 2018 at 12:16 PM, Vijitha Ekanayake  > wrote:
>
>> Hi Sanjeewa,
>>
>>
>> On Fri, Jan 12, 2018 at 12:01 PM, Sanjeewa Malalgoda <
>> sanje...@wso2.com> wrote:
>>
>>> So I think we can add the latest cache mediator dependency to the API
>>> Manager 2.2.0 branch and test this feature.
>>> If there are any gaps in documents or implementation we will be able
>>> to fix them and officially support this feature from 2.2.0 onward.
>>> WDYT?
>>>
>>
>> +1 for this approach.
>>
>>>
>>> @Vijitha, the cache mediator can be engaged per API. So if someone is
>>> not interested in caching, they can simply remove the cache mediator for
>>> that particular mediation flow.
>>>
>>
>> I intended to state just an option of disregarding the HTTP caching,
>> not the response caching. Wouldn't it be valuable to have a design
>> alternative to disregard the HTTP caching but not the default response
>> caching?
>>
>> Thanks.
>>
>>>
>>> Thanks,
>>> Sanjeewa.
>>>
>>> On Fri, Jan 12, 2018 at 11:07 AM, Isuru Udana 
>>> wrote:
>>>
 Hi Keerthika,

 ETag caching support is already implemented at the http transport
 level.
 This feature was introduced long time ago but still the
 documentation is not added to the wiki.
 Please refer to following jiras for more information.


 https://wso2.org/jira/browse/ESBJAVA-3504

 https://wso2.org/jira/browse/DOCUMENTATION-1435


 Thanks.


 On Fri, Jan 12, 2018 at 10:51 AM, Keerthika Mahendralingam <
 keerth...@wso2.com> wrote:

> Hi Shazni,
>
> Please find the answers inline.
>
>>
>> 1. Does the user specify whether the ETag header should be present
>> in the response or not? Or is it always available if the cache
>> mediator is used?
>>
> If the backend returns the response with an ETag header, the cache
> mediator always needs to validate the response before sending the
> cached response to the user.
>
>>
>>>- If it is available and ETag is present in the cached
>>>response, make a request with "If-None-Match" header with the 
>>> ETag value.
>>>
>>>
>>>- If the server returns "304 Not 

Re: [Architecture] [RRT] Improving caching based on cache-control and ETag headers

2018-01-23 Thread Riyafa Abdul Hameed
Hi,

It was required to support native JSON in the cache mediator and hence we
had to use the JsonStreamBuilder. At the time of releasing it was mentioned
that APIM still uses JsonBuilder and I created an issue[1] to address this
if required.

[1] https://github.com/wso2/product-ei/issues/916

Thanks,
Riyafa

On Wed, Jan 24, 2018 at 3:40 AM, Dushan Abeyruwan  wrote:

> Hi Keerthika,
>   Yes, this is a long-pending initiative that is required under the cache
> mediator. Anyway, I believe this may be more meaningful if you draw a flow
> diagram + sequence diagram so the audience on this list is able to fully
> understand the picture and the interaction of the middleman (i.e. the
> integration layer); that may also be helpful when writing documentation.
>
> Cheers,
> Dushan
>
> On Fri, Jan 12, 2018 at 1:37 AM, Keerthika Mahendralingam <
> keerth...@wso2.com> wrote:
>
>> +1. Thanks Riyafa for the suggestion.
>>
>>
>> Thanks,
>> Keerthika.
>>
>> On Fri, Jan 12, 2018 at 3:05 PM, Riyafa Abdul Hameed 
>> wrote:
>>
>>> Hi Keerthika,
>>>
>>> We should have an option for disregarding the cache-control headers and
>>> the default value should be that the cache-control headers are disregarded.
>>> This is because the current cache mediator is written so that it is fully
>>> backward compatible with the older versions of the cache mediator. Anyone
>>> using the cache mediator in a synapse configuration in an older version can
>>> use the same synapse configuration in the new version and can expect the
>>> same behavior. If he/she wants to make use of the new features he/she may
>>> do so by editing the synapse configurations.
>>>
>>> Thanks,
>>> Riyafa
>>>
>>>
>>> On Fri, Jan 12, 2018 at 12:24 PM, Keerthika Mahendralingam <
>>> keerth...@wso2.com> wrote:
>>>
 Thanks Isuru. Will check the existing functionality.

 @Vijitha,
 +1 for providing the configuration option for omitting the
 cache-control headers.

 @Sanjeewa
 Will check with the latest cache mediator.

 Thanks,
 Keerthika.

 On Fri, Jan 12, 2018 at 12:16 PM, Vijitha Ekanayake 
 wrote:

> Hi Sanjeewa,
>
>
> On Fri, Jan 12, 2018 at 12:01 PM, Sanjeewa Malalgoda <
> sanje...@wso2.com> wrote:
>
>> So I think we can add the latest cache mediator dependency to the API
>> Manager 2.2.0 branch and test this feature.
>> If there are any gaps in documents or implementation we will be able
>> to fix them and officially support this feature from 2.2.0 onward.
>> WDYT?
>>
>
> +1 for this approach.
>
>>
>> @Vijitha, the cache mediator can be engaged per API. So if someone is
>> not interested in caching, they can simply remove the cache mediator for that
>> particular mediation flow.
>>
>
> I intended to state just an option of disregarding the HTTP caching,
> not the response caching. Wouldn't it be valuable to have a design
> alternative to disregard the HTTP caching but not the default response
> caching?
>
> Thanks.
>
>>
>> Thanks,
>> Sanjeewa.
>>
>> On Fri, Jan 12, 2018 at 11:07 AM, Isuru Udana 
>> wrote:
>>
>>> Hi Keerthika,
>>>
>>> ETag caching support is already implemented at the http transport
>>> level.
>>> This feature was introduced long time ago but still the
>>> documentation is not added to the wiki.
>>> Please refer to following jiras for more information.
>>>
>>>
>>> https://wso2.org/jira/browse/ESBJAVA-3504
>>>
>>> https://wso2.org/jira/browse/DOCUMENTATION-1435
>>>
>>>
>>> Thanks.
>>>
>>>
>>> On Fri, Jan 12, 2018 at 10:51 AM, Keerthika Mahendralingam <
>>> keerth...@wso2.com> wrote:
>>>
 Hi Shazni,

 Please find the answers inline.

>
> 1. Does the user specify whether the ETag header should present in
> the response or not? Or is it always available if the cache mediator 
> is
> used?
>
 If the backend returns the response with an ETag header, the cache
 mediator always needs to validate the response before sending the cached
 response to the user.

>
>>- If it is available and ETag is present in the cached
>>response, make a request with "If-None-Match" header with the 
>> ETag value.
>>
>>
>>- If the server returns "304 Not Modified" response returns
>>the cached response to the user.
>>
>> 2. If the caller makes a request with "If-None-Match" header with
> the ETag value and if it matched, why would you need to respond with 
> the
> cached message. Shouldn't it be only 304 with empty message as the 
> response
> hasn't changed?
>
 I considered only the use case where the 

Re: [Architecture] [IS] SAML SSO Agent for .NET

2018-01-23 Thread Ruwan Abeykoon
Hi Dushan,
Thanks for sharing "Componentspace". It seems like a complete and
comprehensive solution.

The purpose of this "agent" (we need to rename it, as it is not an agent
but a library) is to be included in a VS solution. We have no plan to
install this library in IIS.

+1 on comprehensive documentation.
I think we need to include:
1. The architecture of the library, the rest of the app, and WSO2 IS.
2. What a developer has to do in VS (step by step).
3. How to change the values in production.

Cheers,
Ruwan

On Wed, Jan 24, 2018 at 4:02 AM, Dushan Abeyruwan  wrote:

> Hi Chiran,
>  Interesting work. Please do come up with documentation for the
> implementation you have done (i.e., a working sample illustration with
> images and the README.txt for the git project). I need to visualize the
> complete agent integration step by step. I had a look at the repo. I
> believe once the agent is installed into the .NET web application, we need
> to add the agent.dll reference and then complete the following
> configuration [1].
>   I used to work with [2] for some demos; however, I just need to
> understand the differences between Componentspace [2] and the agent
> feature that we are offering.
>
> [1]
>
> [web.config snippet — the XML markup was stripped by the mail archive;
> only the callback URL http://localhost:49763/sample/callback survives]
>
> [2] https://www.componentspace.com/
>
> Cheers,
> Dushan
>
> On Sun, Jan 21, 2018 at 10:22 PM, Chiran Wijesekara 
> wrote:
>
>> Architecture diagram is attached below. It's not showing up in the
>> original Email due to an issue with the format.
>>
>>
>> On Mon, Jan 22, 2018 at 10:56 AM, Chiran Wijesekara 
>> wrote:
>>
>>>
>>> *Introduction:*
>>>
>>> Suppose you have an ASP.NET web application, or you are going to create
>>> a new one. One of your major concerns would be to provide a secure
>>> mechanism for handling user authentication and authorization.
>>>
>>> With the introduction of this SAML agent, you can easily incorporate it
>>> into your ASP.NET web application, and it will take care of everything
>>> related to the SAML authentication mechanism.
>>>
>>> *Solution Architecture:*
>>>
>>>
>>> *Note: 2,7,8,3 of the above diagram denotes the resolving of the current
>>> request of interest.*
>>>
>>> The above diagram depicts the architecture of the .NET SAML agent. The
>>> agent is designed so that all requests coming to the ASP.NET web
>>> application are directed to the *FilteringHttpModule*, a class that
>>> implements the *IHttpModule* interface (i.e. a custom HTTP module). The
>>> *FilteringHttpModule* is responsible for handling SAML authentication
>>> related requests; it calls the relevant method of the *SAMLManager*
>>> class to process each request.
>>>
>>> *How to incorporate Agent into a given ASP.NET  web
>>> application:*
>>>
>>> This agent is developed so that it has the minimum possible
>>> dependencies on the ASP.NET web application. Hence, incorporating SAML
>>> authentication into an ASP.NET web app can be done with minimal effort.
>>>
>>> Following is the list of steps to configure the SAML agent for a given
>>> ASP.NET web application.
>>>
>>> The process of incorporating *SAML authentication with WSO2 Identity
>>> Server* via the SAML agent can be explained in a few steps as follows.
>>>
>>>1.
>>>
>>>*Add* - the agent.dll reference to your ASP.NET web application (you
>>>can get this via the NuGet package manager or from the git repo).
>>>2.
>>>
>>>*Configure* - the mandatory properties in your ASP.NET web
>>>application’s web.config file. Furthermore, get the .jks from the
>>>WSO2 Identity Server you are using and convert it to a PKCS#12
>>>keystore using keytool (or use your own PKCS#12). Add the .pfx / .p12
>>>to the Local Machine certificate store.
>>>3.
>>>
>>>*Register* - the “FilteringHttpModule” in your ASP.NET web
>>>application to handle the requests related to the SAML authentication
>>>mechanism.
>>>4.
>>>
>>>*Set* - your application’s login controls to point at the SAML
>>>endpoints. That is, suppose you have a login link in your web
>>>application; all you have to do is set the attribute href = “/samlsso”.
>>>
>>>
>>>
>>> Link to the repo: https://github.com/chirankavinda123/saml-sso-agent-DOT-NET
>>> 
>>> Any suggestion/recommendation to improve this agent's architecture would
>>> be much appreciated.
>>>
>>> Thank you.
>>> --
>>> *Chiran Wijesekara*
>>>
>>>
>>> *Software Engineering Intern | WSO2*Email: chir...@wso2.com
>>> Mobile: +94712990173web: www.wso2.com
>>>
>>> [image: https://wso2.com/signature] 
>>>
>>
>>
>>

Re: [Architecture] [RRT]Calculating a risk score for authentication requests

2018-01-23 Thread Ruwan Abeykoon
Hi Darshana,
Yes, we can use the same architecture in 5.3.0/5.4.0 and 5.5.0 if we do it
with a proper extension mechanism.
The only difference is how we call the function: with custom authenticators
written in Java on 5.3.0/5.4.0, and with JavaScript (unlocked) in 5.5.0.

What I am really proposing is to implement this "Risk Calculation/Risk
Evaluation" in separate micro-service(s), and only implement extension
functions within IS. This is the direction we are moving in.
The extension function simply offloads the real "Risk Evaluation" to a
fast external micro-service. The "Risk Calculator" can be any analytics
engine, including DAS, which is by definition heavy and slow.

Cheers,
Ruwan


On Tue, Jan 23, 2018 at 11:59 PM, Darshana Gunawardana 
wrote:

> Hi Pamoda,
>
> What are the use cases we try to implement with the calculated risk score?
>
> On Tue, Jan 23, 2018 at 10:43 PM, Ruwan Abeykoon  wrote:
>
>> Hi Dimuthu,
>> +1 on using existing infrastructure with IS.
>>
>> We need to implement "Risk Calculator" logic in DAS, with Spark and
>> Siddhi queries. This should not be inside the IS.
>>
>> What IS needs to do is to query the "Risk Data" with lucene while
>> performing the authentication flow. This component can be added as an
>> extension.
>>
>> What we want to do is to decouple "Risk Calculator" and "Risk Evaluator".
>>
>
> +1
>
> @Ruwan: If we wanted to adopt real time elevated authentication mechanism,
> can we use the same architecture? Or are you proposing different mechanism
> for that?
>
> Thanks,
>
>
>
>> Risk Calculator - should be implementable by any analytics engine, not
>> only WSO2 DAS.
>> Risk Evaluator - a simple Java function that runs a Lucene query.
>> The Lucene store needs to be very close to the IS cluster, as IS cannot
>> make any blocking calls to external systems during the authentication flow.
>>
>> I did a PoC for my proposed architecture. Please refer [1], which can now
>> be implemented with IS 5.5.0-M1. The same architecture can be used on IS
>> 5.3.0/5.4.0 with custom authenticators too, but in a harder way.
>>
>> [1] https://github.com/ruwanta/wso2is-examples/tree/master/
>> is530/example-functions/components/org.wso2.carbon.
>> identity.sample.extension.feedback
>>
>> Cheers,
>> Ruwan
>>
>>
>>
>>
>> On Fri, Jan 19, 2018 at 9:36 AM, Dimuthu Leelarathne 
>> wrote:
>>
>>> Hi Ruwan,
>>>
>>> Btw .. we are doing this for 5.X series.
>>>
>>> thanks,
>>> Dimuthu
>>>
>>>
>>> On Fri, Jan 19, 2018 at 9:34 AM, Dimuthu Leelarathne 
>>> wrote:
>>>
 Hi Ruwan,

 I am thinking of using the existing architecture as it is. Right now
 there are event listeners that publish data to DAS. I propose we reuse
 them as they are. Those event listeners that publish data can be
 X-EventListener, Y-EventListener, etc ... There are a lot of data that we
 can reuse in IS-analytics.


 ​

 Whatever the risk calculator does is to reuse the existing data-stores
 as the above diagram.

 thanks,
 Dimuthu

 ​

 On Fri, Jan 19, 2018 at 9:04 AM, Ruwan Abeykoon 
 wrote:

> Hi Pamoda,
> Can we enhance the architecture a little bit. We need to decouple
> "Risk Calculator" and "Identity Framework" further.
>
> IS needs a mechanism to receive the feedback from the pub/sub channel
> and make changes in authentication flow.
>
> 
>
> 1. The Temporal data is a Lucene store. Held at IS side. Central
> location for all IS cluster.
> 2. MQ is used, so that any third  party can publish "Risk" or any
> other information.
> 3. The authenticator will not request anything from the "Risk
> Calculator", but queries its own store. This will make things more
> resilient on chaos scenarios.
>
>
> This allows us to do lot more, e.g
>
>
>
>-
>
>Use stream analytics to make fast decisions.
>-
>
>   E.g. Too many authentications attempts coming from a particular
>   IP, on a given time window, then upgrade the flow to Two factor
>   authentication.
>   -
>
>Use batch analytics to perform simple behavioural decisions
>-
>
>   E.g. Users who has logged in and has session(not logged out),
>   tries to log in on another machine, could be prompted with another 
> screen
>   saying they have existing sessions on other machine.
>   -
>
>Throttling and Shaping based on billing tier exceeding conditions.
>-
>
>Use ML to do advanced behavioural decisions
>-
>
>   (Seshika will be interested in this)
>
>
> e.g.
>
> var agentChanged = queryAnalytics('lucene', ' e.g. name
> : agent-change-stream, subject: authenticatedSubjectId');
>
> if 

Re: [Architecture] [IS] SAML SSO Agent for .NET

2018-01-23 Thread Dushan Abeyruwan
Hi Chiran,
 Interesting work. Please do come up with documentation for the
implementation you have done (i.e., a working sample illustration with
images and the README.txt for the git project). I need to visualize the
complete agent integration step by step. I had a look at the repo. I
believe once the agent is installed into the .NET web application, we need
to add the agent.dll reference and then complete the following
configuration [1].
  I used to work with [2] for some demos; however, I just need to
understand the differences between Componentspace [2] and the agent
feature that we are offering.

[1]

[web.config snippet — the XML markup was stripped by the mail archive;
only the callback URL http://localhost:49763/sample/callback survives]

[2] https://www.componentspace.com/

Cheers,
Dushan

On Sun, Jan 21, 2018 at 10:22 PM, Chiran Wijesekara 
wrote:

> Architecture diagram is attached below. It's not showing up in the
> original Email due to an issue with the format.
>
>
> On Mon, Jan 22, 2018 at 10:56 AM, Chiran Wijesekara 
> wrote:
>
>>
>> *Introduction:*
>>
>> Suppose you have an ASP.NET web application, or you are going to create
>> a new one. One of your major concerns would be to provide a secure
>> mechanism for handling user authentication and authorization.
>>
>> With the introduction of this SAML agent, you can easily incorporate it
>> into your ASP.NET web application, and it will take care of everything
>> related to the SAML authentication mechanism.
>>
>> *Solution Architecture:*
>>
>>
>> *Note: 2,7,8,3 of the above diagram denotes the resolving of the current
>> request of interest.*
>>
>> The above diagram depicts the architecture of the .NET SAML agent. The
>> agent is designed so that all requests coming to the ASP.NET web
>> application are directed to the *FilteringHttpModule*, a class that
>> implements the *IHttpModule* interface (i.e. a custom HTTP module). The
>> *FilteringHttpModule* is responsible for handling SAML authentication
>> related requests; it calls the relevant method of the *SAMLManager*
>> class to process each request.
>>
>> *How to incorporate Agent into a given ASP.NET  web
>> application:*
>>
>> This agent is developed so that it has the minimum possible
>> dependencies on the ASP.NET web application. Hence, incorporating SAML
>> authentication into an ASP.NET web app can be done with minimal effort.
>>
>> Following is the list of steps to configure the SAML agent for a given
>> ASP.NET web application.
>>
>> The process of incorporating *SAML authentication with WSO2 Identity
>> Server* via the SAML agent can be explained in a few steps as follows.
>>
>>1.
>>
>>*Add* - the agent.dll reference to your ASP.NET web application (you
>>can get this via the NuGet package manager or from the git repo).
>>2.
>>
>>*Configure* - the mandatory properties in your ASP.NET web
>>application’s web.config file. Furthermore, get the .jks from the
>>WSO2 Identity Server you are using and convert it to a PKCS#12
>>keystore using keytool (or use your own PKCS#12). Add the .pfx / .p12
>>to the Local Machine certificate store.
>>3.
>>
>>*Register* - the “FilteringHttpModule” in your ASP.NET web
>>application to handle the requests related to the SAML authentication
>>mechanism.
>>4.
>>
>>*Set* - your application’s login controls to point at the SAML
>>endpoints. That is, suppose you have a login link in your web
>>application; all you have to do is set the attribute href = “/samlsso”.
>>
>>
>>
>> Link to the repo: https://github.com/chirankavinda123/saml-sso-agent-DOT-NET
>> 
>> Any suggestion/recommendation to improve this agent's architecture would
>> be much appreciated.
>>
>> Thank you.
>> --
>> *Chiran Wijesekara*
>>
>>
>> *Software Engineering Intern | WSO2*Email: chir...@wso2.com
>> Mobile: +94712990173web: www.wso2.com
>>
>> [image: https://wso2.com/signature] 
>>
>
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Dushan Abeyruwan | Architect
Technical Support,MV
PMC Member, Apache Synapse
WSO2 Inc. http://wso2.com/
Blog:*http://www.dushantech.com/ *
LinkedIn:*https://www.linkedin.com/in/dushanabeyruwan
*
Mobile:(001)408-791-9312
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [RRT] Improving caching based on cache-control and ETag headers

2018-01-23 Thread Dushan Abeyruwan
Hi Keerthika,
  Yes, this is a long-pending initiative that is required in the cache
mediator. Anyway, I believe it would be more meaningful if you drew a flow
diagram and a sequence diagram, so that the audience on this list can fully
understand the picture and the interaction of the middleman (i.e. the
integration layer); that will also help when writing the documentation.

Cheers,
Dushan

On Fri, Jan 12, 2018 at 1:37 AM, Keerthika Mahendralingam <
keerth...@wso2.com> wrote:

> +1. Thanks Riyafa for the suggestion.
>
>
> Thanks,
> Keerthika.
>
> On Fri, Jan 12, 2018 at 3:05 PM, Riyafa Abdul Hameed 
> wrote:
>
>> Hi Keerthika,
>>
>> We should have an option for disregarding the cache-control headers and
>> the default value should be that the cache-control headers be disregarded.
>> This is because the current cache mediator is written so that it is fully
>> backward compatible with the older versions of the cache mediator. Anyone
>> using the cache mediator in a Synapse configuration in an older version can use
>> the same synapse configuration in the new version and can expect the same
>> behavior. If he/she wants to make use of the new features he/she may do so
>> by editing the synapse configurations.
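The behaviour described above — honouring Cache-Control only when a new option is enabled, with the old always-cache behaviour as the backward-compatible default — could be sketched roughly as follows. The class name and the option name are invented for illustration; this is not the cache mediator's actual API.

```java
import java.util.Locale;

// Sketch only: decide whether a backend response may be stored in the cache.
// "honorCacheControlHeaders" stands in for the proposed configuration option;
// when it is off (the default), the old behaviour is kept and headers are ignored.
class CacheControlPolicy {
    private final boolean honorCacheControlHeaders;

    CacheControlPolicy(boolean honorCacheControlHeaders) {
        this.honorCacheControlHeaders = honorCacheControlHeaders;
    }

    // cacheControlHeader is the raw header value, or null when absent.
    boolean isCacheable(String cacheControlHeader) {
        if (!honorCacheControlHeaders || cacheControlHeader == null) {
            return true; // backward-compatible default: always cache
        }
        String value = cacheControlHeader.toLowerCase(Locale.ROOT);
        return !value.contains("no-store") && !value.contains("no-cache");
    }
}
```

With the option disabled, an older Synapse configuration behaves exactly as before; enabling it makes the mediator skip caching for `no-store`/`no-cache` responses.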
>>
>> Thanks,
>> Riyafa
>>
>>
>> On Fri, Jan 12, 2018 at 12:24 PM, Keerthika Mahendralingam <
>> keerth...@wso2.com> wrote:
>>
>>> Thanks Isuru. Will check the existing functionality.
>>>
>>> @Vijitha,
>>> +1 for providing the configuration option for omitting the cache-control
>>> headers.
>>>
>>> @Sanjeewa
>>> Will check with the latest cache mediator.
>>>
>>> Thanks,
>>> Keerthika.
>>>
>>> On Fri, Jan 12, 2018 at 12:16 PM, Vijitha Ekanayake 
>>> wrote:
>>>
 Hi Sanjeewa,


 On Fri, Jan 12, 2018 at 12:01 PM, Sanjeewa Malalgoda  wrote:

> So i think we can add latest cache mediator dependency to API Manager
> 2.2.0 branch and test this feature.
> If there are any gaps in documents or implementation we will be able
> to fix them and officially support this feature from 2.2.0 onward.
> WDYT?
>

 +1 for this approach.

>
> @Vijitha, Cache mediator can engage per API basis. So if someone do
> not interested on caching they can simply remove cache mediator for that
> particular mediation flow.
>

 I meant only an option to disregard HTTP caching, not response caching.
 Wouldn't it be valuable to have a configuration option to disregard HTTP
 caching while keeping the default response caching?

 Thanks.

>
> Thanks,
> Sanjeewa.
>
> On Fri, Jan 12, 2018 at 11:07 AM, Isuru Udana  wrote:
>
>> Hi Keerthika,
>>
>> ETag caching support is already implemented at the http transport
>> level.
>> This feature was introduced long time ago but still the documentation
>> is not added to the wiki.
>> Please refer to following jiras for more information.
>>
>>
>> https://wso2.org/jira/browse/ESBJAVA-3504
>>
>> https://wso2.org/jira/browse/DOCUMENTATION-1435
>>
>>
>> Thanks.
>>
>>
>> On Fri, Jan 12, 2018 at 10:51 AM, Keerthika Mahendralingam <
>> keerth...@wso2.com> wrote:
>>
>>> Hi Shazni,
>>>
>>> Please find the answers inline.
>>>

 1. Does the user specify whether the ETag header should present in
 the response or not? Or is it always available if the cache mediator is
 used?

>>> If the backend returns the response with an ETag header, the cache
>>> mediator always needs to validate the response before sending the cached
>>> response to the user.
>>>

>- If it is available and ETag is present in the cached
>response, make a request with "If-None-Match" header with the ETag 
> value.
>
>
>- If the server returns "304 Not Modified" response returns
>the cached response to the user.
>
> 2. If the caller makes a request with "If-None-Match" header with
 the ETag value and if it matched, why would you need to respond with 
 the
 cached message. Shouldn't it be only 304 with empty message as the 
 response
 hasn't changed?

>>> I considered only the use case where the backend server response has
>>> the ETag header, but we need to consider the request as well. As you
>>> said, if the user sends a request with an "If-None-Match" header carrying
>>> the ETag value, and it matches the cached response's ETag value, then we
>>> need to send a 304 response. If it does not match, the cache mediator
>>> should validate the cached response with the backend and return the
>>> response to the user. Thanks for pointing this out.
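The request-side handling described above reduces to a small decision: once the cached entry has been revalidated with the backend, compare the client's If-None-Match value with the cached ETag. A minimal sketch, with illustrative names rather than the mediator's actual code:

```java
// Sketch only: choose the status code for a client request, given the ETag of
// the (already revalidated) cached response and the client's If-None-Match
// header (null when the client sent none).
class ETagDecision {
    static int responseStatus(String cachedETag, String ifNoneMatch) {
        if (cachedETag != null && cachedETag.equals(ifNoneMatch)) {
            return 304; // client's copy is current; send no body
        }
        return 200;     // send the cached body
    }
}
```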
>>>


> *Honor "max-age" 

Re: [Architecture] [RRT]Calculating a risk score for authentication requests

2018-01-23 Thread Darshana Gunawardana
Hi Pamoda,

What are the use cases we try to implement with the calculated risk score?

On Tue, Jan 23, 2018 at 10:43 PM, Ruwan Abeykoon  wrote:

> Hi Dimuthu,
> +1 on using existing infrastructure with IS.
>
> We need to implement "Risk Calculator" logic in DAS, with Spark and Siddhi
> queries. This should not be inside the IS.
>
> What IS needs to do is to query the "Risk Data" with lucene while
> performing the authentication flow. This component can be added as an
> extension.
>
> What we want to do is to decouple "Risk Calculator" and "Risk Evaluator".
>

+1

@Ruwan: If we wanted to adopt real time elevated authentication mechanism,
can we use the same architecture? Or are you proposing different mechanism
for that?

Thanks,



> Risk Calculator - should be implementable by any analytics engine, not
> only WSO2 DAS.
> Risk Evaluator - a simple Java function that runs a Lucene query. The
> Lucene store needs to be very close to the IS cluster, as IS cannot make
> any blocking calls to external systems during the authentication flow.
>
> I did a PoC for my proposed architecture. Please refer [1], which can now
> be implemented with IS 5.5.0-M1. The same architecture can be used on IS
> 5.3.0/5.4.0 with custom authenticators too, but in a harder way.
>
> [1] https://github.com/ruwanta/wso2is-examples/tree/master/is530/example-
> functions/components/org.wso2.carbon.identity.sample.extension.feedback
>
> Cheers,
> Ruwan
>
>
>
>
> On Fri, Jan 19, 2018 at 9:36 AM, Dimuthu Leelarathne 
> wrote:
>
>> Hi Ruwan,
>>
>> Btw .. we are doing this for 5.X series.
>>
>> thanks,
>> Dimuthu
>>
>>
>> On Fri, Jan 19, 2018 at 9:34 AM, Dimuthu Leelarathne 
>> wrote:
>>
>>> Hi Ruwan,
>>>
>>> I am thinking of using the existing architecture as it is. Right now
>>> there is an eventing listeners that publish data to DAS. I propose we reuse
>>> it as it is. Those event listeners that publish data can be
>>> X-EventListener, Y-EventListener, etc ... There are a lot of data that we
>>> can reuse in IS-analytics.
>>>
>>>
>>> ​
>>>
>>> Whatever the risk calculator does is to reuse the existing data-stores
>>> as the above diagram.
>>>
>>> thanks,
>>> Dimuthu
>>>
>>> ​
>>>
>>> On Fri, Jan 19, 2018 at 9:04 AM, Ruwan Abeykoon  wrote:
>>>
 Hi Pamoda,
 Can we enhance the architecture a little bit. We need to decouple "Risk
 Calculator" and "Identity Framework" further.

 IS needs a mechanism to receive the feedback from the pub/sub channel
 and make changes in authentication flow.

 

 1. The Temporal data is a Lucene store. Held at IS side. Central
 location for all IS cluster.
 2. MQ is used, so that any third  party can publish "Risk" or any other
 information.
 3. The authenticator will not request anything from the "Risk
 Calculator", but queries its own store. This will make things more
 resilient on chaos scenarios.


 This allows us to do lot more, e.g



-

Use stream analytics to make fast decisions.
-

   E.g. Too many authentications attempts coming from a particular
   IP, on a given time window, then upgrade the flow to Two factor
   authentication.
   -

Use batch analytics to perform simple behavioural decisions
-

   E.g. Users who has logged in and has session(not logged out),
   tries to log in on another machine, could be prompted with another 
 screen
   saying they have existing sessions on other machine.
   -

Throttling and Shaping based on billing tier exceeding conditions.
-

Use ML to do advanced behavioural decisions
-

   (Seshika will be interested in this)


 e.g.

 var agentChanged = queryAnalytics('lucene', ' e.g. name
 : agent-change-stream, subject: authenticatedSubjectId');

 if (agentChanged) {

   executeStep({'id' : '2'});

 }

 On Fri, Jan 19, 2018 at 8:49 AM, Pamoda Wimalasiri 
 wrote:

> Hi all,
>
> The figure shows a high-level architecture for the risk score
> calculation.
> [image: Inline image 2]
>
>
>- Authentication Data Publisher in the Identity Framework
>publishes the authentication events to a database
>- Authenticator requests a risk score from the risk score
>calculator.
>- Risk score calculator accesses the user login and geolocation
>databases and calculates the risk score.
>
> We will be considering
>
> IP address
> Geolocation
> Number of failed attempts between two successful logins
>
> when generating the rules to calculate the risk score.
>
> Regards,
> Pamoda
>
>
> On Tue, Jan 16, 

Re: [Architecture] [RRT]Calculating a risk score for authentication requests

2018-01-23 Thread Ashen Weerathunga
Hi All,

Currently, we have implemented two types of alerts [1] in IS Analytics to
monitor suspicious login attempts and abnormal login sessions. We have
defined a set of rules to detect such abnormal login activity using Spark
and Siddhi queries, so IMO you can improve and reuse them for calculating
the risk score as well.

Right now users only get alerts, and they have to act themselves to
mitigate any suspicious login activity. But if we can do the risk score
calculation in real time, as in the proposed architecture, it can be used
to mitigate risks from the identity framework level itself. We may be able
to block the user, or at least terminate the user's session immediately, if
the risk score is greater than a particular threshold value. But again, we
have to be careful with false positives, since they can affect the user
experience.

[1] https://docs.wso2.com/display/IS540/Alert+Types
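The threshold idea above could take a shape like the following sketch. The enum, the thresholds, and the class name are all invented for illustration; tuning the thresholds is exactly the false-positive trade-off mentioned.

```java
// Sketch only: map a calculated risk score (0.0 - 1.0) to a mitigation action.
// Threshold values are arbitrary placeholders.
class RiskPolicy {
    enum Action { ALLOW, STEP_UP, TERMINATE_SESSION }

    static Action decide(double riskScore) {
        if (riskScore >= 0.9) {
            return Action.TERMINATE_SESSION; // almost certainly suspicious
        }
        if (riskScore >= 0.5) {
            return Action.STEP_UP; // ask for a second factor instead of blocking
        }
        return Action.ALLOW; // keep the user experience smooth
    }
}
```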

Thanks,
Ashen

On Fri, Jan 19, 2018 at 9:34 AM, Dimuthu Leelarathne 
wrote:

> Hi Ruwan,
>
> I am thinking of using the existing architecture as it is. Right now there
> are event listeners that publish data to DAS. I propose we reuse them as
> they are. Those event listeners that publish data can be X-EventListener,
> Y-EventListener, etc ... There are a lot of data that we can reuse in
> IS-analytics.
>
>
> ​
>
> Whatever the risk calculator does is to reuse the existing data-stores as
> the above diagram.
>
> thanks,
> Dimuthu
>
> ​
>
> On Fri, Jan 19, 2018 at 9:04 AM, Ruwan Abeykoon  wrote:
>
>> Hi Pamoda,
>> Can we enhance the architecture a little bit. We need to decouple "Risk
>> Calculator" and "Identity Framework" further.
>>
>> IS needs a mechanism to receive the feedback from the pub/sub channel and
>> make changes in authentication flow.
>>
>> 
>>
>> 1. The Temporal data is a Lucene store. Held at IS side. Central location
>> for all IS cluster.
>> 2. MQ is used, so that any third  party can publish "Risk" or any other
>> information.
>> 3. The authenticator will not request anything from the "Risk
>> Calculator", but queries its own store. This will make things more
>> resilient on chaos scenarios.
>>
>>
>> This allows us to do lot more, e.g
>>
>>
>>
>>-
>>
>>Use stream analytics to make fast decisions.
>>-
>>
>>   E.g. Too many authentications attempts coming from a particular
>>   IP, on a given time window, then upgrade the flow to Two factor
>>   authentication.
>>   -
>>
>>Use batch analytics to perform simple behavioural decisions
>>-
>>
>>   E.g. Users who has logged in and has session(not logged out),
>>   tries to log in on another machine, could be prompted with another 
>> screen
>>   saying they have existing sessions on other machine.
>>   -
>>
>>Throttling and Shaping based on billing tier exceeding conditions.
>>-
>>
>>Use ML to do advanced behavioural decisions
>>-
>>
>>   (Seshika will be interested in this)
>>
>>
>> e.g.
>>
>> var agentChanged = queryAnalytics('lucene', ' e.g. name :
>> agent-change-stream, subject: authenticatedSubjectId');
>>
>> if (agentChanged) {
>>
>>   executeStep({'id' : '2'});
>>
>> }
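The `agentChanged` lookup in the script above would be backed by a Java extension function on the IS side. Below is a minimal sketch of that function's logic, using a plain in-memory map where the proposal would run a Lucene query against the temporal store; all names are illustrative, not an actual IS API.

```java
import java.util.Map;

// Sketch only: report whether the subject's browser fingerprint differs from
// the last one recorded. In the proposed architecture the lookup would be a
// Lucene query against the temporal store, not a Map.
class AgentChangeEvaluator {
    private final Map<String, String> lastKnownAgents; // subjectId -> fingerprint

    AgentChangeEvaluator(Map<String, String> lastKnownAgents) {
        this.lastKnownAgents = lastKnownAgents;
    }

    boolean agentChanged(String subjectId, String currentFingerprint) {
        String previous = lastKnownAgents.get(subjectId);
        // A first-time subject has no baseline, so it is not counted as a change.
        return previous != null && !previous.equals(currentFingerprint);
    }
}
```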
>>
>> On Fri, Jan 19, 2018 at 8:49 AM, Pamoda Wimalasiri 
>> wrote:
>>
>>> Hi all,
>>>
>>> The figure shows a high-level architecture for the risk score
>>> calculation.
>>> [image: Inline image 2]
>>>
>>>
>>>- Authentication Data Publisher in the Identity Framework publishes
>>>the authentication events to a database
>>>- Authenticator requests a risk score from the risk score calculator.
>>>- Risk score calculator accesses the user login and geolocation
>>>databases and calculates the risk score.
>>>
>>> We will be considering
>>>
>>> IP address
>>> Geolocation
>>> Number of failed attempts between two successful logins
>>>
>>> when generating the rules to calculate the risk score.
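The three factors listed above could be combined into a score in many ways; here is one naive weighted sketch. The weights and the cap are invented for illustration and are not part of the proposal.

```java
// Sketch only: combine IP change, geolocation change, and the failed-attempt
// count into a score in [0.0, 1.0]. All weights are illustrative.
class RiskScoreCalculator {
    static double score(boolean ipChanged, boolean geoChanged, int failedAttempts) {
        double score = 0.0;
        if (ipChanged)  score += 0.3;
        if (geoChanged) score += 0.4;
        // Failed attempts since the last successful login, capped at 0.3 so
        // this factor alone cannot dominate the score.
        score += Math.min(0.3, failedAttempts * 0.1);
        return Math.min(1.0, score);
    }
}
```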
>>>
>>> Regards,
>>> Pamoda
>>>
>>>
>>> On Tue, Jan 16, 2018 at 9:48 AM, Hasitha Hiranya 
>>> wrote:
>>>
 Hi Ruwan,


 On Tue, Jan 16, 2018 at 9:39 AM, Ruwan Abeykoon 
 wrote:

> Hi Hasitha,
> There is a question about MAC address, which is not available beyond
> an IP router. What we do is browser fingerprinting with a cookie or
> something.
>
> *>> i.e. I usually log in to my personal Gmail using my phone. If I
> suddenly use my Mac machine, Google sends an email asking if this is you. *
> IS 5.5.0 has default ability to do this with "Conditional
> Authentication", by fingerprinting the browser.
>
> Got it! Thanks for the explanation.

>
>
> Cheers,
> Ruwan
>
>
> On Tue, Jan 16, 2018 at 9:20 AM, Hasitha Hiranya 
> wrote:
>
>> Hi all,
>>
>> We can also consider the MAC address or some machine ID of last
>> successful login as 

Re: [Architecture] [RRT]Calculating a risk score for authentication requests

2018-01-23 Thread Ruwan Abeykoon
Hi Dimuthu,
+1 on using existing infrastructure with IS.

We need to implement "Risk Calculator" logic in DAS, with Spark and Siddhi
queries. This should not be inside the IS.

What IS needs to do is to query the "Risk Data" with lucene while
performing the authentication flow. This component can be added as an
extension.

What we want to do is to decouple "Risk Calculator" and "Risk Evaluator".
Risk Calculator - should be implementable by any analytics engine, not
only WSO2 DAS.
Risk Evaluator - a simple Java function that runs a Lucene query. The
Lucene store needs to be very close to the IS cluster, as IS cannot make
any blocking calls to external systems during the authentication flow.

I did a PoC for my proposed architecture. Please refer to [1], which can
now be implemented with IS 5.5.0-M1. The same architecture can be used on
IS 5.3.0/5.4.0 with custom authenticators too, but in a harder way.

[1]
https://github.com/ruwanta/wso2is-examples/tree/master/is530/example-functions/components/org.wso2.carbon.identity.sample.extension.feedback

Cheers,
Ruwan




On Fri, Jan 19, 2018 at 9:36 AM, Dimuthu Leelarathne 
wrote:

> Hi Ruwan,
>
> Btw .. we are doing this for 5.X series.
>
> thanks,
> Dimuthu
>
>
> On Fri, Jan 19, 2018 at 9:34 AM, Dimuthu Leelarathne 
> wrote:
>
>> Hi Ruwan,
>>
>> I am thinking of using the existing architecture as it is. Right now
>> there are event listeners that publish data to DAS. I propose we reuse
>> them as they are. Those event listeners that publish data can be
>> X-EventListener, Y-EventListener, etc ... There are a lot of data that we
>> can reuse in IS-analytics.
>>
>>
>> ​
>>
>> Whatever the risk calculator does is to reuse the existing data-stores as
>> the above diagram.
>>
>> thanks,
>> Dimuthu
>>
>> ​
>>
>> On Fri, Jan 19, 2018 at 9:04 AM, Ruwan Abeykoon  wrote:
>>
>>> Hi Pamoda,
>>> Can we enhance the architecture a little bit. We need to decouple "Risk
>>> Calculator" and "Identity Framework" further.
>>>
>>> IS needs a mechanism to receive the feedback from the pub/sub channel
>>> and make changes in authentication flow.
>>>
>>> 
>>>
>>> 1. The Temporal data is a Lucene store. Held at IS side. Central
>>> location for all IS cluster.
>>> 2. MQ is used, so that any third  party can publish "Risk" or any other
>>> information.
>>> 3. The authenticator will not request anything from the "Risk
>>> Calculator", but queries its own store. This will make things more
>>> resilient on chaos scenarios.
>>>
>>>
>>> This allows us to do a lot more, e.g.
>>>
>>>    - Use stream analytics to make fast decisions.
>>>      - E.g. too many authentication attempts coming from a particular IP
>>>        in a given time window: upgrade the flow to two-factor
>>>        authentication.
>>>    - Use batch analytics to perform simple behavioural decisions.
>>>      - E.g. users who have logged in and have a session (not logged out),
>>>        then try to log in on another machine, could be prompted with a
>>>        screen saying they have existing sessions on another machine.
>>>    - Throttling and shaping based on billing-tier-exceeded conditions.
>>>    - Use ML to do advanced behavioural decisions.
>>>      - (Seshika will be interested in this.)
>>>
>>>
>>> e.g.
>>>
>>> var agentChanged = queryAnalytics('lucene',
>>>     'name: agent-change-stream, subject: authenticatedSubjectId');
>>>
>>> if (agentChanged) {
>>>   executeStep({'id': '2'});
>>> }
>>>
>>> On Fri, Jan 19, 2018 at 8:49 AM, Pamoda Wimalasiri 
>>> wrote:
>>>
 Hi all,

 The figure shows a high-level architecture for the risk score
 calculation.
 [image: Inline image 2]


- Authentication Data Publisher in the Identity Framework publishes
the authentication events to a database
- Authenticator requests a risk score from the risk score
calculator.
- Risk score calculator accesses the user login and geolocation
databases and calculates the risk score.

 We will be considering

 IP address
 Geolocation
 Number of failed attempts between two successful logins

 when generating the rules to calculate the risk score.

 Regards,
 Pamoda


 On Tue, Jan 16, 2018 at 9:48 AM, Hasitha Hiranya 
 wrote:

> Hi Ruwan,
>
>
> On Tue, Jan 16, 2018 at 9:39 AM, Ruwan Abeykoon 
> wrote:
>
>> Hi Hasitha,
>> There is a question about MAC address, which is not available beyond
>> an IP router. What we do is browser fingerprinting with a cookie or
>> something.
>>
>> *>> i.e I usually login to my personal Gmail using my phone. If I use
>> my MAC machine suddenly, google sends an email if this 

[Architecture] [RRT] Calculate Age value of cached response

2018-01-23 Thread Keerthika Mahendralingam
Hi All,

I am trying to add an Age header when returning the cached response (as
discussed in [1]). The steps are as follows:

   - If the response doesn't have a Date header, add one (with the current
   time) when caching the response [2].
   - For a subsequent request, take the Date header value of the cached
   response, the timeout, and the current time, and compute the TTL value as:

TTL = (DateHeaderValue + timeout) - CurrentTime


   - Set the TTL value as Age header.
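The TTL computation in the steps above can be sketched as follows (the class and method names are illustrative, not the actual mediator code):

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch of: TTL = (DateHeaderValue + timeout) - CurrentTime
public class AgeHeaderCalculator {

    // dateHeader: the Date header value stored with the cached response
    // timeout: the configured cache timeout
    public static long remainingTtlSeconds(Instant dateHeader, Duration timeout, Instant now) {
        long ttl = Duration.between(now, dateHeader.plus(timeout)).getSeconds();
        // A cached entry past its timeout has no remaining freshness.
        return Math.max(ttl, 0);
    }
}
```

One point worth double-checking: the HTTP caching specs define Age as the time elapsed since the response was generated (roughly CurrentTime - DateHeaderValue), whereas the formula above is the remaining TTL, so it may be worth confirming which value the header is intended to carry.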

Please let me know if you have any concerns on this.

[1]. [Architecture][RRT] Improving caching based on cache-control and ETag
headers
[2]. https://tools.ietf.org/html/rfc2616#page-124

Thanks,
Keerthika.
-- 

Keerthika Mahendralingam
Software Engineer
Mobile: +94 (0) 776 121144
keerth...@wso2.com
WSO2, Inc.
lean . enterprise . middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [MB4] HA Support

2018-01-23 Thread Waruna Jayaweera
Hi Maryam,

We also store permissions-related data in both memory and persistent
storage, but permissions will be retrieved from memory. In that case, if new
permissions are added on the active node, the passive node will have
inconsistent permissions.
I hope we can use the same approach above for permissions as well.
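The reload-on-activation approach discussed here could look roughly like this. This is a self-contained sketch with hypothetical names, not the broker code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch: keep serving permissions from memory, but rebuild the in-memory
// view from persistent storage when this node is promoted to active, so any
// permissions added on the previous active node are picked up.
public class PermissionCache {

    private final Map<String, String> inMemory = new ConcurrentHashMap<>();
    private final Supplier<Map<String, String>> persistentStore;

    public PermissionCache(Supplier<Map<String, String>> persistentStore) {
        this.persistentStore = persistentStore;
    }

    // Fast path used while serving requests.
    public String lookup(String resource) {
        return inMemory.get(resource);
    }

    // Invoked by the passive -> active state-change listener.
    public void onBecameActive() {
        inMemory.clear();
        inMemory.putAll(persistentStore.get());
    }
}
```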

Thanks,
Waruna





On Fri, Jan 19, 2018 at 6:01 PM, Maryam Ziyad  wrote:

> Hi Asanka,
>
> The required queue/binding/exchange related data will be loaded once the
> previously passive node is notified of becoming the active node.
>
> This is considering the possibility of inconsistencies in a continuous
> sync, which could also require reloading on becoming the active node.
>
> Thank you,
> Maryam
>
> On Mon, Jan 15, 2018 at 8:14 AM, Asanka Abeyweera 
> wrote:
>
>> Hi Maryam,
>>
>> Are we keeping the passive node in sync with the active node or are we
>> reloading context and message data when a passive node becomes active?
>>
>> On Fri, Jan 12, 2018 at 6:46 PM, Maryam Ziyad  wrote:
>>
>>> Hi All,
>>>
>>> As mentioned, an extensible point "org.wso2.broker.coordination.HaStrategy"
>>> has been introduced, which can be implemented to provide variations upon
>>> which HA support will be based.
>>>
>>> Any new/custom implementation would have to notify the listeners
>>> listening on node state changes (when a node state changes from active to
>>> passive or passive to active).
>>>
>>> Currently the modules listening on state changes are as follows:
>>>
>>>- broker-core
>>>- broker-transport
>>>- broker-rest-runner
>>>
>>> All nodes will start up in "passive" mode, and only change state to
>>> "active" on notification by the HA strategy. This requires that the
>>> listeners are registered prior to starting the HA strategy (identification
>>> of the active node).
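The extension point described above might be sketched like this. This is a simplified, hypothetical version, not the actual org.wso2.broker.coordination.HaStrategy:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Simplified sketch of the extension point: listeners register before the
// strategy starts, and every node begins as passive until the strategy
// notifies it otherwise.
public abstract class HaStrategy {

    public interface NodeStateListener {
        void activate();    // passive -> active
        void deactivate();  // active -> passive
    }

    private final List<NodeStateListener> listeners = new CopyOnWriteArrayList<>();
    private volatile boolean active = false;  // all nodes start up passive

    public void registerListener(NodeStateListener listener) {
        listeners.add(listener);
    }

    public boolean isActive() {
        return active;
    }

    // Concrete strategies call these when the underlying election changes state.
    protected void notifyBecameActive() {
        active = true;
        listeners.forEach(NodeStateListener::activate);
    }

    protected void notifyBecamePassive() {
        active = false;
        listeners.forEach(NodeStateListener::deactivate);
    }

    // Example mapping for an RDBMS-election-style strategy:
    // COORDINATOR -> active; CANDIDATE and ELECTION -> passive.
    public static class ElectionBasedStrategy extends HaStrategy {
        public void onCoordinator() { notifyBecameActive(); }
        public void onCandidateOrElection() { notifyBecamePassive(); }
    }
}
```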
>>>
>>>
>>> *Default RDBMS Coordinator Election based HA Strategy *
>>>
>>> With the default HA strategy (RdbmsHaStrategy - implemented based on the
>>> RDBMS based coordinator election approach [1]), the possible coordination
>>> node states are mapped to active/passive as follows:
>>>
>>>- COORDINATOR - Active
>>>- CANDIDATE - Passive
>>>- ELECTION - Passive
>>>
>>> Thus notification of the HA listeners happens when:
>>>
>>>- election is triggered and the node was previously the coordinator
>>>node (active → passive)
>>>- election resulted in the node becoming the coordinator node
>>>(passive → active)
>>>
>>> [1] https://github.com/wso2/message-broker/pull/74
>>>
>>> Feedback/suggestions would be highly appreciated.
>>>
>>> Thank you,
>>> Maryam
>>>
>>> On Fri, Jan 12, 2018 at 6:41 PM, Maryam Ziyad  wrote:
>>>
 Hi Asanka,

 Renamed "haConfig" to "failover" based on the offline discussion.

 Thank you,
 Maryam

 On Tue, Dec 19, 2017 at 7:05 PM, Asanka Abeyweera 
 wrote:

> Hi Maryam,
>
> Shall we rename the "haConfig" to "ha-clustering"? I'm not sure if we
> should use camel case in the yaml config.
>
> On Tue, Dec 19, 2017 at 4:42 PM, Maryam Ziyad 
> wrote:
>
>> Hi All,
>>
>> We are currently working on introducing $subject [1]. Please find
>> below a high level description of the approach.
>>
>> An extension point (HaStrategy) will be introduced, allowing
>> straightforward introduction of different implementations of 
>> identification
>> of the active node, where the only requirements would be that these
>> approaches extend the common class and invoke particular methods when the
>> node state changes.
>>
>> The broker-core and broker-transport (broker-amqp) modules would
>> introduce listeners to receive notifications of node states changes
>> (active/passive), and change behaviour accordingly.
>>
>>
>> [image: HA architecture diagram]
>>
>> *Configuration*
>>
>> The HA related configuration would be specified in the broker.yaml
>> file including whether HA is enabled and the HA strategy to use.
>>
>> haConfig:
>>  enabled: true
>>  strategy: org.wso2.broker.coordination.rdbms.RdbmsHaStrategy
>>
>>
>> The basic/initial HA strategy implementation will be RdbmsHaStrategy,
>> based on the RDBMS coordinator election approach previously introduced
>> for MB 3.2.0 [2, 3]. If HA is enabled but no strategy is specified,
>> RdbmsHaStrategy will be used.
>>
>>
>> *RDBMS Coordinator Election based HA Strategy (RdbmsHaStrategy)*
>>
>> The RDBMS based coordinator election algorithm would be extended to
>> provide HA support, by specifying the node elected as coordinator to 
>> always
>> be the active node, while the other node(s) will be considered passive. 
>> The
>> RDBMS coordinator election based approach, which would also be the 
>> default
>> HA 

Re: [Architecture] Password Rotation Policy Authenticator

2018-01-23 Thread Johann Nallathamby
According to the discussion we had with Jayanga in the solution
architecture team's bootcamp meeting, this is something we are considering
in the WUM roadmap, to make connectors first class citizens of WUM.

*@Jayanga/Kishanthan*: your thoughts.

Regards,
Johann.

On Tue, Jan 23, 2018 at 11:38 AM, Ruwan Abeykoon  wrote:

> Hi All,
> I agree with Johann,
> i.e.
> a) to publish as connector for current version of the product.
> b) to include this in next viable release in product itself.
>
> On a different but related note,
> IMO connectors are really features, technically. Hence we need to look at
> a way to publish them as features.
> Currently we add connectors to dropins. But this is not correct from an
> OSGi or user experience perspective. I would think "dropins" are for
> components developed by users, not by WSO2.
>
> But there are problems in our WUM model when we do feature installation.
> We need to work on this too.
>
> Cheers,
> Ruwan
>
> On Tue, Jan 23, 2018 at 11:21 AM, Johann Nallathamby 
> wrote:
>
>>
>>
>> On Tue, Jan 23, 2018 at 11:06 AM, Nadun De Silva  wrote:
>>
>>> Hi,
>>>
>>> I have been working on *publishing events to IS Analytics* for the
>>> notification system for the expired passwords.
>>>
>>> In the existing implementation to publish events to IS Analytics, a
>>> stream and a publisher for each event type had been bundled together with
>>> IS. (The artifacts are installed by the p2-feature at product-is build time)
>>>
>>> The publishing works as follows.
>>>
>>>1. AbstractEventListeners in IS injects events into the stream.
>>>2. The publisher connected to the stream publishes to IS Analytics.
>>>
>>> If I am *to follow the same implementation* of publishing for the
>>> password changed event, We would *need to add the relevant xml files to
>>> the server*.
>>>
>>> There are several approaches that we can employ, that I could come up
>>> with.
>>>
>>>- Publish the connector as a p2-feature. (However, AFAIK, all the IS
>>>connectors are published as jar files and therefore this may not be
>>>suitable.)
>>>
>>> +1 to publish as connector for current version of the product. New
>> config additions also can be added in connectors.
>>
>>>
>>>- Bundle this along with the next release of IS.
>>>
>>> Must do this. At this point it will get published as p2 feature.
>>
>>>
>>>- Let the user copy the files (This IMO is not very user-friendly.)
>>>
>>> Even if you publish as a connector we need to manually copy files.
>>
>> Regards,
>> Johann.
>>
>>
>>> What are your ideas on these approaches? Is there a better alternative?
>>>
>>> Any comments or suggestions are welcome.
>>>
>>> Thank you!
>>>
>>> Regards,
>>> Nadun De Silva
>>>
>>> On Fri, Jan 19, 2018 at 2:21 PM, Nadun De Silva  wrote:
>>>
 Hi,

 *@Johann* Thank you for the information. I was able to extend the
 handler and listen to password change events.

 Now I am working on publishing data to IS Analytics using the
 EventStreamService.

 I will keep the thread updated.

 Thank you!

 Regards,
 NadunD

 On Wed, Jan 17, 2018 at 2:14 PM, Johann Nallathamby 
 wrote:

>
>
> On Wed, Jan 17, 2018 at 12:43 PM, Nadun De Silva 
> wrote:
>
>> Hi Johann,
>>
>> On Tue, Jan 16, 2018 at 9:30 PM, Johann Nallathamby 
>> wrote:
>>
>>> Hi Nadun,
>>>
>>> On Tue, Jan 16, 2018 at 11:16 AM, Nadun De Silva 
>>> wrote:
>>>
 Hi,

 At the moment the authenticator only has the *"password expiration
 time period"* in the password expiration policy.

 So I can start off by altering the authenticator to publish the
 following to analytics

- The password expiration time period config change
- The password changed event

 Also, the high-level architecture would be as follows.


 ​

 Any comments or improvements are highly appreciated.

>>>
>>> There is a problem in this architecture. You are only considering
>>> the password change events sent from the password rotation policy
>>> authenticator. There are other channels also. E.g. SCIM2 and Admin 
>>> Console.
>>> So you need to publish the same event from there as well. This
>>> should be pretty easy to do in IS with the handler architecture we
>>> have. We should be already getting a password update event to the system
>>> whenever user password is updated via any one of the above channels.
>>> Therefore all you need to do is write a handler (or reuse an existing
>>> handler appropriately) and create the siddhi streams and publish.
>>>
>>> My diagram was a bit incorrect. Sorry about the confusion 

Re: [Architecture] OIDC request object support

2018-01-23 Thread Johann Nallathamby
Hi Farasath,

On Tue, Jan 23, 2018 at 12:13 PM, Farasath Ahamed 
wrote:

>
>
> On Tuesday, January 23, 2018, Johann Nallathamby  wrote:
>
>> Hi Hasanthi,
>>
>> On Tue, Jan 23, 2018 at 9:31 AM, Hasanthi Purnima Dissanayake <
>> hasan...@wso2.com> wrote:
>>
>>> Hi Johann,
>>>
>>> Is there any instance in which IS will throw error to client because it
 cannot send the claim?

 Because in the spec it says the following.

 Note that even if the Claims are not available because the End-User did
 not authorize their release or they are not present, the Authorization
 Server MUST NOT generate an error when Claims are not returned, whether
 they are Essential or Voluntary, unless otherwise specified in the
 description of the specific claim.

 So IMO we need to have a property for each claim that says whether we
 return an error or not.

 Wdyt?

>>>
>>> What I understand from the above is: if a claim is marked as essential
>>> or voluntary and the server cannot return the claim, the flow should not
>>> break, and the server should not throw an error unless this is
>>> specifically specified on the server side. In this scope we don't specify
>>> this from the server side. Though this is not a MUST, we can add it as an
>>> improvement, as it adds value.
>>>
>>
>> So IIUC in any circumstance we don't send an error to client. Correct?
>>
>> Yes, we can add that property as an improvement.
>>
>>>
>>>
> 2. Claims like "nickname" will act as default claims and will be
> controlled by both the requested scopes and the requested claims.
>

 What do you mean by controlling using requested scope? Do you mean if
 the client doesn't request at least one scope that includes this claim we
 won't return that claim? I don't think that is mentioned in the spec. Can
 you clarify?

>>>
>>> The spec does not directly specify how we should treat the voluntary
>>> claim from the server side. So what we have planned to do is honour the
>>> scopes and server-level requested claims when returning this claim.
>>>
>>
>> IMO, because the spec doesn't say to do anything special on the OP side
>> about not being able to release a particular claim (it says not to break
>> the normal flow), there is nothing special we can differentiate between
>> essential and voluntary claims. Only thing we may be able to do is, give a
>> warning to user saying that if s/he doesn't approve an essential claim s/he
>> won't be able to work with the application smoothly. We can't do anything
>> beyond that right?
>>
>> When you say scopes which scopes are you referring to? Are they the
>> requested scopes in the request or the defined scopes in the registry? I
>> fail to understand what scopes have to do with claims in this case.
>> Following is what I find in spec related to this.
>>
>> "It is also the only way to request specific combinations of the standard
>> Claims that cannot be specified using scope values. "
>>
>> As I understand if the specific requested OIDC claim, is defined in the
>> OIDC dialect, the user has a value for that claim and s/he has approved
>> that claim for the RP, then we can send them to the RP, regardless of
>> whether it is defined in scope or not. Otherwise we are contradicting the
>> above statement right?
>>
>> Also regarding requested claims in service provider configuration, IIRC
>> we used it as a way to control access to certain claims by service
>> providers, which overrides the requested scopes and requested claims. *But
>> that is only if the requested claims list is not empty in service provider
>> configuration*. I.e. requested claims in service provider configuration
>> must have at least 1 claim. Otherwise what will happen is for every service
>> provider we need to add all the OIDC claims if they are going to request
>> claims dynamically, using scopes or requested claims in the request. Do I
>> make sense or am I missing something?
>>
>
> In our current implementation requested claims and claims included in
> requested scopes act as two filters for user claims sent to service
> provider.
>
> First user claims are filtered by requested claims and then by claims
> included in requested scope. So if an SP doesn't have any claims configured
> as requested claims, then no claims will be sent out in the id_token or userinfo
> response.
>
> From IS 5.2.0 this has been the behaviour.
>

Thanks for the info. This is not violating anything. But I feel it's a huge
pain to configure all the claims as requested claims for all the service
providers. I think we have around 25 OIDC claims. And if we have around 10
service providers that means we need to configure 250 claims.

I believe we need to improve this. I think for OIDC implementation we can
interpret requested claims the way I have mentioned above. I.e. if the list
is empty we don't have to restrict anything. We can send all the claims
defined under 
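The two-filter behaviour discussed in this thread, together with the proposed empty-list relaxation, could be sketched as follows (illustrative code only, not the IS implementation):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: user claims pass through two filters, the SP's requested-claims
// list and the claims included in the requested scopes. The proposed
// relaxation is that an empty requested-claims list places no restriction.
public class ClaimFilter {

    public static Map<String, String> filter(Map<String, String> userClaims,
                                             Set<String> spRequestedClaims,
                                             Set<String> scopeClaims) {
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, String> claim : userClaims.entrySet()) {
            boolean allowedBySp = spRequestedClaims.isEmpty()  // empty list => no restriction
                    || spRequestedClaims.contains(claim.getKey());
            boolean allowedByScope = scopeClaims.contains(claim.getKey());
            if (allowedBySp && allowedByScope) {
                result.put(claim.getKey(), claim.getValue());
            }
        }
        return result;
    }
}
```

Under the current behaviour described earlier in the thread, the empty-list branch would instead filter everything out, which is exactly the 250-claims configuration burden mentioned above.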

Re: [Architecture] Decoupling Client Authentication from OAuth2 Flow

2018-01-23 Thread Hasintha Indrajee
Anyway, the application should know whether it's an application (client) or
a user that was authenticated. From the end application's logic perspective,
a user and a client won't be the same. The existing services' data
structures and model classes are more bound to user authentication. For
example, the result will consist of the authenticated user's information,
including the tenant domain and userstore domain, whereas the client
authentication service will return an authenticated client id.

Suppose we merge these two and produce a hybrid result after authentication.
The application still needs to know who actually was authenticated, i.e.
the application logic needs to explicitly handle the results of these two
types of authentication. Considering the changes required in the existing
code, we thought of introducing this as a separate service.
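A minimal sketch of why the two result shapes differ (field and class names are illustrative, not the actual model classes):

```java
// Sketch: a user authentication result carries tenant/userstore details,
// while a client authentication result carries only the client id, so a
// single hybrid result type would force callers to branch on which kind
// of principal was actually authenticated.
public class AuthnResults {

    public static class UserAuthnResult {
        public final String username;
        public final String tenantDomain;
        public final String userStoreDomain;
        public UserAuthnResult(String username, String tenantDomain, String userStoreDomain) {
            this.username = username;
            this.tenantDomain = tenantDomain;
            this.userStoreDomain = userStoreDomain;
        }
    }

    public static class ClientAuthnResult {
        public final String clientId;
        public final boolean authenticated;
        public ClientAuthnResult(String clientId, boolean authenticated) {
            this.clientId = clientId;
            this.authenticated = authenticated;
        }
    }
}
```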

On Tue, Jan 23, 2018 at 3:02 PM, Darshana Gunawardana 
wrote:

>
>
> On Tue, Jan 23, 2018 at 2:19 PM, Hasintha Indrajee 
> wrote:
>
>>
>>
>> On Tue, Jan 23, 2018 at 2:09 PM, Darshana Gunawardana 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jan 23, 2018 at 1:48 PM, Hasintha Indrajee 
>>> wrote:
>>>


 On Tue, Jan 23, 2018 at 1:18 PM, Darshana Gunawardana <
 darsh...@wso2.com> wrote:

> Hi Hasintha,
>
> On Tue, Jan 9, 2018 at 5:53 PM, Hasintha Indrajee 
> wrote:
>
>> We have had several discussions with the objective of making these
>> logics more reusable. One of the ideas was to use our carbon-auth-rest
>> valve to authenticate client. Since it has below concerns and gaps we
>> thought of implementing these authenticators as CXF interceptors.
>>
>> 1) Current implementation of rest-auth valves does not have a
>> mechanism to engage authenticators per context.
>> 2) Current implementation of rest-auth valves does not have a way to
>> order the execution sequence of authenticators.
>> 3) Also, using a valve to intercept these specific requests coming to a
>> context doesn't seem logically correct. The tomcat level doesn't seem to
>> be the right place to intercept; intercepting the specific context seems
>> more logical.
>>
>
> Yes, point #3 is a valid point to not to use a tomcat valve, whereas
> other two points are limitations of the existing implementations and I
> could not see them as blockers since if we are anyway have to implement
> those functionalities.
>
>
>>
>> Hence we will be going forward with CXF interceptors for
>> authentication.
>>
>
> Can we consider this interceptor as another enforcement point, like the
> tomcat valve, and add a new module to the identity-carbon-auth-rest
> component?
>
> The reason is that the
> https://github.com/wso2-extensions/identity-carbon-auth-rest component is
> designed to enforce authn & authz for REST endpoints. The tomcat valve
> implementation is only one way of intercepting, which is not suitable for
> this context, but we could still reuse the
> org.wso2.carbon.identity.auth.service and
> org.wso2.carbon.identity.authz.service modules to manage central
> operations of authenticators.
>
> If this implementation is only applicable to the OAuth endpoints, and
> there is no use for it in other REST endpoints with similar authentication
> mechanisms, it's OK to develop these as separate modules, but we have to
> clearly decide what to use when.
>

 This is a tomcat valve which is getting invoked for all incoming
 requests to the server. It's fine to do a user authentication from a tomcat
 valve since user authentication is a concept which belongs / relevant to
 the whole server. OAuth client authentication is just limited to oauth.
 Hence at tomcat level we don't need to implement application specific
 logics such as retrieving oauth app info and doing authentication. It's not
 ideal.

 Further client credentials (including jwts) can come in the body of the
 request. At tomcat valve level if we are to consume input stream, it's a
 heavy and costly operation. Body consumption is required to decide whether
 the request can be handled or not by the client authenticator. Furthermore,
 if we consume the input stream at tomcat level we need to wrap the original
 request with a backed input stream to make sure rest of the flows are
 working fine (At jax-rs level also they read the input stream in order to
 build params). These feel more like workarounds and are not ideal to do,
 since this is costly anyway.

 Furthermore, if we use jax-rs interceptors, we already have consumed
 body as params. Hence we don't need to worry about the overhead of building
 the body of the request (In certain phases of the jax-rs interceptors we
 have consumed body, eg - PRE_INVOKE).

>>>

Re: [Architecture] Decoupling Client Authentication from OAuth2 Flow

2018-01-23 Thread Darshana Gunawardana
On Tue, Jan 23, 2018 at 2:19 PM, Hasintha Indrajee 
wrote:

>
>
> On Tue, Jan 23, 2018 at 2:09 PM, Darshana Gunawardana 
> wrote:
>
>>
>>
>> On Tue, Jan 23, 2018 at 1:48 PM, Hasintha Indrajee 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jan 23, 2018 at 1:18 PM, Darshana Gunawardana >> > wrote:
>>>
 Hi Hasintha,

 On Tue, Jan 9, 2018 at 5:53 PM, Hasintha Indrajee 
 wrote:

> We have had several discussions with the objective of making these
> logics more reusable. One of the ideas was to use our carbon-auth-rest
> valve to authenticate client. Since it has below concerns and gaps we
> thought of implementing these authenticators as CXF interceptors.
>
> 1) Current implementation of rest-auth valves does not have a
> mechanism to engage authenticators per context.
> 2) Current implementation of rest-auth valves does not have a way to
> order the execution sequence of authenticators.
> 3) Also, using a valve to intercept these specific requests coming to a
> context doesn't seem logically correct. The tomcat level doesn't seem to
> be the right place to intercept; intercepting the specific context seems
> more logical.
>

 Yes, point #3 is a valid point to not to use a tomcat valve, whereas
 other two points are limitations of the existing implementations and I
 could not see them as blockers since if we are anyway have to implement
 those functionalities.


>
> Hence we will be going forward with CXF interceptors for
> authentication.
>

 Can we consider this interceptor as another enforcement point, like the
 tomcat valve, and add a new module to the identity-carbon-auth-rest
 component?

 The reason is that the
 https://github.com/wso2-extensions/identity-carbon-auth-rest component is
 designed to enforce authn & authz for REST endpoints. The tomcat valve
 implementation is only one way of intercepting, which is not suitable for
 this context, but we could still reuse the
 org.wso2.carbon.identity.auth.service and
 org.wso2.carbon.identity.authz.service modules to manage central
 operations of authenticators.

 If this implementation is only applicable to the OAuth endpoints, and
 there is no use for it in other REST endpoints with similar authentication
 mechanisms, it's OK to develop these as separate modules, but we have to
 clearly decide what to use when.

>>>
>>> This is a tomcat valve which is getting invoked for all incoming
>>> requests to the server. It's fine to do a user authentication from a tomcat
>>> valve since user authentication is a concept which belongs / relevant to
>>> the whole server. OAuth client authentication is just limited to oauth.
>>> Hence at tomcat level we don't need to implement application specific
>>> logics such as retrieving oauth app info and doing authentication. It's not
>>> ideal.
>>>
>>> Further client credentials (including jwts) can come in the body of the
>>> request. At tomcat valve level if we are to consume input stream, it's a
>>> heavy and costly operation. Body consumption is required to decide whether
>>> the request can be handled or not by the client authenticator. Furthermore,
>>> if we consume the input stream at tomcat level we need to wrap the original
>>> request with a backed input stream to make sure rest of the flows are
>>> working fine (At jax-rs level also they read the input stream in order to
>>> build params). These feel more like workarounds and are not ideal to do,
>>> since this is costly anyway.
>>>
>>> Furthermore, if we use jax-rs interceptors, we already have consumed
>>> body as params. Hence we don't need to worry about the overhead of building
>>> the body of the request (In certain phases of the jax-rs interceptors we
>>> have consumed body, eg - PRE_INVOKE).
>>>
>>
>> Yes.. As mentioned in my earlier reply, I'm also +1 for adding a new
>> enforcement method.
>>
>> My question is this is something generic enough to add to
>> the identity-carbon-auth-rest or this is oauth specific which does not have
>> any re-usable logic other than the oauth endpoints.
>>
>> hmm.. This interceptor doesn't have much that is reusable. The
> logic is more JAX-RS specific.
>
Every enforcement implementation is specific to the enforcement method
provided by the relevant framework.

I'm wondering whether we need to call org.wso2.carbon.identity.auth.service
from the interceptor, or whether it's OK to invoke the OAuth-specific
implementation (OAuthClientAuthnService).

For example, currently DCR endpoint secured using
the org.wso2.carbon.identity.auth.service. If we have CXF interceptor
calling org.wso2.carbon.identity.auth.service, do we need to remove using
that and move to this method(using OAuthClientAuthnService)? If so why?

Thanks,


> Hence anyway we need to put this either 

Re: [Architecture] Decoupling Client Authentication from OAuth2 Flow

2018-01-23 Thread Hasintha Indrajee
On Tue, Jan 23, 2018 at 2:09 PM, Darshana Gunawardana 
wrote:

>
>
> On Tue, Jan 23, 2018 at 1:48 PM, Hasintha Indrajee 
> wrote:
>
>>
>>
>> On Tue, Jan 23, 2018 at 1:18 PM, Darshana Gunawardana 
>> wrote:
>>
>>> Hi Hasintha,
>>>
>>> On Tue, Jan 9, 2018 at 5:53 PM, Hasintha Indrajee 
>>> wrote:
>>>
 We have had several discussions with the objective of making these
 logics more reusable. One of the ideas was to use our carbon-auth-rest
 valve to authenticate client. Since it has below concerns and gaps we
 thought of implementing these authenticators as CXF interceptors.

 1) Current implementation of rest-auth valves does not have a mechanism
 to engage authenticators per context.
 2) Current implementation of rest-auth valves does not have a way to
 order the execution sequence of authenticators.
 3) Also, using a valve to intercept these specific requests coming to a
 context doesn't seem logically correct. The tomcat level doesn't seem to
 be the right place to intercept; intercepting the specific context seems
 more logical.

>>>
>>> Yes, point #3 is a valid point to not to use a tomcat valve, whereas
>>> other two points are limitations of the existing implementations and I
>>> could not see them as blockers since if we are anyway have to implement
>>> those functionalities.
>>>
>>>

 Hence we will be going forward with CXF interceptors for authentication.

>>>
>>> Can we consider this interceptor as another enforcement point, like the
>>> tomcat valve, and add a new module to the identity-carbon-auth-rest
>>> component?
>>>
>>> The reason is that the
>>> https://github.com/wso2-extensions/identity-carbon-auth-rest component
>>> is designed to enforce authn & authz for REST endpoints. The tomcat
>>> valve implementation is only one way of intercepting, which is not
>>> suitable for this context, but we could still reuse the
>>> org.wso2.carbon.identity.auth.service and
>>> org.wso2.carbon.identity.authz.service modules to manage central
>>> operations of authenticators.
>>>
>>> If this implementation is only applicable to the OAuth endpoints, and
>>> there is no use for it in other REST endpoints with similar
>>> authentication mechanisms, it's OK to develop these as separate modules,
>>> but we have to clearly decide what to use when.
>>>
>>
>> This is a tomcat valve which is getting invoked for all incoming requests
>> to the server. It's fine to do a user authentication from a tomcat valve
>> since user authentication is a concept which belongs / relevant to the
>> whole server. OAuth client authentication is just limited to oauth. Hence
>> at tomcat level we don't need to implement application specific logics such
>> as retrieving oauth app info and doing authentication. It's not ideal.
>>
>> Further client credentials (including jwts) can come in the body of the
>> request. At tomcat valve level if we are to consume input stream, it's a
>> heavy and costly operation. Body consumption is required to decide whether
>> the request can be handled or not by the client authenticator. Furthermore,
>> if we consume the input stream at tomcat level we need to wrap the original
>> request with a backed input stream to make sure rest of the flows are
>> working fine (At jax-rs level also they read the input stream in order to
>> build params). These feel more like workarounds and are not ideal to do,
>> since this is costly anyway.
>>
>> Furthermore, if we use jax-rs interceptors, we already have consumed body
>> as params. Hence we don't need to worry about the overhead of building the
>> body of the request (In certain phases of the jax-rs interceptors we have
>> consumed body, eg - PRE_INVOKE).
>>
>
> Yes.. As mentioned in my earlier reply, I'm also +1 for adding a new
> enforcement method.
>
> My question is this is something generic enough to add to
> the identity-carbon-auth-rest or this is oauth specific which does not have
> any re-usable logic other than the oauth endpoints.
>
hmm.. This interceptor doesn't have much that is reusable. The logic is more
JAX-RS specific. Hence we need to put this either under lib/runtimes/cxf or
bundle it with the webapp. We cannot have this interceptor jar in
components/plugins or components/dropins, because we don't have JAX-RS
dependencies at OSGi runtime. Also, the existing REST valve is more coupled
with user authentication, whereas this interceptor is more coupled with
OAuth client authentication.

>
>> These authenticators are handlers. Hence we can control enabling /
>> disabling and changing priority through identity.xml configuration.
>>
> Can you list sample configuration on this?
>



This uses our existing handler architecture.
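For illustration only, handler-style components in IS are typically enabled, disabled, and ordered via identity.xml entries along these lines; the exact type and name values below are assumptions and depend on the IS version:

```xml
<EventListeners>
    <!-- Illustrative entries: enable flag toggles the handler, orderId
         controls the execution sequence of the authenticators. -->
    <EventListener type="org.wso2.carbon.identity.core.handler.AbstractIdentityHandler"
                   name="org.wso2.carbon.identity.oauth2.client.authentication.BasicAuthClientAuthenticator"
                   orderId="100" enable="true"/>
    <EventListener type="org.wso2.carbon.identity.core.handler.AbstractIdentityHandler"
                   name="org.wso2.carbon.identity.oauth2.client.authentication.PublicClientAuthenticator"
                   orderId="110" enable="true"/>
</EventListeners>
```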

>
>
>> The config you quoted will no longer be used. Instead all authenticators
>> will bind as OSGI services at runtime and can be controlled as our usual
>> 

Re: [Architecture] Decoupling Client Authentication from OAuth2 Flow

2018-01-23 Thread Darshana Gunawardana
On Tue, Jan 23, 2018 at 1:48 PM, Hasintha Indrajee 
wrote:

>
>
> On Tue, Jan 23, 2018 at 1:18 PM, Darshana Gunawardana 
> wrote:
>
>> Hi Hasintha,
>>
>> On Tue, Jan 9, 2018 at 5:53 PM, Hasintha Indrajee 
>> wrote:
>>
>>> We have had several discussions with the objective of making these
>>> logics more reusable. One of the ideas was to use our carbon-auth-rest
>>> valve to authenticate the client. Since it has the concerns and gaps
>>> listed below, we thought of implementing these authenticators as CXF
>>> interceptors.
>>>
>>> 1) Current implementation of rest-auth valves does not have a mechanism
>>> to engage authenticators per context.
>>> 2) Current implementation of rest-auth valves does not have a way to
>>> order the execution sequence of authenticators.
>>> 3) Also, using a valve to intercept these specific requests coming to a
>>> context doesn't seem logically correct. The tomcat level doesn't seem to
>>> be the right place to intercept; intercepting the specific context seems
>>> more logical.
>>>
>>
>> Yes, point #3 is a valid reason not to use a tomcat valve, whereas the
>> other two points are limitations of the existing implementations, and I
>> don't see them as blockers since we anyway have to implement those
>> functionalities.
>>
>>
>>>
>>> Hence we will be going forward with CXF interceptors for authentication.
>>>
>>
>> Can we consider this interceptor as an another enforcement point like
>> tomcat valve and add a new module to identity-carbon-auth-rest component?
>>
>> The reason is that the
>> https://github.com/wso2-extensions/identity-carbon-auth-rest component is
>> designed to enforce authn & authz for REST endpoints. The tomcat valve
>> implementation is only one way of intercepting, which is not suitable for
>> this context, but we could still reuse the
>> org.wso2.carbon.identity.auth.service and
>> org.wso2.carbon.identity.authz.service modules to manage the central
>> operations of authenticators.
>>
>> If this implementation is only applicable to the OAuth endpoints, and no
>> other REST endpoint has a use for similar authentication mechanisms, it's
>> OK to develop these as separate modules, but we have to clearly decide
>> what to use when.
>>
>
> This is a tomcat valve which is invoked for every incoming request to the
> server. It's fine to do user authentication from a tomcat valve, since
> user authentication is a concept relevant to the whole server. OAuth
> client authentication, however, is limited to OAuth. Hence at the tomcat
> level we shouldn't implement application-specific logic such as retrieving
> OAuth app info and doing authentication. It's not ideal.
>
> Further, client credentials (including JWTs) can come in the body of the
> request. Consuming the input stream at the tomcat valve level is a heavy
> and costly operation, and consuming the body is required to decide whether
> the client authenticator can handle the request. Furthermore, if we
> consume the input stream at the tomcat level, we need to wrap the original
> request with a backed input stream to make sure the rest of the flows keep
> working (at the JAX-RS level the input stream is also read in order to
> build params). These look more like workarounds and are not ideal, since
> this is costly anyway.
>
> Furthermore, if we use JAX-RS interceptors, the body has already been
> consumed as params, so we don't need to worry about the overhead of
> building the body of the request (in certain phases of the JAX-RS
> interceptors, e.g. PRE_INVOKE, the body has been consumed).
>

Yes.. As mentioned in my earlier reply, I'm also +1 for adding a new
enforcement method.

My question is whether this is generic enough to add to
identity-carbon-auth-rest, or whether it is OAuth specific with no reusable
logic beyond the OAuth endpoints.


> These authenticators are handlers. Hence we can control enabling /
> disabling and changing priority through identity.xml configuration.
>
Can you share a sample configuration for this?


> The config you quoted will no longer be used. Instead all authenticators
> will bind as OSGI services at runtime and can be controlled as our usual
> handlers.
>
+1

Thanks,

>
>
> Since we currently don't have a concept of confidential clients (apps), we
> don't do this validation at the authenticator level. Even if
> authentication fails, the client ID will still be in the context. The
> application logic
>
> In the case of a non-confidential client, if the client ID is sent in a
> standard way (as a body param), the client can still get a token even if
> authentication fails. If it's sent in a non-standard way, we need to plug
> in an authenticator which can extract the client ID from the incoming
> request.
>
> We don't support old authentication handlers. Hence yes, there will be a
> code migration. But I don't 

Re: [Architecture] Decoupling Client Authentication from OAuth2 Flow

2018-01-23 Thread Hasintha Indrajee
Also, the interceptor is a separate component, but it's coupled with OAuth
client authentication. If you want to enforce client authentication for any
other (CXF) webapp, you can reuse this interceptor; it's just a matter of
engaging it. The application logic has to consume the results from the
interceptor.


Re: [Architecture] Decoupling Client Authentication from OAuth2 Flow

2018-01-23 Thread Hasintha Indrajee
On Tue, Jan 23, 2018 at 1:18 PM, Darshana Gunawardana 
wrote:

> Hi Hasintha,
>
> On Tue, Jan 9, 2018 at 5:53 PM, Hasintha Indrajee 
> wrote:
>
>> We have had several discussions with the objective of making these logics
>> more reusable. One of the ideas was to use our carbon-auth-rest valve to
>> authenticate the client. Since it has the concerns and gaps listed below,
>> we thought of implementing these authenticators as CXF interceptors.
>>
>> 1) Current implementation of rest-auth valves does not have a mechanism
>> to engage authenticators per context.
>> 2) Current implementation of rest-auth valves does not have a way to
>> order the execution sequence of authenticators.
>> 3) Also, using a valve to intercept these specific requests coming to a
>> context doesn't seem logically correct. The tomcat level doesn't seem to
>> be the right place to intercept; intercepting the specific context seems
>> more logical.
>>
>
> Yes, point #3 is a valid reason not to use a tomcat valve, whereas the
> other two points are limitations of the existing implementations, and I
> don't see them as blockers since we anyway have to implement those
> functionalities.
>
>
>>
>> Hence we will be going forward with CXF interceptors for authentication.
>>
>
> Can we consider this interceptor as an another enforcement point like
> tomcat valve and add a new module to identity-carbon-auth-rest component?
>
> The reason is that the
> https://github.com/wso2-extensions/identity-carbon-auth-rest component is
> designed to enforce authn & authz for REST endpoints. The tomcat valve
> implementation is only one way of intercepting, which is not suitable for
> this context, but we could still reuse the
> org.wso2.carbon.identity.auth.service and
> org.wso2.carbon.identity.authz.service modules to manage the central
> operations of authenticators.
>
> If this implementation is only applicable to the OAuth endpoints, and no
> other REST endpoint has a use for similar authentication mechanisms, it's
> OK to develop these as separate modules, but we have to clearly decide
> what to use when.
>

The carbon-auth-rest valve is a tomcat valve which is invoked for every
incoming request to the server. It's fine to do user authentication from a
tomcat valve, since user authentication is a concept relevant to the whole
server. OAuth client authentication, however, is limited to OAuth. Hence at
the tomcat level we shouldn't implement application-specific logic such as
retrieving OAuth app info and doing authentication. It's not ideal.

Further, client credentials (including JWTs) can come in the body of the
request. Consuming the input stream at the tomcat valve level is a heavy
and costly operation, and consuming the body is required to decide whether
the client authenticator can handle the request. Furthermore, if we consume
the input stream at the tomcat level, we need to wrap the original request
with a backed input stream to make sure the rest of the flows keep working
(at the JAX-RS level the input stream is also read in order to build
params). These look more like workarounds and are not ideal, since this is
costly anyway.
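The "backed input stream" idea can be sketched in plain Java. This is
illustrative only: a real implementation would extend
HttpServletRequestWrapper, and the CachedBodyStream name is hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: buffer the request body once so later layers
// (e.g. JAX-RS param building) can re-read it after an authenticator
// has already consumed the original stream.
public class CachedBodyStream {

    private final byte[] body;

    public CachedBodyStream(InputStream original) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int read;
        while ((read = original.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);   // consume the stream exactly once
        }
        this.body = buffer.toByteArray();
    }

    // Each call returns a fresh stream over the cached bytes, so every
    // downstream consumer sees an "unconsumed" body.
    public InputStream openStream() {
        return new ByteArrayInputStream(body);
    }

    public String asString() {
        return new String(body, StandardCharsets.UTF_8);
    }
}
```

Every consumer (authenticator, JAX-RS param binding, and so on) would call
openStream() and get a replayable copy of the body, which is exactly the
wrapping cost being objected to here.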

Furthermore, if we use JAX-RS interceptors, the body has already been
consumed as params, so we don't need to worry about the overhead of
building the body of the request (in certain phases of the JAX-RS
interceptors, e.g. PRE_INVOKE, the body has been consumed).

These authenticators are handlers. Hence we can control enabling/disabling
and changing the priority through the identity.xml configuration. The
config you quoted will no longer be used. Instead, all authenticators will
bind as OSGi services at runtime and can be controlled like our usual
handlers.
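A rough sketch of the handler semantics described here, in plain Java (all
names are hypothetical; in the product the authenticators would bind as
OSGi services rather than being registered by hand):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the handler pattern: authenticators register at
// runtime, the framework orders them by priority, and each one declares
// whether it can handle a given request.
public class AuthenticatorRegistry {

    public interface ClientAuthenticator {
        String name();
        int priority();   // assumption: lower value runs first
        boolean canHandle(String authorizationHeader, String body);
    }

    private final List<ClientAuthenticator> authenticators = new ArrayList<>();

    public void register(ClientAuthenticator authenticator) {
        authenticators.add(authenticator);
        // Keep the chain sorted so lookup respects the configured priority.
        authenticators.sort(Comparator.comparingInt(ClientAuthenticator::priority));
    }

    // Returns the name of the first authenticator that can handle the
    // request, or null when none match.
    public String pick(String authorizationHeader, String body) {
        for (ClientAuthenticator a : authenticators) {
            if (a.canHandle(authorizationHeader, body)) {
                return a.name();
            }
        }
        return null;
    }
}
```

Disabling an authenticator then simply means never registering it, and
changing the order means changing its priority value in configuration.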

Since we currently don't have a concept of confidential clients (apps), we
don't do this validation at the authenticator level. Even if authentication
fails, the client ID will still be in the context. The application logic
decides whether the grant type is confidential or not.

In the case of a non-confidential client, if the client ID is sent in a
standard way (as a body param), the client can still get a token even if
authentication fails. If it's sent in a non-standard way, we need to plug
in an authenticator which can extract the client ID from the incoming
request.
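The two "standard" client-ID transports discussed here can be sketched as
follows (hypothetical names; in the real flow a pluggable authenticator
would do this extraction):

```java
import java.util.Base64;

// Hypothetical sketch: resolve the client ID either from an HTTP Basic
// Authorization header (confidential client) or from the standard
// client_id body parameter (non-confidential client).
public class ClientIdResolver {

    public static String resolve(String authorizationHeader, String body) {
        if (authorizationHeader != null && authorizationHeader.startsWith("Basic ")) {
            String decoded = new String(Base64.getDecoder()
                    .decode(authorizationHeader.substring("Basic ".length())));
            return decoded.split(":", 2)[0];   // "clientId:clientSecret"
        }
        // Fall back to the client_id form parameter in the request body.
        if (body != null) {
            for (String pair : body.split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2 && kv[0].equals("client_id")) {
                    return kv[1];
                }
            }
        }
        // Non-standard transport: a custom pluggable authenticator would
        // have to extract the client ID instead.
        return null;
    }
}
```

This mirrors the point above: even when authentication fails, the client ID
extracted this way can still be placed in the context.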

We don't support the old authentication handlers. Hence yes, there will be
a code migration, but I don't think it's costly.


> Regards,
>
>
>
>> On Tue, Jan 9, 2018 at 10:22 AM, Hasintha Indrajee 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jan 9, 2018 at 8:29 AM, Isura Karunaratne 
>>> wrote:
>>>


 On Mon, Jan 8, 2018 at 4:49 PM, Hasintha Indrajee 
 wrote:

> The idea behind this is to decouple the authentication mechanism used
> by OAuth2 clients from the rest of the OAuth2 logic, so that different
> types of client authenticators can be plugged. For an example according to