[Architecture] [Dev] [VOTE] Release WSO2 IoT Server 3.2.0 RC1

2018-01-29 Thread Rasika Perera
Hi Devs,

We are pleased to announce the release candidate of WSO2 IoT Server 3.2.0.

This is the first release candidate (RC) of the WSO2 IoT Server 3.2.0
release.

This release carries 275 issue fixes [1-12] over the last GA (3.1.0)
release.

Reported Issues:

   - https://github.com/wso2/product-iots/issues

Source and distribution packages:

   - https://github.com/wso2/product-iots/releases/tag/v3.2.0-RC1

Tag to be voted upon:

   - https://github.com/wso2/product-iots/releases/tag/v3.2.0-RC1

Please download, test, and vote. The README file in the distribution
contains a guide and instructions on how to try it out locally.

[+] Stable - Go ahead and release
[-] Broken - Do not release (explain why)

This vote will be open for 72 hours or as needed.

[1] https://github.com/wso2/product-iots/milestone/3?closed=1
[2] https://github.com/wso2/product-iots/milestone/4?closed=1
[3] https://github.com/wso2/product-iots/milestone/5?closed=1
[4] https://github.com/wso2/product-iots/milestone/6?closed=1
[5] https://github.com/wso2/product-iots/milestone/7?closed=1
[6] https://github.com/wso2/product-iots/milestone/11?closed=1
[7] https://github.com/wso2/product-iots/milestone/12?closed=1
[8] https://github.com/wso2/product-iots/milestone/13?closed=1
[9] https://github.com/wso2/product-iots/milestone/14?closed=1
[10] https://github.com/wso2/product-iots/milestone/18?closed=1
[11] https://github.com/wso2/product-iots/milestone/19?closed=1
[12] https://github.com/wso2/product-iots/milestone/20?closed=1

Regards,
The WSO2 IoT Team.

-- 
With Regards,

*Rasika Perera*
Senior Software Engineer
LinkedIn: http://lk.linkedin.com/in/rasika90



WSO2 Inc. www.wso2.com
lean.enterprise.middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Proposal to Use a Single Set of WSO2 Docker Images for All Container Platforms

2018-01-29 Thread Lakmal Warusawithana
On Mon, Jan 29, 2018 at 7:36 AM, Imesh Gunaratne  wrote:

> On Mon, Jan 29, 2018 at 6:58 AM, Muhammed Shariq  wrote:
>
>> Hi Imesh / all,
>>
>> Personally, I think the best option going forward is to maintain a single
>> set of Docker images across all platforms. It's true that there is a concern
>> about users having to do more work, but in reality, users will have to do
>> quite a lot of config changes, such as copying JDBC drivers, creating key
>> stores, and updating hostnames, right? As long as we provide a clean option,
>> which is using volume mounts or, in the k8s case, config maps, we should be good.
>>
>> For the evaluation / demo case, users can use the docker-compose
>> artifacts that's already preconfigured and ready to go.
>>
>> While it might seem attractive to maintain images per platform, I think
>> it would be very costly and hard to maintain in the long run. In the
>> future, we would have to do things like running scans on the built
>> images before making them available to pull. Having to identify issues across
>> many different platforms and fix them one by one would be cumbersome.
>>
>> I would suggest we go with a single set of images for all platforms and
>> then create per platform images if the need arises.
>>
>
> I completely agree with Shariq!
>
> The reason for starting this thread was that when we started creating
> DC/OS Docker images we found that we can simply use the Docker images
> created in the product Docker Git repositories (github.com/wso2/docker-
> ) without making any changes except for adding user group id for
> managing volume permissions (which would be common for any container
> platform).
>
> This model will allow us to efficiently manage WSO2 Docker images by only
> creating one image per product version (in EI and SP there will be one per
> profile) and use those on all container platforms by following above best
> practices. Most importantly, it will work well when releasing
> updates/patches.
>
> Regarding the concern of having additional steps for copying files to
> volumes, let's do a quick POC and see whether we can find a better way to
> overcome that problem in each platform for evaluation scenarios.
>
> @Lakmal: Would you mind sharing your thoughts on this?
>
>
Let's do a POC and then decide. IMO we should not kill optimization through
generalization. From the users' point of view, they need a Docker image
optimized for their orchestration platform.


> Thanks
> Imesh
>
>>
>>
>>
>> On Mon, Jan 22, 2018 at 5:30 PM, Imesh Gunaratne  wrote:
>>
>>> On Mon, Jan 22, 2018 at 2:46 PM, Pubudu Gunatilaka 
>>> wrote:
>>>
 Hi Imesh,

 It is very convenient if we can reuse the docker image. AFAIU if we
 follow the above approach we can use a single docker image in all the
 container platforms.

 One of the drawbacks I see with this approach is that the user has to
 update the volume mounts with the necessary jar files, JKS files, etc. If
 any user tries this approach in Kubernetes, he has to add those jar files
 and binary files to the NFS server (To the volume which holds NFS server
 data). This affects the installation experience.

 IMHO, we should minimize the effort in trying out the WSO2 products in
 Kubernetes or any container platform. Based on the user need, he can switch
 to their own deployment approach.

>>>
>>> Thanks for the quick response Pubudu! Yes, that's a valid concern. With
>>> the proposed approach the user would need to execute an extra step to copy
>>> required files to a set of volume mounts before executing the deployment.
>>> In a production deployment I think that would be acceptable as there will
>>> be more manual steps involved such as creating databases, setting up CI/CD,
>>> deployment automation, etc. However, in an evaluation scenario when someone
>>> is executing a demo it might become an overhead.
>>>
>>> I also noticed that kubectl cp command can be used to copy files from a
>>> local machine to a container. Let's check whether we can use that approach
>>> to overcome this issue:
>>> https://kubernetes.io/docs/reference/generated/kubectl/kubec
>>> tl-commands#cp
>>>
>>> On Mon, Jan 22, 2018 at 3:03 PM, Isuru Haththotuwa 
>>> wrote:
>>>
 In API Manager K8s artifacts, what we have followed is not having an
 image-per-profile method. With the introduction of Config Maps, it has
 become only two base images - for APIM and Analytics. It's extremely helpful
 from the maintenance PoV that we have a single set of Dockerfiles, but it has
 a tradeoff with the automation level AFAIU, since the user might have manual
 steps to perform.

>>>
>>> Thanks Isuru for the quick response! What I meant by image per profile
>>> is that, in products like EI and SP, we would need a Docker image per
>>> profile due to their design.​
>>>

 It would still be possible to write a wrapper script for a 
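As a rough illustration of the "single image plus volume mounts" option discussed in this thread, the sketch below overlays host-side config onto one generic product image at startup. The image name, container paths, and file names are assumptions for illustration, not the official WSO2 layout; the `docker run` command is printed rather than executed so the sketch stays self-contained.

```shell
# Host-side directories that will be mounted into the container.
mkdir -p conf/lib conf/security
# (Place the JDBC driver in conf/lib/ and the keystore in conf/security/.)

# The run command this maps to; printed here, not executed.
cat <<'EOF'
docker run -d \
  -v $PWD/conf/lib:/home/wso2carbon/wso2am/repository/components/lib \
  -v $PWD/conf/security:/home/wso2carbon/wso2am/repository/resources/security \
  wso2am:2.1.0
EOF
```

For the evaluation scenario, the same mounts could be declared once in a docker-compose file, which matches the preconfigured artifacts Shariq mentions above.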

Re: [Architecture] Proposal to Use a Single Set of WSO2 Docker Images for All Container Platforms

2018-01-29 Thread Dilan Udara Ariyaratne
Hi Imesh,

+1 to this suggestion.

My personal experience is that even our users find it confusing to see
Dockerfile definitions for a product or product profile in multiple
repositories.
Thus, it would be quite intuitive to have a single source of truth for each
product or product profile in the relevant docker- repository from
this point onward.
And with the level of generalization that we have now reached in the Dockerfile
definitions, my gut feeling is that we can use the same definition for any
container platform without any specializations, even in the future.

Thanks,
Dilan.

*Dilan U. Ariyaratne*
Senior Software Engineer
WSO2 Inc. 
Mobile: +94766405580
lean . enterprise . middleware


On Mon, Jan 29, 2018 at 7:36 AM, Imesh Gunaratne  wrote:

> On Mon, Jan 29, 2018 at 6:58 AM, Muhammed Shariq  wrote:
>
>> Hi Imesh / all,
>>
>> Personally, I think the best option going forward is to maintain a single
>> set of Docker images across all platforms. It's true that there is a concern
>> about users having to do more work, but in reality, users will have to do
>> quite a lot of config changes, such as copying JDBC drivers, creating key
>> stores, and updating hostnames, right? As long as we provide a clean option,
>> which is using volume mounts or, in the k8s case, config maps, we should be good.
>>
>> For the evaluation / demo case, users can use the docker-compose
>> artifacts that's already preconfigured and ready to go.
>>
>> While it might seem attractive to maintain images per platform, I think
>> it would be very costly and hard to maintain in the long run. In the
>> future, we would have to do things like running scans on the built
>> images before making them available to pull. Having to identify issues across
>> many different platforms and fix them one by one would be cumbersome.
>>
>> I would suggest we go with a single set of images for all platforms and
>> then create per platform images if the need arises.
>>
>
> I completely agree with Shariq!
>
> The reason for starting this thread was that when we started creating
> DC/OS Docker images we found that we can simply use the Docker images
> created in the product Docker Git repositories (github.com/wso2/docker-
> ) without making any changes except for adding user group id for
> managing volume permissions (which would be common for any container
> platform).
>
> This model will allow us to efficiently manage WSO2 Docker images by only
> creating one image per product version (in EI and SP there will be one per
> profile) and use those on all container platforms by following above best
> practices. Most importantly, it will work well when releasing
> updates/patches.
>
> Regarding the concern of having additional steps for copying files to
> volumes, let's do a quick POC and see whether we can find a better way to
> overcome that problem in each platform for evaluation scenarios.
>
> @Lakmal: Would you mind sharing your thoughts on this?
>
> Thanks
> Imesh
>
>>
>>
>>
>> On Mon, Jan 22, 2018 at 5:30 PM, Imesh Gunaratne  wrote:
>>
>>> On Mon, Jan 22, 2018 at 2:46 PM, Pubudu Gunatilaka 
>>> wrote:
>>>
 Hi Imesh,

 It is very convenient if we can reuse the docker image. AFAIU if we
 follow the above approach we can use a single docker image in all the
 container platforms.

 One of the drawbacks I see with this approach is that the user has to
 update the volume mounts with the necessary jar files, JKS files, etc. If
 any user tries this approach in Kubernetes, he has to add those jar files
 and binary files to the NFS server (To the volume which holds NFS server
 data). This affects the installation experience.

 IMHO, we should minimize the effort in trying out the WSO2 products in
 Kubernetes or any container platform. Based on the user need, he can switch
 to their own deployment approach.

>>>
>>> Thanks for the quick response Pubudu! Yes, that's a valid concern. With
>>> the proposed approach the user would need to execute an extra step to copy
>>> required files to a set of volume mounts before executing the deployment.
>>> In a production deployment I think that would be acceptable as there will
>>> be more manual steps involved such as creating databases, setting up CI/CD,
>>> deployment automation, etc. However, in an evaluation scenario when someone
>>> is executing a demo it might become an overhead.
>>>
>>> I also noticed that kubectl cp command can be used to copy files from a
>>> local machine to a container. Let's check whether we can use that approach
>>> to overcome this issue:
>>> https://kubernetes.io/docs/reference/generated/kubectl/kubec
>>> tl-commands#cp
>>>
>>> On Mon, Jan 22, 2018 at 3:03 PM, Isuru Haththotuwa 
>>> wrote:
>>>
 In API Manager K8s artifacts, what we have followed is not having an
 image-per-profile method. With the introduction of 
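The `kubectl cp` approach mentioned in this thread could look roughly like the following for the evaluation scenario. The pod name, JAR version, and destination path here are hypothetical; the command is echoed so the sketch runs anywhere (drop the `echo` to execute it against a real cluster).

```shell
# Illustrative parameters for copying a JDBC driver into a running pod.
POD=wso2am-0
JAR=mysql-connector-java-5.1.45.jar
DEST=/home/wso2carbon/wso2am-2.1.0/repository/components/lib/

# Print the command that would run; drop 'echo' to execute it.
echo kubectl cp "$JAR" "$POD:$DEST"
```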

Re: [Architecture] [APIM] CLI support for Importing and Exporting Applications

2018-01-29 Thread Randilu Soysa
Adding sample responses.

Exports an Application from a desired environment

Commands

export-app

Flags
  Required
    -n, --name string          Name of the Application to be exported
    -i, --uuid string          UUID of the Application to be exported
    -e, --environment string   Environment from which the Application should be exported
  Optional
    -u, --username string      Username
    -p, --password string      Password

    -k, --insecure             Allow connections to SSL endpoints without certs
        --verbose              Enable verbose mode

apimcli export-app (--name <name> --uuid <uuid> --environment <environment>) [flags]

Examples:

    apimcli export-app -n SampleApp 9f6affe2-4c97-4817-bded-717f8b01eee8 -e dev
    apimcli export-app -n SampleApp 7bc2b94e-c6d2-4d4f-beb1-cdccb08cd87f -e prod


Sample Response:

    Successfully exported Application!
    Find the exported Application at /home/user/.wso2apimcli/exported/dev/admin_sampleApp.zip


Imports an Application to a desired environment

Commands

import-app

Flags
  Required
    -f, --file string          Path to the Application archive (.zip) to be imported
    -e, --environment string   Environment to which the Application should be imported (default "default")
  Optional
    -s, --addSubscriptions     Adds the subscriptions of the Application
    -o, --preserveOwner        Preserves the app owner from the original environment
    -u, --username string      Username
    -p, --password string      Password

    -k, --insecure             Allow connections to SSL endpoints without certs
        --verbose              Enable verbose mode

apimcli import-app (--file <file> --environment <environment>) [flags]

Examples:

apimcli import-app -f qa/sampleApp.zip -e dev
apimcli import-app -f staging/sampleApp.zip -e prod -u admin -p admin
apimcli import-app -f qa/sampleApp.zip --preserveOwner
--addSubscriptions -e prod


Sample Response:

    ZipFilePath: /home/user/.wso2apimcli/exported/staging/admin_sampleApp.zip
    Successfully imported Application!


Lists the Applications available for a certain user

Commands

list apps

Flags
Required
-e, --environment
Optional
-u, --username
-p, --password


Examples:

    wso2apim list apps -e dev
    wso2apim list apps -e staging
    wso2apim list apps -e staging -u admin -p 123456
    wso2apim list apps -e staging -u admin
    wso2apim list apps -e staging -p 123456

Sample Response:

    Environment: staging
    No. of Applications: 3

    +--------------------------------------+--------------------+------------+-----------+----------+
    |                  ID                  |        NAME        | SUBSCRIBER |   TIER    |  STATUS  |
    +--------------------------------------+--------------------+------------+-----------+----------+
    | 7bc2b94e-c6d2-4d4f-beb1-cdccb08cd87f | DefaultApplication | admin      | 50PerMin  | APPROVED |
    | b556d2f1-71be-4368-842e-482d0c9e5910 | sampleApp1         | admin      | Unlimited | APPROVED |
    | 3b1377e1-d8c6-4c64-a31c-af555407a14a | sampleApp2         | admin      | Unlimited | CREATED  |
    +--------------------------------------+--------------------+------------+-----------+----------+
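The export/import commands above compose naturally into a promotion script, e.g. moving an application from `dev` to `prod`. Below is a minimal sketch; the app name, environment names, and the `<env>/<app>.zip` path convention are assumptions based on the examples above, and the helper functions only build the command strings so the sketch stays self-contained.

```shell
APP=SampleApp

# Build the CLI invocations described above; echoed rather than executed.
export_cmd() { echo "apimcli export-app -n $1 -e $2"; }
import_cmd() { echo "apimcli import-app -f $2/$1.zip -e $3 --preserveOwner --addSubscriptions"; }

export_cmd "$APP" dev      # → apimcli export-app -n SampleApp -e dev
import_cmd "$APP" dev prod # → apimcli import-app -f dev/SampleApp.zip -e prod --preserveOwner --addSubscriptions
```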





On Thu, Jan 25, 2018 at 5:41 PM, Randilu Soysa  wrote:

> Hi everyone,
>
> I’m working on a project to introduce commands to provide application
> import export support for the import-export-cli for APIM 2.x. I am planning
> to introduce commands in order to list available applications of a specific
> user, export an application from a desired environment and import an
> application to a desired environment.
>
>
> The commands are as follows,
>
>
> Exports an Application from a desired environment
>
> Commands
>
> export-app
>
> Flags
>   Required
> -n, --name string  Name of the Application to be exported
> -i, --uuid string  UUID of the Application to be exported
> -e, --environment string   Environment from which the Application 
> should be exported
>   Optional
> -p, --password string  Password
> -u, --username string  Username
>
> -k, --insecure Allow connections to SSL endpoints without 
> certs
> --verbose  Enable verbose mode
>
> apimcli export-app (--name  --uuid 
>  --environment 
> ) [flags]
>
> Examples:
>
> apimcli export-app -n SampleApp 9f6affe2-4c97-4817-bded-717f8b01eee8 
> -e dev
> apimcli export-app -n SampleApp 7bc2b94e-c6d2-4d4f-beb1-cdccb08cd87f 
> -e prod
>
>
>
> 

Re: [Architecture] OIDC request object support

2018-01-29 Thread Hasanthi Purnima Dissanayake
Hi Johann,
>
>
> You might have missed to confirm on this. This is the most important point.
> Do you also agree that any requested claim in the request object must be
> returned regardless of the scopes we have requested or registered?
>

Agreed. I missed this previously.

Thanks,

On Mon, Jan 29, 2018 at 11:17 AM, Johann Nallathamby 
wrote:

> Hi Hasanthi,
>
> On Thu, Jan 25, 2018 at 11:30 AM, Johann Nallathamby 
> wrote:
>
>> Hi Hasanthi,
>>
>> On Wed, Jan 24, 2018 at 11:14 PM, Hasanthi Purnima Dissanayake <
>> hasan...@wso2.com> wrote:
>>
>>> Hi Johann,
>>>
>>> First of all apologies for the late reply :).
>>>
>>> Hi Hasanthi,

 On Tue, Jan 23, 2018 at 9:31 AM, Hasanthi Purnima Dissanayake <
 hasan...@wso2.com> wrote:

> Hi Johann,
>
> Is there any instance in which IS will throw error to client because
>> it cannot send the claim?
>>
>> Because in the spec it says the following.
>>
>> Note that even if the Claims are not available because the End-User
>> did not authorize their release or they are not present, the 
>> Authorization
>> Server MUST NOT generate an error when Claims are not returned, whether
>> they are Essential or Voluntary, unless otherwise specified in the
>> description of the specific claim.
>>
>> So IMO we need to have a property for each claim that says whether we
>> return an error or not.
>>
>> Wdyt?
>>
>
> What I understand from the above is, if a claim is marked as essential
> or voluntary and if the server can not return the claim the flow should 
> not
> break and server should not throw an error if it is not specially 
> specified
> in the server side. In this scope we don't specify this from server side.
> Though this is not a MUST we can add this as an improvement as it adds a
> value.
>

 So IIUC in any circumstance we don't send an error to client. Correct?

 Yes, we can add that property as an improvement.

>
>
>>> 2. Claims like "nickname" will act as default claims and
>>> will be controlled by both the requested scopes and the requested claims.
>>>
>>
>> What do you mean by controlling using requested scope? Do you mean if
>> the client doesn't request at least one scope that includes this claim we
>> won't return that claim? I don't think that is mentioned in the spec. Can
>> you clarify?
>>
>
> The spec does not directly specify how we should treat voluntary claims
> on the server side. So what we have planned is to honour the scopes and
> server-level requested claims when returning such a claim.
>

 IMO, because the spec doesn't say to do anything special on the OP side
 about not being able to release a particular claim (it says not to break
 the normal flow), there is nothing special we can differentiate between
 essential and voluntary claims. Only thing we may be able to do is, give a
 warning to user saying that if s/he doesn't approve an essential claim s/he
 won't be able to work with the application smoothly. We can't do anything
 beyond that right?

 When you say scopes which scopes are you referring to? Are they the
 requested scopes in the request or the defined scopes in the registry? I
 fail to understand what scopes have to do with claims in this case.
 Following is what I find in spec related to this.

>>>
>>> Here what I meant was requested scopes in the request. In this case the
>>> request object itself can contain scope values. If  there are scopes in the
>>> request object , the authorization request scopes will be overriden from
>>> those scopes.
>>>
>>
>> That is fine, if that's what the spec says. My concern is not about that.
>> My concern is if there is a requested claim in the request object we must
>> return it. We don't need to check them against scopes.
>>
>
> You might have missed to confirm on this. This is the most important point.
> Do you also agree that any requested claim in the request object must be
> returned regardless of the scopes we have requested or registered?
>
> Regards,
> Johann.
>
>
>>>
 "It is also the only way to request specific combinations of the
 standard Claims that cannot be specified using scope values. "

 As I understand if the specific requested OIDC claim, is defined in the
 OIDC dialect, the user has a value for that claim and s/he has approved
 that claim for the RP, then we can send them to the RP, regardless of
 whether it is defined in scope or not. Otherwise we are contradicting the
 above statement right?

>>>
>>> BTW, the above statement is for 'claim' parameter not for 'request'
>>> parameter right?
>>>
>>
>> Yes it is about claim parameter.
>>
>>
>>> If it is related to 'request' parameter as well then I also feel it is
>>> wrong to depend on the 
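Summing up the rule the thread converges on: a claim requested via the request object is returned when the user has a value for it and has approved its release, independently of scopes, and a claim that cannot be returned is omitted silently rather than raising an error. A minimal sketch of that decision, with illustrative function and flag names only:

```shell
# Decide whether to release a requested claim to the RP.
release_claim() {
  local has_value=$1 approved=$2
  if [ "$has_value" = true ] && [ "$approved" = true ]; then
    echo "return-claim"
  else
    echo "omit-claim"    # MUST NOT generate an error when a claim is absent
  fi
}

release_claim true true    # → return-claim
release_claim true false   # → omit-claim
```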

Re: [Architecture] [RRT] Improving caching based on cache-control and ETag headers

2018-01-29 Thread Keerthika Mahendralingam
Hi Malaka,

The existing ETag support is implemented at the transport level to support
client-side caching; i.e., we calculate the ETag value and add it to the
response as an ETag header. The client (e.g., a browser) can cache the
response and revalidate it on subsequent requests using that ETag. I am not
modifying that implementation.

In my case, when the backend response carries a no-cache directive, we can
still cache the response at the cache mediator level, but we need to
revalidate it with the backend before returning it to the user on a
subsequent call. So if both no-cache and an ETag are present in the cached
response, we need to send a request with the "If-None-Match" header to the
backend; if we get a 304 response, we return the already-cached response to
the user. Otherwise, we cache the newly returned response and return it to
the user.


Thanks,
Keerthika.
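The revalidation flow described above reduces to the following decision. This is a pure-shell simulation of the logic only; the real mediator would issue a backend request carrying the If-None-Match header and branch on the status code it gets back.

```shell
# Decide what to serve after revalidating a cached no-cache+ETag entry.
revalidate() {
  local status=$1 cached=$2 fresh=$3
  if [ "$status" = 304 ]; then
    echo "$cached"    # 304 Not Modified: the cached copy is still valid
  else
    echo "$fresh"     # new representation: replace the cache and return it
  fi
}

revalidate 304 'cached-body' 'fresh-body'   # → cached-body
revalidate 200 'cached-body' 'fresh-body'   # → fresh-body
```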

On Fri, Jan 26, 2018 at 4:50 PM, Malaka Gangananda  wrote:

> Hi Keerthika,
>
> As Isuru mentioned ETag caching support is already implemented.
> But it only supports Strong ETag validation since
> in PassThroughHttpSender we have implemented the Handle ETag caching, by
> just hashing the message context with digestGenerator.
> So this gives the support for strong Etag validation.
> But to support weak Etag validation we need to check Semantic equivalence
> of two representation.
>
> So are we going to implement support for Weak Etag validation as well ?.
>
> Thanks,
>
> On Wed, Jan 24, 2018 at 9:55 AM, Keerthika Mahendralingam <
> keerth...@wso2.com> wrote:
>
>>
>>> What will happen in the following case?
>>>
>>>-  Cache Expiry < Max-age && and the cache entry is evicted?
>>>
>>> I believe in that case we have to fetch it from BE?
>>>
>> Yes, if the cache expiry time is less than the max-age, then the cached
>> response will be invalidated at the expiration time limit. So we need to
>> get the response from the BE.
>>
>>>
>>> thanks,
>>> Dimuthu
>>>
>>>
>>> On Wed, Jan 24, 2018 at 8:02 AM, Riyafa Abdul Hameed 
>>> wrote:
>>>
 Hi,

 It was required to support native JSON in the cache mediator and hence
 we had to use the JsonStreamBuilder. At the time of releasing it was
 mentioned that APIM still uses JsonBuilder and I created an issue[1] to
 address this if required.

 [1] https://github.com/wso2/product-ei/issues/916

 Thanks,
 Riyafa

 On Wed, Jan 24, 2018 at 3:40 AM, Dushan Abeyruwan 
 wrote:

> Hi Kreethika,
>   Yes, this is a long pending initiative that is required under the
> cache mediator. Anyway, I believe this may be more meaningful if you draw
> flow diagram + sequence diagram so, audience in this list able to fully
> understand the picture and the interaction of the middleman (i.e
> Integration layer) and that may be helpful when writing documentation
>
 Will send those ASAP Dushan.
>>
>> Thanks,
>> Keerthika.
>>
>>>
> Cheers,
> Dushan
>
> On Fri, Jan 12, 2018 at 1:37 AM, Keerthika Mahendralingam <
> keerth...@wso2.com> wrote:
>
>> +1. Thanks Riyafa for the suggestion.
>>
>>
>> Thanks,
>> Keerthika.
>>
>> On Fri, Jan 12, 2018 at 3:05 PM, Riyafa Abdul Hameed > > wrote:
>>
>>> Hi Keerthika,
>>>
>>> We should have an option for disregarding the cache-control headers
>>> and the default value should be that the cache-control headers be
>>> disregarded. This is because the current cache mediator is written so 
>>> that
>>> it is fully backward compatible with the older versions of the cache
>>> mediators. Any one using cache mediator in a synape configuration in an
>>> older version can use the same synapse configuration in the new version 
>>> and
>>> can expect the same behavior. If he/she wants to make use of the new
>>> features he/she may do so by editing the synapse configurations.
>>>
>>> Thanks,
>>> Riyafa
>>>
>>>
>>> On Fri, Jan 12, 2018 at 12:24 PM, Keerthika Mahendralingam <
>>> keerth...@wso2.com> wrote:
>>>
 Thanks Isuru. Will check the existing functionality.

 @Vijitha,
 +1 for providing the configuration option for omitting the
 cache-control headers.

 @Sanjeewa
 Will check with the latest cache mediator.

 Thanks,
 Keerthika.

 On Fri, Jan 12, 2018 at 12:16 PM, Vijitha Ekanayake <
 vijit...@wso2.com> wrote:

> Hi Sanjeewa,
>
>
> On Fri, Jan 12, 2018 at 12:01 PM, Sanjeewa Malalgoda <
> sanje...@wso2.com> wrote:
>
>> So i think we can add latest cache mediator dependency to API
>> Manager 2.2.0 branch and test this feature.
>> If there are any gaps in 

Re: [Architecture] [RRT] [IAM] Hash code, access token, refresh token and client secret values before store them in the database

2018-01-29 Thread Nuwan Dias
On Mon, Jan 29, 2018 at 1:15 PM, Isura Karunaratne  wrote:

>
>
> On Mon, Jan 29, 2018 at 1:10 PM, Dimuthu Leelarathne 
> wrote:
>
>> Hi Nuwan,
>>
>> On Mon, Jan 29, 2018 at 1:08 PM, Nuwan Dias  wrote:
>>
>>> Hi Dimuthu,
>>>
>>> I don't think we can regenerate since the client-secret will be hashed
>>> too. So I think we have to completely disable showing the test token and
>>> remove it off from the Swagger Console as well.
>>>
>> Can't we use the clientId to regenerate the client secret?
>

I was actually referring to regenerating the access token since that's what
the "Regenerate" button does.

>
> Thanks
> Isura.
>
>>
>>> Yes. Or we can get it as input.
>>
>>
>>> We may also have to think of providing a mechanism to renew the
>>> client-secret if/when lost.
>>>
>>
>> I meant regenerate the client secret itself, not the access token.
>>
>> thanks,
>> Dimuthu
>>
>>
>>>
>>> Thanks,
>>> NuwanD.
>>>
>>>
>>>
>
>
> --
>
> *Isura Dilhara Karunaratne*
> Associate Technical Lead | WSO2
> Email: is...@wso2.com
> Mob : +94 772 254 810 <+94%2077%20225%204810>
> Blog : http://isurad.blogspot.com/
>
>
>
>


-- 
Nuwan Dias

Software Architect - WSO2, Inc. http://wso2.com
email : nuw...@wso2.com
Phone : +94 777 775 729


Re: [Architecture] [RRT] XACML based scope validator (during OAuth2 token validation)

2018-01-29 Thread Dimuthu Leelarathne
On Mon, Jan 29, 2018 at 11:38 AM, Ruwan Abeykoon  wrote:

> Hi All,
> -1 on adding anything to SP Configuration. This needs to be separated from
> SP object, or table itself.
> Reason:
> 1. We need to minimize DB changes adding features.
> 2. Adding a column per validator (XACML here) is not scalable. (What if
> another validator is added in future, do we add another column?)
>
>
> a) The DAO layer should do the necessary mapping.
> b) Can use Database Referential integrity and proper JOIN queries.
>


The configuration is not for the extension. The configuration answers the
following question:

"Do we need to perform authorization when issuing access tokens?"

There is nowhere in the IS object model that answers that question.

The way you perform authorization can be anything: a JDBC validator, a
JavaScript validator (in the future), etc. The configuration being
introduced is for the *concept*.

thanks,
Dimuthu




> c) Need to add proper extension points in the code so that the data-tables
> and UI elements can be plugged.
>
> Cheers,
> Ruwan
>
>
> On Sun, Jan 28, 2018 at 8:28 PM, Dimuthu Leelarathne 
> wrote:
>
>>
>>
>> On Wed, Jan 24, 2018 at 12:41 PM, Johann Nallathamby 
>> wrote:
>>
>>>
>>>
>>> On Tue, Jan 23, 2018 at 9:49 AM, Senthalan Kanagalingam <
>>> sentha...@wso2.com> wrote:
>>>
 Hi all,

 I have completed the scope validation implementation. But in this
 implementation, the entitlement engine has to run for every token
 validation request even when there is no policy defined by the user for a
 particular service provider. The PDP has to go through all existing
 policies to select the applicable ones. It's an overhead at token
 validation time.

 To avoid this we can introduce "Enable XACML based scope validator"
 checkbox under Local & Outbound Authentication Configuration.

>>>
>>> This should be under OAuth2 section because it's OAuth2 specific. We
>>> can't have "scope" under "Local & Outbound Authentication Configuration".
>>>
>>
>>
>> +1. It should be under OAuth2 section. And also it should be stored in
>> the same place as the OAuth2 configuration per service provider is stored.
>> Where do we store the SP configurations for OAuth2.0?
>>
>> thanks,
>> Dimuthu
>>
>>
>>> Regards,
>>> Johann.
>>>
>>>
 Then users can enable or disable scope validation for that particular
 service provider. This will be a simple select query and we can use
 caching. We can check whether the user has enabled the scope validation or
 not and continue.

 Any suggestions or improvements are highly appreciated.

 Thanks and Regards,
 Senthalan

 On Fri, Jan 19, 2018 at 6:42 PM, Senthalan Kanagalingam <
 sentha...@wso2.com> wrote:

> Hi,
>
> Here is the architecture of the XACML based scope validator.
>
>
> After checking whether the access token has expired, the scope of the token
> will be validated using JDBCScopeValidator and XACMLScopeValidator.
> The JDBCScopeValidator is already implemented. The XACMLScopeValidator
> will create an XACML request from the access token and validate it using
> the EntitlementService.
>
>
> Thanks and Regards,
> Senthalan
>
> On Tue, Jan 16, 2018 at 8:59 PM, Dimuthu Leelarathne <
> dimut...@wso2.com> wrote:
>
>> Hi Johann,
>>
>> On Tue, Jan 16, 2018 at 8:49 PM, Johann Nallathamby 
>> wrote:
>>
>>> Hi Senthalan,
>>>
>>> On Tue, Jan 16, 2018 at 12:05 PM, Senthalan Kanagalingam <
>>> sentha...@wso2.com> wrote:
>>>
 Hi Johann,

 Thanks for the feedback. Currently, I am checking that feature.

 According to my understanding, this feature will be useful to
 validate the token scopes against resource scopes. As this validation 
 is
 done by JDBCScopeValidator and my implementation will be parallel to 
 it (IS
 allows multiple scope validators), do I have to implement validation 
 of the
 token scopes against the resource scopes as well?

>>>
>>> -1 to have two implementation. There should be only one
>>> implementation which is based on XACML. Otherwise it will create 
>>> overhead
>>> in configuring and doesn't work well with tenant model.
>>>
>>
>>> The current scope-role based validation we introduced in IS 5.4.0
>>> will need to be implemented using XACML and be the default policy. The
>>> other policies you were planning could be additional template policies 
>>> we
>>> ship with the product. In addition users can have any new policies they
>>> want (per tenant).
>>>
>>>

 Because I have checked with identity-application-authz-xacml[1
 ]
 and planned to 
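The per-service-provider gate discussed in this thread, i.e. consult the PDP only when the "Enable XACML based scope validator" option is set for the SP, amounts to the following decision. Names here are illustrative, not IS APIs:

```shell
# Gate the entitlement engine behind the per-SP configuration flag.
validate_scopes() {
  local xacml_enabled=$1
  if [ "$xacml_enabled" != true ]; then
    echo "skip-pdp"    # no XACML scope validation configured for this SP
    return 0
  fi
  echo "call-pdp"      # build the XACML request and ask the EntitlementService
}

validate_scopes false   # → skip-pdp
validate_scopes true    # → call-pdp
```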

Re: [Architecture] [RRT] [IAM] Hash code, access token, refresh token and client secret values before store them in the database

2018-01-29 Thread Isura Karunaratne
On Mon, Jan 29, 2018 at 1:10 PM, Dimuthu Leelarathne 
wrote:

> Hi Nuwan,
>
> On Mon, Jan 29, 2018 at 1:08 PM, Nuwan Dias  wrote:
>
>> Hi Dimuthu,
>>
>> I don't think we can regenerate since the client-secret will be hashed
>> too. So I think we have to completely disable showing the test token and
>> remove it off from the Swagger Console as well.
>>
> Can't we use the clientId to regenerate the client secret?

Thanks
Isura.

>
>> Yes. Or we can get it as input.
>
>
>> We may also have to think of providing a mechanism to renew the
>> client-secret if/when lost.
>>
>
> I meant regenerate the client secret itself, not the access token.
>
> thanks,
> Dimuthu
>
>
>>
>> Thanks,
>> NuwanD.
>>
>>
>>


-- 

*Isura Dilhara Karunaratne*
Associate Technical Lead | WSO2
Email: is...@wso2.com
Mob : +94 772 254 810
Blog : http://isurad.blogspot.com/


Re: [Architecture] [MB] MQTT : support around 100K mqtt connections using WSO2 MB

2018-01-29 Thread Imesh Gunaratne
[+ Hasitha, Sumedha]

On Sun, Jan 28, 2018 at 9:20 PM, Youcef HILEM 
wrote:

> Hi,
> We have a fleet of over 10 android smartphones.
> We are evaluating MQTT brokers that can manage more than 100K connections
> with a large number of topics (notification, referential data, operational
> data, ...).
> Could you give me some tips on properly sizing an HA cluster that scales to
> a load of over 100K connections?
>
> Thanks
> Youcef HILEM
>
>
>
> --
> Sent from: http://wso2-oxygen-tank.10903.n7.nabble.com/WSO2-
> Architecture-f62919.html
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>



-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware


Re: [Architecture] [RRT] [IAM] Hash code, access token, refresh token and client secret values before store them in the database

2018-01-29 Thread Dimuthu Leelarathne
Hi Nuwan,

On Mon, Jan 29, 2018 at 1:08 PM, Nuwan Dias  wrote:

> Hi Dimuthu,
>
> I don't think we can regenerate since the client-secret will be hashed
> too. So I think we have to completely disable showing the test token and
> remove it off from the Swagger Console as well.
>
> Yes. Or we can get it as input.


> We may also have to think of providing a mechanism to renew the
> client-secret if/when lost.
>

I meant regenerate the client secret itself, not the access token.

thanks,
Dimuthu


>
> Thanks,
> NuwanD.
>
>
>
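Why a hashed client secret can no longer be displayed or regenerated from storage, yet can still be validated, as discussed in this thread: only the digest is persisted, and an incoming secret is hashed and compared. SHA-256 is used here purely for illustration; the actual digest algorithm and storage layout are implementation choices.

```shell
# Hash a secret; only this digest would be persisted in the database.
hash_secret() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

STORED_DIGEST=$(hash_secret 'my-client-secret')

# Validation: hash the presented secret and compare digests.
[ "$(hash_secret 'my-client-secret')" = "$STORED_DIGEST" ] && echo valid
[ "$(hash_secret 'wrong-secret')" = "$STORED_DIGEST" ] || echo invalid
```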