Re: [Dev] IoT Server - Android agent is not publishing light measurements correctly

2016-06-04 Thread Ayyoob Hamza
Hi Sinthuja,
Wanted to know whether you are trying this with the alpha pack or a nightly
build. If it is the nightly build, could you try this with the alpha pack
[1], since the latest changes in master have an issue with the agent? If
the issue still persists, could you let us know the Android OS version?

[1] https://github.com/wso2/product-iots/releases/tag/v1.0.0-alpha

Regards,
Ayyoob

*Ayyoob Hamza*
*Software Engineer*
WSO2 Inc.; http://wso2.com
email: ayy...@wso2.com cell: +94 77 1681010


Re: [Dev] Requesting the parameters for throttling policy configurations in api-manager.xml

2016-06-04 Thread Harsha Kumara
Yes, AFAIK it won't add any processing delay on the CEP side if we don't
access any data in the JSON map. But since we have to go through the header
maps and extract the query params, this will add a delay on the data
publishing side. We can enable this by default; I think it's better to also
have the option to disable it.

Thanks,
Harsha

On Sat, Jun 4, 2016 at 1:19 PM, Nuwan Dias  wrote:

> Why would we have to check if there are conditions which require query
> params? Or headers? I think we should publish them anyway. Does it cause a
> performance impact to send additional data to CEP? I don't think getting
> that data from the request has an overhead.
>
> Thanks,
> NuwanD.
>
> On Sat, Jun 4, 2016 at 10:40 AM, Harsha Kumara  wrote:
>
>> Hi Ushani,
>>
>> We added that property because sometimes users may not need to publish the
>> query params to the CEP, as they don't have policies associated with them.
>> Going through the policies applied for a particular API, determining whether
>> it has query param conditions, and then publishing the query params to the
>> CEP is expensive. So we have added those configurations.
>>
>>
>> On Sat, Jun 4, 2016 at 7:46 AM, Ushani Balasooriya 
>> wrote:
>>
>>> Additionally, why do we need to control EnableQueryParamConditions via
>>> api-manager.xml when we have an on/off switch for each conditional
>>> header in the Advanced throttling configuration UI?
>>> On Jun 4, 2016 7:24 AM, "Ushani Balasooriya"  wrote:
>>>
 Hi Amila,

 So these configurations will be applicable across all tenants?

 Also, I think it would be better if we could hide the Conditional groups when
 EnableHeaderConditions is set to false, or disable them. Likewise, would it
 be possible to have UI-level improvements to notify the admin? Just a
 thought.

>>> Yes this would be a good improvement to have in the UI.
>>
>>> Thanks,
 On Jun 3, 2016 11:00 PM, "Amila De Silva"  wrote:

> Hi Ushani,
>
> For the new throttling to work you first have to
> set EnableAdvanceThrottling to true. When EnableHeaderConditions is set to
> true, all the headers in the incoming message will be published to the
> CEP. Similarly, setting EnableJWTClaimConditions and EnableQueryParamConditions
> to true will publish the JWT claims and the query parameters coming with the
> request to the CEP. In the latest pack, spike arrest is only enabled
> through the UI, so there's no config element for it.
>
> On Fri, Jun 3, 2016 at 6:35 PM, Ushani Balasooriya 
> wrote:
>
>> Hi APIM Team,
>>
>> It would be highly appreciated if you could let us know the parameters
>> that need to be enabled in api-manager.xml when working with throttling,
>> since the documents are not ready yet.
>>
>> E.g., Spike arrest policy parameter, Query param parameter.
>>
>>
>> Thanks and regards,
>> --
>> *Ushani Balasooriya*
>> Senior Software Engineer - QA;
>> WSO2 Inc; http://www.wso2.com/.
>>
>>
>>
>
>
>
> --
> *Amila De Silva*
>
> WSO2 Inc.
> mobile :(+94) 775119302
>
>
>>
>>
>> --
>> Harsha Kumara
>> Software Engineer, WSO2 Inc.
>> Mobile: +94775505618
>> Blog:harshcreationz.blogspot.com
>>
>
>
>
> --
> Nuwan Dias
>
> Technical Lead - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 
Harsha Kumara
Software Engineer, WSO2 Inc.
Mobile: +94775505618
Blog:harshcreationz.blogspot.com
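
For quick reference, a minimal sketch of how the flags discussed above might
look in api-manager.xml. The element names (EnableAdvanceThrottling,
EnableHeaderConditions, EnableJWTClaimConditions, EnableQueryParamConditions)
are the ones named in this thread; their exact placement and the surrounding
structure are assumptions, not the released configuration:

<ThrottlingConfigurations>
    <!-- Master switch for the new (advanced) throttling -->
    <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
    <!-- Publish all incoming headers to the CEP -->
    <EnableHeaderConditions>true</EnableHeaderConditions>
    <!-- Publish JWT claims arriving with each request to the CEP -->
    <EnableJWTClaimConditions>true</EnableJWTClaimConditions>
    <!-- Publish query parameters arriving with each request to the CEP -->
    <EnableQueryParamConditions>true</EnableQueryParamConditions>
</ThrottlingConfigurations>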


[Dev] [VOTE] Release WSO2 Carbon Kernel 4.4.6 RC1

2016-06-04 Thread Kalpa Welivitigoda
Hi Devs,

This is the 1st release candidate of WSO2 Carbon Kernel 4.4.6.

This release fixes the following issues:
https://wso2.org/jira/issues/?filter=13090

Please download and test your products with kernel 4.4.6 RC1 and vote. The
vote will be open for 72 hours, or longer if needed.

Source and binary distribution files:
http://svn.wso2.org/repos/wso2/people/kalpaw/wso2carbon-4.4.6/wso2carbon-4.4.6-rc1.zip

Maven staging repository:
http://maven.wso2.org/nexus/content/repositories/orgwso2carbon-1022/

The tag to be voted upon:
https://github.com/wso2/carbon-kernel/tree/v4.4.6-rc1


[ ] Broken - do not release (explain why)
[ ] Stable - go ahead and release

Thank you
Carbon Team​

-- 
Best Regards,

Kalpa Welivitigoda
Software Engineer, WSO2 Inc. http://wso2.com
Email: kal...@wso2.com
Mobile: +94776509215


Re: [Dev] Error while registering UDF method in Spark

2016-06-04 Thread Nirmal Fernando
Can we skip certain classes getting this $jacocoInit method?
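
One way to do that (a sketch only; the filter helper below is hypothetical,
not existing WSO2 code) is to skip synthetic, instrumentation-injected
methods such as "$jacocoInit" before attempting registration:

import java.lang.reflect.Method;

public final class UdfMethodFilter {

    // Example UDF holder class, standing in for a real UDF class.
    public static class MyUdfs {
        public String upper(String s) {
            return s.toUpperCase();
        }
    }

    // JaCoCo injects a synthetic static "$jacocoInit" method into the classes
    // it instruments; skipping synthetic (and "$"-prefixed) methods avoids the
    // "Cannot determine the return DataType" error during registration.
    public static boolean isRegistrableUdf(Method method) {
        return !method.isSynthetic() && !method.getName().startsWith("$");
    }

    public static void main(String[] args) {
        for (Method m : MyUdfs.class.getDeclaredMethods()) {
            if (isRegistrableUdf(m)) {
                System.out.println("would register UDF: " + m.getName());
            }
        }
    }
}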

On Sat, Jun 4, 2016 at 11:38 PM, Sachith Withana  wrote:

> Hi Fazlan,
>
> To add to what Gimantha said, since we register methods, only the
> Jacoco-injected method would fail to register.
> All the other methods would be registered as UDFs.
>
> Regards,
> Sachith
>
> On Sat, Jun 4, 2016 at 12:35 PM, Gimantha Bandara 
> wrote:
>
>> Hi Fazlan,
>>
>> Spark UDFs are the Java methods that we define. Jacoco injects a
>> static method called "$jacocoInit" into every class it instruments during
>> the test run, so every UDF class will contain an additional method called
>> "$jacocoInit". Spark cannot determine the datatypes passed in its
>> parameters, so registering "$jacocoInit" as a Spark UDF fails. That's why
>> the error appears, and since it is not a method that we define as a Spark
>> UDF, it is harmless.
>>
>> Thanks,
>>
>> On Sat, Jun 4, 2016 at 10:12 PM, Nirmal Fernando  wrote:
>>
>>> Hi Fazlan,
>>>
>>> AFAIK this is a harmless error; Sachith should know the context.
>>>
>>> On Sat, Jun 4, 2016 at 10:09 PM, Fazlan Nazeem  wrote:
>>>
 Hi,

 We are observing some errors related to UDF registration in the API-M
 analytics test integration phase. The error keeps printing until the
 Spark client is started. It is only present during the test phase and does
 not occur when the server is started separately. DAS team, any idea on how
 we could get rid of this? Jenkins log in [1].

  [2016-06-04 02:23:32,998] ERROR 
 {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  
 Error while registering the UDF method: $jacocoInit, Cannot determine the 
 return DataType: class [Z

 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
 org.wso2.carbon.analytics.spark.core.exception.AnalyticsUDFException: 
 Cannot determine the return DataType: class [Z
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.wso2.carbon.analytics.spark.core.util.AnalyticsCommonUtils.getDataType(AnalyticsCommonUtils.java:104)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.wso2.carbon.analytics.spark.core.udf.AnalyticsUDFsRegister.registerUDF(AnalyticsUDFsRegister.java:62)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.registerUDFs(SparkAnalyticsExecutor.java:414)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSqlContext(SparkAnalyticsExecutor.java:378)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeClient(SparkAnalyticsExecutor.java:360)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkServer(SparkAnalyticsExecutor.java:189)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent.activate(AnalyticsComponent.java:77)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at java.lang.reflect.Method.invoke(Method.java:606)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
 INFO  
 [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] -
   at 
 

Re: [Dev] IoT Server - Android agent is not publishing light measurements correctly

2016-06-04 Thread Sumedha Rubasinghe
Are the other values being sent correctly?
On Jun 5, 2016 12:54 AM, "Sinthuja Ragendran"  wrote:

> Hi,
>
> I see the light is being correctly measured in the phone app, and the
> values are changing correctly. But it's not getting sent to the IoT server.
> I checked the data explorer (DAS component) in the IoT server, but the data
> doesn't seem to be there. Is it a known issue?
>
> Thanks,
> Sinthuja.
>
> --
> *Sinthuja Rajendran*
> Technical Lead
> WSO2, Inc.:http://wso2.com
>
> Blog: http://sinthu-rajan.blogspot.com/
> Mobile: +94774273955
>
>
>
>
>


[Dev] IoT Server - Android agent is not publishing light measurements correctly

2016-06-04 Thread Sinthuja Ragendran
Hi,

I see the light is being correctly measured in the phone app, and the
values are changing correctly. But it's not getting sent to the IoT server.
I checked the data explorer (DAS component) in the IoT server, but the data
doesn't seem to be there. Is it a known issue?

Thanks,
Sinthuja.

-- 
*Sinthuja Rajendran*
Technical Lead
WSO2, Inc.:http://wso2.com

Blog: http://sinthu-rajan.blogspot.com/
Mobile: +94774273955


Re: [Dev] [Architecture] Common configuration for publishing events from carbon servers to DAS/CEP

2016-06-04 Thread Buddhima Wijeweera
Hi All,

We are in the process of introducing this to ESB. I have successfully added
the relevant features, and data publishing is working fine with ESB.
But I can see the following logs on the first start of a fresh ESB pack
(they do not appear on subsequent restarts).

[2016-06-04 21:57:10,253]  INFO - EventStreamDeployer Stream definition is
deployed successfully  : esb-config-entry-stream:1.0.0
[2016-06-04 21:57:10,257]  INFO - EventStreamDeployer Stream definition is
deployed successfully  : esb-flow-entry-stream:1.0.0
*[2016-06-04 21:57:10,261]  INFO -
EventPublisherConfigurationFilesystemInvoker Event Publisher configuration
deleted from the file system : MessageFlowConfigurationPublisher.xml*
*[2016-06-04 21:57:10,261]  INFO - EventPublisherDeployer Event Publisher
undeployed successfully : MessageFlowConfigurationPublisher.xml*
[2016-06-04 21:57:10,511]  INFO -
EventPublisherConfigurationFilesystemInvoker Event Publisher configuration
saved in the filesystem : MessageFlowConfigurationPublisher.xml
[2016-06-04 21:57:10,525]  INFO - EventJunction WSO2EventConsumer added to
the junction. Stream:esb-config-entry-stream:1.0.0
[2016-06-04 21:57:10,527]  INFO - EventPublisherDeployer Event Publisher
configuration successfully deployed and in active state :
MessageFlowConfigurationPublisher
*[2016-06-04 21:57:10,528]  INFO -
EventPublisherConfigurationFilesystemInvoker Event Publisher configuration
deleted from the file system : MessageFlowStatisticsPublisher.xml*
*[2016-06-04 21:57:10,528]  INFO - EventPublisherDeployer Event Publisher
undeployed successfully : MessageFlowStatisticsPublisher.xml*
[2016-06-04 21:57:10,532]  INFO -
EventPublisherConfigurationFilesystemInvoker Event Publisher configuration
saved in the filesystem : MessageFlowStatisticsPublisher.xml
[2016-06-04 21:57:10,542]  INFO - EventJunction WSO2EventConsumer added to
the junction. Stream:esb-flow-entry-stream:1.0.0
[2016-06-04 21:57:10,543]  INFO - EventPublisherDeployer Event Publisher
configuration successfully deployed and in active state :
MessageFlowStatisticsPublisher


In this case I have two publishers, MessageFlowConfigurationPublisher
& MessageFlowStatisticsPublisher, and it seems they get deleted and
redeployed for some reason.

Is this the expected behavior (for analytics-common 5.0.12-beta2)?

Thanks,

On Fri, May 20, 2016 at 3:47 PM, Niranjan Karunanandham 
wrote:

> Hi Pulasthi,
>
> Did you add the feature to the p2-repo generation section in the pom? Can
> you check if the feature is there in your p2-repo?
>
> Regards,
> Nira
>
> On Fri, May 20, 2016 at 2:46 PM, Pulasthi Mahawithana 
> wrote:
>
>> Hi Niranjan,
>>
>> It was failing with the p2 repo that is created in p2-profile-gen.
>> Looking at the pom at [1], this feature only imports org.wso2.carbon.core,
>> which is available in IS but as a different patch release version. Does the
>> difference in the patch release version matter here? Or is it due to
>> something else?
>>
>> [1]
>> https://github.com/wso2/carbon-analytics-common/blob/v5.0.11/features/event-publisher/org.wso2.carbon.event.publisher.aggregate.feature/pom.xml
>>
>> On Fri, May 20, 2016 at 2:10 PM, Niranjan Karunanandham <
>> niran...@wso2.com> wrote:
>>
>>> Hi Pulasthi,
>>>
>>> When building the product, are you referring to the p2-repo of the
>>> locally built carbon-feature-repository or to a p2-repo that you create in
>>> p2-profile-gen? If it is the latter, this issue can happen if the feature
>>> you are installing has import features which are not in your p2-repo.
>>>
>>> @Malith: As per the previous mail, can you close this PR, since it needs
>>> to come as a PR from the product team that will be using the feature.
>>>
>>> Regards,
>>> Nira
>>>
>>> On Fri, May 20, 2016 at 12:47 PM, Pulasthi Mahawithana <
>>> pulast...@wso2.com> wrote:
>>>
 I tried to add this feature to product-is's p2-profile-gen, but it
 fails saying:

 Installation failed.
 The installable unit
 org.wso2.carbon.event.publisher.aggregate.feature.group/5.0.11 has not been
 found.

 However, I can install it from the feature management UI when I merge and
 build the carbon feature repository locally with the PR at [1] and point to
 that as a local repository. Any possible causes for this?

 [1] https://github.com/wso2/carbon-feature-repository/pull/46

 On Wed, Apr 27, 2016 at 9:08 AM, Niranjan Karunanandham <
 niran...@wso2.com> wrote:

> Hi Malith,
>
> Since this feature will be included in the next ESB release, IMO it
> would be better to close the current PR and have it included in the same 
> PR
> after the product release.
>
> Regards,
> Nira
>
> On Mon, Apr 25, 2016 at 6:05 PM, Malith Dhanushka 
> wrote:
>
>> Hi Niranjan,
>>
>> Correction to my previous reply: we have to ship this feature by
>> default with ESB 

Re: [Dev] Error while registering UDF method in Spark

2016-06-04 Thread Sachith Withana
Hi Fazlan,

To add to what Gimantha said, since we register methods, only the
Jacoco-injected method would fail to register.
All the other methods would be registered as UDFs.

Regards,
Sachith

On Sat, Jun 4, 2016 at 12:35 PM, Gimantha Bandara  wrote:

> Hi Fazlan,
>
> Spark UDFs are the Java methods that we define. Jacoco injects a
> static method called "$jacocoInit" into every class it instruments during
> the test run, so every UDF class will contain an additional method called
> "$jacocoInit". Spark cannot determine the datatypes passed in its
> parameters, so registering "$jacocoInit" as a Spark UDF fails. That's why
> the error appears, and since it is not a method that we define as a Spark
> UDF, it is harmless.
>
> Thanks,
>
> On Sat, Jun 4, 2016 at 10:12 PM, Nirmal Fernando  wrote:
>
>> Hi Fazlan,
>>
>> AFAIK this is a harmless error; Sachith should know the context.
>>
>> On Sat, Jun 4, 2016 at 10:09 PM, Fazlan Nazeem  wrote:
>>
>>> Hi,
>>>
>>> We are observing some errors related to UDF registration in the API-M
>>> analytics test integration phase. The error keeps printing until the
>>> Spark client is started. It is only present during the test phase and
>>> does not occur when the server is started separately. DAS team, any idea
>>> on how we could get rid of this? Jenkins log in [1].
>>>
>>>  [2016-06-04 02:23:32,998] ERROR 
>>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  
>>> Error while registering the UDF method: $jacocoInit, Cannot determine the 
>>> return DataType: class [Z
>>>
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> - org.wso2.carbon.analytics.spark.core.exception.AnalyticsUDFException: 
>>> Cannot determine the return DataType: class [Z
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.wso2.carbon.analytics.spark.core.util.AnalyticsCommonUtils.getDataType(AnalyticsCommonUtils.java:104)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.wso2.carbon.analytics.spark.core.udf.AnalyticsUDFsRegister.registerUDF(AnalyticsUDFsRegister.java:62)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.registerUDFs(SparkAnalyticsExecutor.java:414)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSqlContext(SparkAnalyticsExecutor.java:378)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeClient(SparkAnalyticsExecutor.java:360)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkServer(SparkAnalyticsExecutor.java:189)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent.activate(AnalyticsComponent.java:77)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at java.lang.reflect.Method.invoke(Method.java:606)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
>>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>>> -   at 
>>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
>>> INFO  

Re: [Dev] GSOC 2016 - Project 21 : MongoDB Userstore Development

2016-06-04 Thread Tharindu Edirisinghe
The call details are as follows.

1. Demonstrated how to use WSO2 admin services.

2. All the user operations in *RemoteUserStoreManagerService* [1] should be
tested (using SOAP UI) against the mongodb userstore.

3. When the user profile is saved in a JDBC userstore, for each attribute
of the user it will add a new row to *UM_USER_ATTRIBUTE* *(refer to [2] for
more information)*, like below:

+-------+----------------------+-----------------+---------------+------------+--------------+
| UM_ID | UM_ATTR_NAME         | UM_ATTR_VALUE   | UM_PROFILE_ID | UM_USER_ID | UM_TENANT_ID |
+-------+----------------------+-----------------+---------------+------------+--------------+
|     1 | im                   |                 | default       |          1 |        -1234 |
|     2 | region               | Western         | default       |          1 |        -1234 |
|     3 | streetAddress        |                 | default       |          1 |        -1234 |
|     4 | country              |                 | default       |          1 |        -1234 |
|     5 | mobile               |                 | default       |          1 |        -1234 |
|     6 | sn                   | NewLastname     | default       |          1 |        -1234 |
|     7 | profileConfiguration | default         | default       |          1 |        -1234 |
|     8 | dateOfBirth          |                 | default       |          1 |        -1234 |
|     9 | mail                 | newu...@new.com | default       |          1 |        -1234 |
|    10 | organizationName     | WSO2            | default       |          1 |        -1234 |
|    11 | givenName            | NewUser         | default       |          1 |        -1234 |
|    12 | province             | western         | default       |          1 |        -1234 |
+-------+----------------------+-----------------+---------------+------------+--------------+

Performance-wise, this is not a good design. For the mongodb userstore, I
suggested adding one document per user to the *UM_USER_ATTRIBUTE* collection.
If an attribute value is empty in the profile, an empty string can be
stored:

{
   "im": "",
   "region": "Western",
   "streetAddress": "",
   "country": "",
   "mobile": "",
   "sn": "NewLastname",
   "profileConfiguration": "default",
   "dateOfBirth": "",
   "mail": "newu...@new.com",
   "organizationName": "WSO2",
   "givenName": "NewUser",
   "province": "western"
}

4. Profile saving currently has some issues, and we need to investigate
further what is going wrong. Until the issue is figured out, I asked him to
manually create JSON documents in the *UM_USER_ATTRIBUTE* collection and
implement the retrieval of user attributes, as sketched below.

*(can test getUserClaimValues method in the admin service using SOAP UI)*
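
A minimal sketch of what that retrieval could look like with the MongoDB Java
sync driver; the database name, collection name, and filter fields below are
assumptions based on the structure above, not the project's actual code:

import org.bson.Document;

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;

public class UserAttributeReader {

    public static void main(String[] args) {
        // Connection string, database, collection, and field names are assumptions.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> attrs = client.getDatabase("wso2_um")
                    .getCollection("UM_USER_ATTRIBUTE");
            // One attribute document is kept per user, so a single lookup suffices.
            Document profile = attrs.find(Filters.and(
                    Filters.eq("UM_USER_ID", 1),
                    Filters.eq("UM_TENANT_ID", -1234))).first();
            if (profile != null) {
                System.out.println("mail = " + profile.getString("mail"));
            }
        }
    }
}
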
5. For the analytics part of the project, I suggested extending the
*AbstractUserOperationEventListener* class [3] and overriding its methods to
publish events.
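
As an illustration, a minimal sketch of such a listener. The base class and
the doPostAddUser signature are from carbon-kernel [3]; the published payload
and the ordering value are assumptions:

import java.util.Map;

import org.wso2.carbon.user.core.UserStoreException;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.user.core.common.AbstractUserOperationEventListener;

public class AnalyticsUserOperationListener extends AbstractUserOperationEventListener {

    @Override
    public int getExecutionOrderId() {
        // Ordering among registered listeners; the value is illustrative.
        return 9000;
    }

    @Override
    public boolean doPostAddUser(String userName, Object credential, String[] roleList,
                                 Map<String, String> claims, String profile,
                                 UserStoreManager userStoreManager) throws UserStoreException {
        // Replace this with a call to whichever event publisher the project uses.
        System.out.println("User added: " + userName);
        // Returning true lets the rest of the listener chain run.
        return true;
    }
}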

6. Developer documentation, an administration guide, and a testing guide
*(sample SOAP requests and responses for the RemoteUserStoreManagerService
API)* should be written as deliverables. For all actions related to the
mongodb userstore manager *(i.e. add user, delete user, add role, ...)*,
Selenium scripts should be provided *(you can use the Firefox Selenium add-on
to record each operation and provide the scripts)*.

So far the progress is satisfactory. Keep up the good work!

[1] https://localhost:9443/services/RemoteUserStoreManagerService?wsdl
[2]
http://tharindue.blogspot.com/2015/04/wso2-identity-server-data-dictionary.html
[3]
https://github.com/wso2/carbon-kernel/blob/v4.4.3/core/org.wso2.carbon.user.core/src/main/java/org/wso2/carbon/user/core/common/AbstractUserOperationEventListener.java

Thank you,
TharinduE

On Sat, Jun 4, 2016 at 9:48 AM, Asantha Thilina 
wrote:

> Hi Tharindu,
>
> OK sure, I will look forward to that.
>
> Thanks,
> Asantha
>
> On Fri, Jun 3, 2016 at 2:44 PM, Tharindu Edirisinghe 
> wrote:
>
>> Hi Asantha,
>>
>> Shall we have a Google hangout tomorrow (Saturday) at 9.00 p.m.? Then we
>> can discuss the issues you are facing and get them resolved.
>>
>> Regards,
>> TharinduE
>>
>> On Fri, Jun 3, 2016 at 11:43 PM, Asantha Thilina <
>> asanthathil...@gmail.com> wrote:
>>
>>> Hi Tharindu,
>>>
>>> I fixed most of the errors that appeared on the user management side of
>>> my userstore; it's almost done now. I can add new users and roles, and
>>> search the roles of users and the users of roles. The only issue I'm
>>> having is that I can't update a user's profile in the userstore: I get an
>>> exception. I'd like some advice from you on resolving that error and on
>>> implementing logic to commit transactions in mongodb. Another small
>>> problem: when I add a new claim, where will it be saved in the primary
>>> userstore? Is there a feature to change the userstore where claims are
>>> saved, like the dropdown that lets you select a userstore when adding new
>>> users and roles?
>>>
>>> All the work I have done so far is in my repo [1].
>>>
>>> 

Re: [Dev] Error while registering UDF method in Spark

2016-06-04 Thread Gimantha Bandara
Hi Fazlan,

Spark UDFs are the Java methods that we define. Jacoco injects a
static method called "$jacocoInit" into every class it instruments during the
test run, so every UDF class will contain an additional method called
"$jacocoInit". Spark cannot determine the datatypes passed in its parameters,
so registering "$jacocoInit" as a Spark UDF fails. That's why the error
appears, and since it is not a method that we define as a Spark UDF, it is
harmless.

Thanks,

On Sat, Jun 4, 2016 at 10:12 PM, Nirmal Fernando  wrote:

> Hi Fazlan,
>
> AFAIK this is a harmless error; Sachith should know the context.
>
> On Sat, Jun 4, 2016 at 10:09 PM, Fazlan Nazeem  wrote:
>
>> Hi,
>>
>> We are observing some errors related to UDF registration in the API-M
>> analytics test integration phase. The error keeps printing until the
>> Spark client is started. It is only present during the test phase and
>> does not occur when the server is started separately. DAS team, any idea
>> on how we could get rid of this? Jenkins log in [1].
>>
>>  [2016-06-04 02:23:32,998] ERROR 
>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  
>> Error while registering the UDF method: $jacocoInit, Cannot determine the 
>> return DataType: class [Z
>>
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> - org.wso2.carbon.analytics.spark.core.exception.AnalyticsUDFException: 
>> Cannot determine the return DataType: class [Z
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.wso2.carbon.analytics.spark.core.util.AnalyticsCommonUtils.getDataType(AnalyticsCommonUtils.java:104)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.wso2.carbon.analytics.spark.core.udf.AnalyticsUDFsRegister.registerUDF(AnalyticsUDFsRegister.java:62)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.registerUDFs(SparkAnalyticsExecutor.java:414)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSqlContext(SparkAnalyticsExecutor.java:378)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeClient(SparkAnalyticsExecutor.java:360)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkServer(SparkAnalyticsExecutor.java:189)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent.activate(AnalyticsComponent.java:77)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at java.lang.reflect.Method.invoke(Method.java:606)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
>> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] 
>> -at 
>> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
>> INFO  

Re: [Dev] Error while registering UDF method in Spark

2016-06-04 Thread Nirmal Fernando
Hi Fazlan,

AFAIK this is a harmless error; Sachith should know the context.

On Sat, Jun 4, 2016 at 10:09 PM, Fazlan Nazeem  wrote:

> Hi,
>
> We are observing some errors related to UDF registration in the API-M
> analytics test integration phase. The error keeps printing until the
> Spark client is started. It is only present during the test phase and
> does not occur when the server is started separately. DAS team, any idea
> on how we could get rid of this? Jenkins log in [1].
>
>  [2016-06-04 02:23:32,998] ERROR 
> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -  
> Error while registering the UDF method: $jacocoInit, Cannot determine the 
> return DataType: class [Z
>
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsUDFException: Cannot 
> determine the return DataType: class [Z
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.wso2.carbon.analytics.spark.core.util.AnalyticsCommonUtils.getDataType(AnalyticsCommonUtils.java:104)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.wso2.carbon.analytics.spark.core.udf.AnalyticsUDFsRegister.registerUDF(AnalyticsUDFsRegister.java:62)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.registerUDFs(SparkAnalyticsExecutor.java:414)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSqlContext(SparkAnalyticsExecutor.java:378)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeClient(SparkAnalyticsExecutor.java:360)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkServer(SparkAnalyticsExecutor.java:189)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent.activate(AnalyticsComponent.java:77)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at java.lang.reflect.Method.invoke(Method.java:606)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
> INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader] - 
> at 
> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
> INFO  

[Dev] Error while registering UDF method in Spark

2016-06-04 Thread Fazlan Nazeem
Hi,

We are observing some errors related to UDF registration in the API-M
analytics test integration phase. The error keeps printing until the Spark
client is started. It is only present during the test phase and does not
occur when the server is started separately. DAS team, any idea on how we
could get rid of this? Jenkins log in [1].

 [2016-06-04 02:23:32,998] ERROR
{org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor}
-  Error while registering the UDF method: $jacocoInit, Cannot
determine the return DataType: class [Z

INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
- org.wso2.carbon.analytics.spark.core.exception.AnalyticsUDFException:
Cannot determine the return DataType: class [Z
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.wso2.carbon.analytics.spark.core.util.AnalyticsCommonUtils.getDataType(AnalyticsCommonUtils.java:104)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.wso2.carbon.analytics.spark.core.udf.AnalyticsUDFsRegister.registerUDF(AnalyticsUDFsRegister.java:62)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.registerUDFs(SparkAnalyticsExecutor.java:414)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSqlContext(SparkAnalyticsExecutor.java:378)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeClient(SparkAnalyticsExecutor.java:360)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkServer(SparkAnalyticsExecutor.java:189)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent.activate(AnalyticsComponent.java:77)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at java.lang.reflect.Method.invoke(Method.java:606)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
INFO  [org.wso2.carbon.automation.extensions.servers.utils.ServerLogReader]
-   at 
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
INFO  

Re: [Dev] Fwd: GSOC2016: [ML][CEP] Predictive analytic with online data for WSO2 Machine Learner

2016-06-04 Thread Mahesh Dananjaya
Hi Maheshakya,
If you want to run it, please use the following queries.
1. StreamingLinearRegression

from Stream4InputStream#streaming:streaminglr(0, 2, 0.95, salary, rbi,
walks, strikeouts, errors)
select *
insert into regResults;

2. StreamingKMeansClustering

from Stream8Input#streaming:streamingkm(0, 2, 0.95, 2, 20, salary, rbi,
walks, strikeouts, errors)
select *
insert into regResults;

In both cases, the first parameter lets you decide which learning method you
want: moving window, batch processing, or time-based model learning.
BR,
Mahesh.

On Sat, Jun 4, 2016 at 8:45 PM, Mahesh Dananjaya 
wrote:

> Hi Maheshakya,
> I have added the moving window method and updated the previous
> StreamingLinearRegression [1], which only performed batch processing on
> streaming data. I also added the StreamingKMeansClustering [1] for our
> purposes and debugged them. Thank you.
> regards,
> Mahesh.
> [1]
> https://github.com/dananjayamahesh/GSOC2016/tree/master/gsoc/siddhi/extension/streaming/src/main/java/org/gsoc/siddhi/extension/streaming
>
> On Sat, Jun 4, 2016 at 5:58 PM, Supun Sethunga  wrote:
>
>> Thanks Mahesh! The graphs look promising! :)
>>
>> So by looking at the graph, LR with SGD can train a model within 60 secs
>> (6*10^10 ns), using about 900,000 data points. This means the online
>> training can handle events/data points coming at a rate of 15,000 per second
>> (or more), if the batch size is set to 900,000 (or less) or the window size
>> is set to 60 secs (or less). This is great IMO!
>>
>> On Sat, Jun 4, 2016 at 10:51 AM, Mahesh Dananjaya <
>> dananjayamah...@gmail.com> wrote:
>>
>>> Hi Maheshakya,
>>> As you requested, I can change other parameters as well, such as the
>>> feature size (p); initially I did it with p=3. Sure thing. Anyway, you can
>>> see and run the code if you want; the source is at [1]. The timing test is
>>> called with random data, as you requested, if you set args[0] to 1. And you
>>> can find the extension and streaming algorithms in the gsoc/ directory [2].
>>> Thank you.
>>> BR,
>>> Mahesh.
>>> [1]
>>> https://github.com/dananjayamahesh/GSOC2016/blob/master/spark-examples/first-example/src/main/java/org/sparkexample/StreamingLinearRegression.java
>>> [2] https://github.com/dananjayamahesh/GSOC2016/tree/master/gsoc
>>>
>>> On Sat, Jun 4, 2016 at 10:39 AM, Mahesh Dananjaya <
>>> dananjayamah...@gmail.com> wrote:
>>>
 Hi Supun,
 Though I pushed it yesterday, there were some problems with the network;
 now you can see them in the repo location [1]. I added some Matlab plots, and
 you can see the pattern there (you can use ML as well). OK, sure thing, I can
 prepare a report or a blog if you want. The files are as follows; the y axis
 is in ns and the x axis is the batch size. I also added two plots as JPEGs
 [2], so you can easily compare.
 lr_timing_1000.txt -> batch size incremented by 1000
 lr_timing_1.txt -> batch size incremented by 1
 lr_timing_power10.txt -> batch size incremented by powers of 10

 Here the only independent variable is the batch size. If you want, I can
 send you results with other parameters, such as step size, number of
 iterations, and feature vector size, as the independent variables. Please
 let me know if you want further info. Thank you.
 regards,
 Mahesh.


 [1
 ]https://github.com/dananjayamahesh/GSOC2016/tree/master/spark-examples/first-example/output
 [2]
 https://github.com/dananjayamahesh/GSOC2016/blob/master/spark-examples/first-example/output/lr_timing_1.jpg

 On Sat, Jun 4, 2016 at 9:58 AM, Supun Sethunga  wrote:

> Hi Mahesh,
>
> I have added those timing reports to my repo [1].
>
> What's the file name? :)
>
> Btw, can you compile a simple doc (gdoc) with the above results, and
> bring everything to one place? That way it is easy to compare, and keep
> track.
>
> Thanks,
> Supun
>
> On Fri, Jun 3, 2016 at 7:23 PM, Mahesh Dananjaya <
> dananjayamah...@gmail.com> wrote:
>
>> Hi Maheshkya,
>> I have added those timing reports to my repo [1].please have a look
>> at. three files are there. one is using incremet as 1000 for batch sizes
>> (lr_timing_1000). Otherone is using incremet by 1 (lr_timing_1)
>> upto 1 million in both scenarios.you can see the reports and figures in 
>> the
>> location [2] in the repo. i also added the streaminglinearregression
>> classes in the repo gsoc folder.thank you.
>> regards,
>> Mahesh.
>> [1]https://github.com/dananjayamahesh/GSOC2016
>> [2]
>> https://github.com/dananjayamahesh/GSOC2016/tree/master/spark-examples/first-example/output
>>
>> On Mon, May 30, 2016 at 9:24 AM, Maheshakya Wijewardena <
>> mahesha...@wso2.com> wrote:
>>
>>> Hi Mahesh,
>>>
>>> Thank you for the update. I will look into your implementation.
>>>
>>> And i will be able to send you the 

Re: [Dev] Fwd: GSOC2016: [ML][CEP] Predictive analytic with online data for WSO2 Machine Learner

2016-06-04 Thread Mahesh Dananjaya
Hi Maheshakya,
I have added the moving window method and updated the previous
StreamingLinearRegression [1], which only performed batch processing on
streaming data. I also added the StreamingKMeansClustering [1] for our
purposes and debugged them. Thank you.
regards,
Mahesh.
[1]
https://github.com/dananjayamahesh/GSOC2016/tree/master/gsoc/siddhi/extension/streaming/src/main/java/org/gsoc/siddhi/extension/streaming
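
For context, a minimal sketch of the stock Spark MLlib streaming linear
regression API that this work builds on; the socket source, the parsing
format, and the 3-feature dimension are illustrative assumptions, not the
GSoC code:

import org.apache.spark.SparkConf;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.regression.StreamingLinearRegressionWithSGD;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingLrExample {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("StreamingLR");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

        // Each line: "label,f1 f2 f3" (this parsing format is an assumption).
        JavaDStream<LabeledPoint> training = jssc.socketTextStream("localhost", 9999)
                .map(line -> {
                    String[] parts = line.split(",");
                    String[] f = parts[1].trim().split(" ");
                    double[] features = new double[f.length];
                    for (int i = 0; i < f.length; i++) {
                        features[i] = Double.parseDouble(f[i]);
                    }
                    return new LabeledPoint(Double.parseDouble(parts[0]), Vectors.dense(features));
                });

        StreamingLinearRegressionWithSGD model = new StreamingLinearRegressionWithSGD()
                .setInitialWeights(Vectors.zeros(3)); // 3 features assumed
        model.trainOn(training); // the model is re-trained on every incoming batch

        jssc.start();
        jssc.awaitTermination();
    }
}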

On Sat, Jun 4, 2016 at 5:58 PM, Supun Sethunga  wrote:

> Thanks Mahesh! The graphs look promising! :)
>
> So by looking at the graph, LR with SGD can train a model within 60 secs
> (6*10^10 ns), using about 900,000 data points. This means the online
> training can handle events/data points coming at a rate of 15,000 per second
> (or more), if the batch size is set to 900,000 (or less) or the window size
> is set to 60 secs (or less). This is great IMO!
>
> On Sat, Jun 4, 2016 at 10:51 AM, Mahesh Dananjaya <
> dananjayamah...@gmail.com> wrote:
>
>> Hi Maheshakya,
>> As you requested, I can change other parameters as well, such as the
>> feature size (p); initially I did it with p=3. Sure thing. Anyway, you can
>> see and run the code if you want; the source is at [1]. The timing test is
>> called with random data, as you requested, if you set args[0] to 1. And you
>> can find the extension and streaming algorithms in the gsoc/ directory [2].
>> Thank you.
>> BR,
>> Mahesh.
>> [1]
>> https://github.com/dananjayamahesh/GSOC2016/blob/master/spark-examples/first-example/src/main/java/org/sparkexample/StreamingLinearRegression.java
>> [2] https://github.com/dananjayamahesh/GSOC2016/tree/master/gsoc
>>
>> On Sat, Jun 4, 2016 at 10:39 AM, Mahesh Dananjaya <
>> dananjayamah...@gmail.com> wrote:
>>
>>> Hi Supun,
>>> Though I pushed it yesterday, there were some problems with the network;
>>> now you can see them in the repo location [1]. I added some Matlab plots, and
>>> you can see the pattern there (you can use ML as well). OK, sure thing, I can
>>> prepare a report or a blog if you want. The files are as follows; the y axis
>>> is in ns and the x axis is the batch size. I also added two plots as JPEGs
>>> [2], so you can easily compare.
>>> lr_timing_1000.txt -> batch size incremented by 1000
>>> lr_timing_1.txt -> batch size incremented by 1
>>> lr_timing_power10.txt -> batch size incremented by powers of 10
>>>
>>> Here the only independent variable is the batch size. If you want, I can
>>> send you results with other parameters, such as step size, number of
>>> iterations, and feature vector size, as the independent variables. Please
>>> let me know if you want further info. Thank you.
>>> regards,
>>> Mahesh.
>>>
>>>
>>> [1
>>> ]https://github.com/dananjayamahesh/GSOC2016/tree/master/spark-examples/first-example/output
>>> [2]
>>> https://github.com/dananjayamahesh/GSOC2016/blob/master/spark-examples/first-example/output/lr_timing_1.jpg
>>>
>>> On Sat, Jun 4, 2016 at 9:58 AM, Supun Sethunga  wrote:
>>>
 Hi Mahesh,

 I have added those timing reports to my repo [1].

 What's the file name? :)

 Btw, can you compile a simple doc (gdoc) with the above results, and
 bring everything to one place? That way it is easy to compare, and keep
 track.

 Thanks,
 Supun

 On Fri, Jun 3, 2016 at 7:23 PM, Mahesh Dananjaya <
 dananjayamah...@gmail.com> wrote:

> Hi Maheshakya,
> I have added those timing reports to my repo [1]. Please have a look:
> there are three files. One uses an increment of 1000 for the batch sizes
> (lr_timing_1000); the other uses an increment of 1 (lr_timing_1),
> up to 1 million in both scenarios. You can see the reports and figures at
> location [2] in the repo. I also added the streaming linear regression
> classes to the repo's gsoc folder. Thank you.
> regards,
> Mahesh.
> [1]https://github.com/dananjayamahesh/GSOC2016
> [2]
> https://github.com/dananjayamahesh/GSOC2016/tree/master/spark-examples/first-example/output
>
> On Mon, May 30, 2016 at 9:24 AM, Maheshakya Wijewardena <
> mahesha...@wso2.com> wrote:
>
>> Hi Mahesh,
>>
>> Thank you for the update. I will look into your implementation.
>>
>> And i will be able to send you the timing/performances analysis
>>> report tomorrow for the SGD functions
>>>
>>
>> Great. Send those ASAP so that we can proceed.
>>
>> Best regards.
>>
>> On Sun, May 29, 2016 at 6:56 PM, Mahesh Dananjaya <
>> dananjayamah...@gmail.com> wrote:
>>
>>>
>>> Hi Maheshakya,
>>> I have implemented the linear regression with the CEP Siddhi event
>>> stream, taking batch sizes as parameters from the CEP. Now we can try
>>> the moving window method too. Before that, I think I should get your
>>> opinion on the data structures to save the streaming data. Please check
>>> my repo [1], /gsoc/ folder; there you can find all the new things I
>>> added. There in 

Re: [Dev] What is "startingDelayInSeconds" in registry indexing?

2016-06-04 Thread Bhathiya Jayasekara
Found this in [2].

<startingDelayInSeconds>60</startingDelayInSeconds>
When the server is starting (at the time the registry indexing service
registers in OSGi), the indexing task is scheduled with a 60 s delay.

Can someone please explain the importance of this? If I set this value to
0, what can go wrong?

[2] http://www.vitharana.org/2015/02/registry-resource-indexing-in-wso2.html

Thanks,
Bhathiya
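
For reference, a sketch of the registry.xml block this setting belongs to;
the element names follow the linked documentation, and the frequency value
shown is illustrative:

<indexingConfiguration>
    <!-- Delay before the first indexing run, counted from when the
         indexing service registers in OSGi -->
    <startingDelayInSeconds>60</startingDelayInSeconds>
    <!-- How often the indexing task runs thereafter (illustrative value) -->
    <indexingFrequencyInSeconds>3</indexingFrequencyInSeconds>
</indexingConfiguration>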

On Sat, Jun 4, 2016 at 8:06 PM, Bhathiya Jayasekara 
wrote:

> Hi all,
>
> Can someone please explain "startingDelayInSeconds" here[1]?
>
> [1] https://docs.wso2.com/display/Governance520/Configuration+for+Indexing
>
> Thanks,
> --
> *Bhathiya Jayasekara*
> *Senior Software Engineer,*
> *WSO2 inc., http://wso2.com *
>
> *Phone: +94715478185*
> *LinkedIn: http://www.linkedin.com/in/bhathiyaj*
> *Twitter: https://twitter.com/bhathiyax*
> *Blog: http://movingaheadblog.blogspot.com*
>



-- 
*Bhathiya Jayasekara*
*Senior Software Engineer,*
*WSO2 inc., http://wso2.com *

*Phone: +94715478185*
*LinkedIn: http://www.linkedin.com/in/bhathiyaj*
*Twitter: https://twitter.com/bhathiyax*
*Blog: http://movingaheadblog.blogspot.com*


[Dev] What is "startingDelayInSeconds" in registry indexing?

2016-06-04 Thread Bhathiya Jayasekara
Hi all,

Can someone please explain "startingDelayInSeconds" here[1]?

[1] https://docs.wso2.com/display/Governance520/Configuration+for+Indexing

Thanks,
-- 
*Bhathiya Jayasekara*
*Senior Software Engineer,*
*WSO2 inc., http://wso2.com *

*Phone: +94715478185*
*LinkedIn: http://www.linkedin.com/in/bhathiyaj*
*Twitter: https://twitter.com/bhathiyax*
*Blog: http://movingaheadblog.blogspot.com*


Re: [Dev] Predictive analytic with streaming data for WSO2 Machine Learner

2016-06-04 Thread Nifras Ismail
Hi Ishani,

Please refer to [1]; it should give you a clearer picture.

[1]
https://docs.wso2.com/display/CEP410/Predictive+Analytics+with+WSO2+Machine+Learner#PredictiveAnalyticswithWSO2MachineLearner-PrerequisitesPrerequisites


On Fri, Jun 3, 2016 at 2:06 PM, Ishani Pathinayake <
ishanipathinay...@gmail.com> wrote:

> Hi all,
>
> I'm Ishani from SLIIT, an undergraduate in Software Engineering. I am
> trying to do "Predictive analytics with streaming data for WSO2 Machine
> Learner" for an academic purpose.
>
> I would like to know the structure of the streaming data, or what type
> of streaming data you expect to handle here.
>
> Thank you.
>
> --
> *Cheers !*
> Ishani Pathinayake.
>
>
>


-- 
Nifras Ismail
Associate Software Engineer
WSO2
Email : nif...@wso2.com
Mobile : 0094 77 89 90 300


Re: [Dev] Fwd: GSOC2016: [ML][CEP] Predictive analytic with online data for WSO2 Machine Learner

2016-06-04 Thread Supun Sethunga
Thanks Mahesh! The graphs look promising! :)

So by looking at the graph, LR with SGD can train a model within 60 secs
(6*10^10 ns), using about 900,000 data points. This means the online training
can handle events/data points coming at a rate of 15,000 per second (or
more), if the batch size is set to 900,000 (or less) or the window size is
set to 60 secs (or less). This is great IMO!

On Sat, Jun 4, 2016 at 10:51 AM, Mahesh Dananjaya  wrote:

> Hi Maheshakya,
> As you requested, I can change other parameters as well, such as the
> feature size (p); initially I did it with p=3. Sure thing. Anyway, you can
> see and run the code if you want; the source is at [1]. The timing test is
> called with random data, as you requested, if you set args[0] to 1. And you
> can find the extension and streaming algorithms in the gsoc/ directory [2].
> Thank you.
> BR,
> Mahesh.
> [1]
> https://github.com/dananjayamahesh/GSOC2016/blob/master/spark-examples/first-example/src/main/java/org/sparkexample/StreamingLinearRegression.java
> [2] https://github.com/dananjayamahesh/GSOC2016/tree/master/gsoc
>
> On Sat, Jun 4, 2016 at 10:39 AM, Mahesh Dananjaya <
> dananjayamah...@gmail.com> wrote:
>
>> Hi Supun,
>> Though I pushed it yesterday, there were some problems with the network;
>> now you can see them in the repo location [1]. I added some Matlab plots, and
>> you can see the pattern there (you can use ML as well). OK, sure thing, I can
>> prepare a report or a blog if you want. The files are as follows; the y axis
>> is in ns and the x axis is the batch size. I also added two plots as JPEGs
>> [2], so you can easily compare.
>> lr_timing_1000.txt -> batch size incremented by 1000
>> lr_timing_1.txt -> batch size incremented by 1
>> lr_timing_power10.txt -> batch size incremented by powers of 10
>>
>> Here the only independent variable is the batch size. If you want, I can
>> send you results with other parameters, such as step size, number of
>> iterations, and feature vector size, as the independent variables. Please
>> let me know if you want further info. Thank you.
>> regards,
>> Mahesh.
>>
>>
>> [1
>> ]https://github.com/dananjayamahesh/GSOC2016/tree/master/spark-examples/first-example/output
>> [2]
>> https://github.com/dananjayamahesh/GSOC2016/blob/master/spark-examples/first-example/output/lr_timing_1.jpg
>>
>> On Sat, Jun 4, 2016 at 9:58 AM, Supun Sethunga  wrote:
>>
>>> Hi Mahesh,
>>>
>>> I have added those timing reports to my repo [1].
>>>
>>> What's the file name? :)
>>>
>>> Btw, can you compile a simple doc (gdoc) with the above results, and bring
>>> everything to one place? That way it is easy to compare, and keep track.
>>>
>>> Thanks,
>>> Supun
>>>
>>> On Fri, Jun 3, 2016 at 7:23 PM, Mahesh Dananjaya <
>>> dananjayamah...@gmail.com> wrote:
>>>
 Hi Maheshakya,
 I have added those timing reports to my repo [1]. Please have a look:
 there are three files. One uses an increment of 1000 for the batch sizes
 (lr_timing_1000); the other uses an increment of 1 (lr_timing_1),
 up to 1 million in both scenarios. You can see the reports and figures at
 location [2] in the repo. I also added the streaming linear regression
 classes to the repo's gsoc folder. Thank you.
 regards,
 Mahesh.
 [1]https://github.com/dananjayamahesh/GSOC2016
 [2]
 https://github.com/dananjayamahesh/GSOC2016/tree/master/spark-examples/first-example/output

 On Mon, May 30, 2016 at 9:24 AM, Maheshakya Wijewardena <
 mahesha...@wso2.com> wrote:

> Hi Mahesh,
>
> Thank you for the update. I will look into your implementation.
>
> And i will be able to send you the timing/performances analysis report
>> tomorrow for the SGD functions
>>
>
> Great. Send those asap so that we can proceed.
>
> Best regards.
>
> On Sun, May 29, 2016 at 6:56 PM, Mahesh Dananjaya <
> dananjayamah...@gmail.com> wrote:
>
>>
>> Hi Maheshakya,
>> I have implemented linear regression with the CEP Siddhi event stream,
>> taking batch sizes as parameters from the CEP. Now we can try the
>> moving window method too. Before that, I think I should get your opinion
>> on data structures for saving the streaming data. Please check the /gsoc/
>> folder in my repo [1]; there you can find all the new things I added. In
>> the extension folder you can find those extensions. And I will be able to
>> send you the timing/performance analysis report tomorrow for the SGD
>> functions. Thank you.
>> regards,
>> Mahesh.
>> [1] https://github.com/dananjayamahesh/GSOC2016/tree/master/gsoc
>>
>>
>> On Fri, May 27, 2016 at 12:56 PM, Mahesh Dananjaya <
>> dananjayamah...@gmail.com> wrote:
>>
>>> Hi Maheshakya,
>>> I have written some Siddhi extensions and am trying to develop one of
>>> my own. In the time series example in [1], can you please explain the
>>> input format and query 

Re: [Dev] Fwd: GSOC2016: [ML][CEP] Predictive analytic with online data for WSO2 Machine Learner

2016-06-04 Thread Mahesh Dananjaya
Hi Maheshakya,
I have looked into the Spark Streaming fundamentals and k-means clustering
to develop streaming k-means clustering for stream data; those can be
found at [1] and [2]. I will commit new changes to my repo today, including
the basic implementation of streaming k-means clustering. Thank you.
regards,
Mahesh.
[1] http://spark.apache.org/docs/latest/streaming-programming-guide.html
[2] http://spark.apache.org/docs/latest/mllib-clustering.html
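
For reference, a minimal Java sketch of what that looks like with MLlib's
StreamingKMeans is below (the socket source, k=3 and the 3-dimensional
feature vectors are placeholder assumptions, not the final CEP integration):

import org.apache.spark.SparkConf;
import org.apache.spark.mllib.clustering.StreamingKMeans;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingKMeansSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("StreamingKMeansSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

        // Placeholder source: whitespace-separated numeric features, one data point per line.
        JavaDStream<Vector> points = jssc.socketTextStream("localhost", 9999).map(line -> {
            String[] parts = line.trim().split("\\s+");
            double[] values = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                values[i] = Double.parseDouble(parts[i]);
            }
            return Vectors.dense(values);
        });

        StreamingKMeans model = new StreamingKMeans()
                .setK(3)                        // number of clusters (assumed)
                .setDecayFactor(1.0)            // 1.0 = weight all batches equally
                .setRandomCenters(3, 0.0, 42L); // feature dimension, initial weight, seed

        model.trainOn(points);                  // update cluster centers on every micro-batch
        model.predictOn(points).print();        // print the cluster assignment of each point

        jssc.start();
        jssc.awaitTermination();
    }
}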


Re: [Dev] Can we use puppet scripts in wso2/puppet-modules for CEP for a distributed deployment ?

2016-06-04 Thread Sajith Ravindra
Hi Imesh/Isuru,

I submitted the pull request [1] some time back, and it implements all 3
deployment patterns of CEP (standalone, HA and distributed). Please let us
know if we can provide any further assistance on this. It would be great to
get the distributed deployment scripts into the wso2 repo as well, because
doing a distributed deployment manually requires some effort compared to the
other two patterns.


[1] -
https://github.com/wso2/puppet-modules/commit/85e652d086d5b48971b3425ec12562c13c9d4a9b

Thanks
*,Sajith Ravindra*
Senior Software Engineer
WSO2 Inc.; http://wso2.com
lean.enterprise.middleware

mobile: +94 77 2273550
blog: http://sajithr.blogspot.com/


On Sat, Jun 4, 2016 at 10:21 AM, Isuru Haththotuwa  wrote:

> Hi Sajith,
>
> On Sat, Jun 4, 2016 at 10:18 AM, Imesh Gunaratne  wrote:
>
>> Hi Sajith,
>>
>> On Sat, Jun 4, 2016 at 12:31 AM, Sajith Ravindra 
>> wrote:
>>
>>> Hi Imesh,
>>>
>>> It seems we still have not added the capability to deploy CEP in Storm-
>>> based distributed mode using the puppet scripts in [1].
>>>
>>
>> Yes, currently Storm-based distributed deployment automation for CEP is
>> not there in [1]. If the CEP team can contribute to that, it would be great!
>>
> Adding to what Imesh said, we currently have only two profiles defined for
> CEP: presenter+mgt and worker. We would need to improve these as we go on.
>
>>
>> Thanks
>>
>>> Am I missing something here, or are we still in the process of merging
>>> the changes? Is there anything that needs to be done from the CEP team to
>>> get the distributed deployment related changes merged into [1]?
>>>
>>>
>>> [1] - https://github.com/wso2/puppet-modules
>>>
>>> Thanks
>>> *,Sajith Ravindra*
>>> Senior Software Engineer
>>> WSO2 Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> mobile: +94 77 2273550
>>> blog: http://sajithr.blogspot.com/
>>> 
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: https://medium.com/@imesh TW: @imesh
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* *
>
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Why YAML configuration files in C5 has extension .yml instead of .yaml?

2016-06-04 Thread Isuru Perera
Should our code accept both ".yml" and ".yaml" as valid extensions?

I'm currently using the ".yml" extension for the configuration file in the
next Metrics release. Shall I change the extension to ".yaml"?

Metrics 2.0.0 is based on Carbon 5.1.0. In Carbon, the configuration is in
"carbon.yml", so I think there is an inconsistency in using "metrics.yaml".
Shall we switch to ".yaml" once Carbon uses the ".yaml" extension?

On Fri, Jun 3, 2016 at 2:49 PM, Afkham Azeez  wrote:

> Let's use yaml as the extension.
>
> On Fri, Jun 3, 2016 at 2:37 PM, SajithAR Ariyarathna 
> wrote:
>
>> Hi All,
>>
>> Is there any reason behind $subject. As per YAML.org FAQ '.yaml' is the
>> recommended extension for YAML files [1].
>>
>> [1] http://www.yaml.org/faq.html
>>
>> Thanks.
>> --
>> Sajith Janaprasad Ariyarathna
>> Software Engineer; WSO2, Inc.;  http://wso2.com/
>>
>
>
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * *
> *email: **az...@wso2.com* 
> * cell: +94 77 3320919 <%2B94%2077%203320919>blog: *
> *http://blog.afkham.org* 
> *twitter: **http://twitter.com/afkham_azeez*
> 
> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
> *
>
> *Lean . Enterprise . Middleware*
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Isuru Perera
Associate Technical Lead | WSO2, Inc. | http://wso2.com/
Lean . Enterprise . Middleware

about.me/chrishantha
Contact: +IsuruPereraWSO2 
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Requesting the parameters for throttling policy configurations in api-manager.xml

2016-06-04 Thread Nuwan Dias
Why would we have to check whether there are conditions which require query
params? Or headers? I think we should publish them anyway. Does it cause a
performance impact to send additional data to CEP? I don't think getting
that data from the request has an overhead.

Thanks,
NuwanD.

On Sat, Jun 4, 2016 at 10:40 AM, Harsha Kumara  wrote:

> Hi Ushani,
>
> We added that property because sometimes users may not need to publish the
> query params to the CEP, as they don't have policies associated with them.
> Going through the policies applied to a particular API to determine whether
> it has query param conditions, and then publishing the query params to the
> CEP, is expensive. So we have added those configurations.
>
>
> On Sat, Jun 4, 2016 at 7:46 AM, Ushani Balasooriya 
> wrote:
>
>> Additionally, why do we need to control EnableQueryParamConditions via
>> api-manager.xml when we have an on/off switch for each conditional
>> header in the Advanced throttling configuration UI?
>> On Jun 4, 2016 7:24 AM, "Ushani Balasooriya"  wrote:
>>
>>> Hi Amila,
>>>
>>> So these configurations will be applicable across all tenants?
>>>
>>> Also, I think it would be better if we could hide Conditional groups when
>>> EnableHeaderConditions is set to false, or make them disabled. Likewise,
>>> would it be possible to have UI-level improvements to notify the admin?
>>> Just a thought.
>>>
>> Yes this would be a good improvement to have in the UI.
>
>> Thanks,
>>> On Jun 3, 2016 11:00 PM, "Amila De Silva"  wrote:
>>>
 Hi Ushani,

 For the new throttling to work, you first have to
 set EnableAdvanceThrottling to true. When EnableHeaderConditions is set to
 true, all the headers in the incoming message will be published to the CEP.
 Similarly, setting EnableJWTClaimConditions and EnableQueryParamConditions
 to true publishes the JWT claims and the query parameters coming with the
 request to the CEP. In the latest pack, spike arrest is only enabled
 through the UI, so there's no config element for that.
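
 For reference, the relevant api-manager.xml fragment would look roughly
 like this (the element names are the ones above; their exact placement
 inside the file is assumed here, and the values are only illustrative):

 <ThrottlingConfigurations>
     <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
     <EnableHeaderConditions>true</EnableHeaderConditions>
     <EnableJWTClaimConditions>false</EnableJWTClaimConditions>
     <EnableQueryParamConditions>false</EnableQueryParamConditions>
 </ThrottlingConfigurations>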

 On Fri, Jun 3, 2016 at 6:35 PM, Ushani Balasooriya 
 wrote:

> Hi APIM Team,
>
> It would be highly appreciated, if you can let us know the parameters
> that need to be enabled in api-manager.xml when working with throttling
> since the documents are not ready yet.
>
> E.g., Spike arrest policy parameter, Query param parameter.
>
>
> Thanks and regards,
> --
> *Ushani Balasooriya*
> Senior Software Engineer - QA;
> WSO2 Inc; http://www.wso2.com/.
>
>
> --
> You received this message because you are subscribed to the Google
> Groups "WSO2 Documentation Group" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to documentation+unsubscr...@wso2.com.
> For more options, visit https://groups.google.com/a/wso2.com/d/optout.
>



 --
 *Amila De Silva*

 WSO2 Inc.
 mobile :(+94) 775119302


>
>
> --
> Harsha Kumara
> Software Engineer, WSO2 Inc.
> Mobile: +94775505618
> Blog:harshcreationz.blogspot.com
>



-- 
Nuwan Dias

Technical Lead - WSO2, Inc. http://wso2.com
email : nuw...@wso2.com
Phone : +94 777 775 729
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Gmail Connector] Error connecting to gmail connector

2016-06-04 Thread Maheeka Jayasuriya
Thanks Hariprasath for taking the time to help resolve the issue. The
mentioned approach is now working.

Maheeka Jayasuriya
Senior Software Engineer
Mobile : +9450661

On Sat, Jun 4, 2016 at 10:49 AM, Hariprasath Thanarajah <
haripras...@wso2.com> wrote:

> Hi Maheeka,
>
> If you are trying to get the refresh token and access token from the OAuth
> playground, you should refresh the refreshToken in the OAuth playground
> only, because the OAuth playground sets the redirect_uri to https%3A%2F%
> 2Fdevelopers.google.com%2Foauthplayground, which is not your app's
> redirect_uri. So after one hour, if you try any method, it gives the error.
> It's better to use the URL below [1] to get the code and then use [2] to
> get the refresh token.
>
> [1] - https://accounts.google.com/o/oauth2/auth?redirect_uri=<your-redirect-uri>&response_type=code&client_id=<your-client-id>&scope=
> https://mail.google.com/+https://www.googleapis.com/auth/gmail.compose+https://www.googleapis.com/auth/gmail.insert+https://www.googleapis.com/auth/gmail.labels+https://www.googleapis.com/auth/gmail.modify+https://www.googleapis.com/auth/gmail.readonly+https://www.googleapis.com/auth/gmail.send&approval_prompt=force&access_type=offline
>
> [2] - I have attached below
>
>
> From [2] you can get the refresh_token, which you can use in the init
> call. Then you won't get the error.
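>
> In case the attachment [2] doesn't come through, the token-exchange request
> is the standard Google OAuth2 one, roughly like this (a sketch; substitute
> your own code, client ID/secret and redirect URI):
>
> curl -X POST https://www.googleapis.com/oauth2/v3/token \
>   -d "code=<authorization-code-from-[1]>" \
>   -d "client_id=<your-client-id>" \
>   -d "client_secret=<your-client-secret>" \
>   -d "redirect_uri=<your-redirect-uri>" \
>   -d "grant_type=authorization_code"
>
> The JSON response contains the access_token and, since access_type=offline
> was requested in [1], the refresh_token.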
>
> On Sat, Jun 4, 2016 at 8:09 AM, Malaka Silva  wrote:
>
>> Yes, this should be the way. Just use the init method before each call.
>>
>> On Sat, Jun 4, 2016 at 8:00 AM, Shakila Sivagnanarajah 
>> wrote:
>>
>>> Hi Maheeka,
>>>
>>> Since refreshing the access token is automated in the connector, you can
>>> use the following init configuration.
>>>
>>> <gmail.init>
>>>     <refreshToken>{$ctx:refreshToken}</refreshToken>
>>>     <clientId>{$ctx:clientId}</clientId>
>>>     <clientSecret>{$ctx:clientSecret}</clientSecret>
>>>     <accessTokenRegistryPath>{$ctx:accessTokenRegistryPath}</accessTokenRegistryPath>
>>>     <accessToken>{$ctx:accessToken}</accessToken>
>>>     <apiUrl>{$ctx:apiUrl}</apiUrl>
>>>     <userId>{$ctx:userId}</userId>
>>> </gmail.init>
>>> <gmail.sendMail>
>>>     <to>{$ctx:to}</to>
>>>     <subject>{$ctx:subject}</subject>
>>>     <from>{$ctx:from}</from>
>>>     <messageBody>{$ctx:messageBody}</messageBody>
>>>     <cc>{$ctx:cc}</cc>
>>>     <bcc>{$ctx:bcc}</bcc>
>>> </gmail.sendMail>
>>>
>>> Thanks
>>>
>>> On Sat, Jun 4, 2016 at 7:40 AM, Shakila Sivagnanarajah >> > wrote:
>>>
 Hi Maheeka,

 I just tested it, and it is working fine. It seems the access token is
 not set in your call.

 Please try with this configuration.

 
 <gmail.getAccessTokenFromRefreshToken>
     <refreshToken>{$ctx:refreshToken}</refreshToken>
     <clientId>{$ctx:clientId}</clientId>
     <clientSecret>{$ctx:clientSecret}</clientSecret>
     <grantType>{$ctx:grantType}</grantType>
 </gmail.getAccessTokenFromRefreshToken>
 <gmail.init>
     <apiUrl>{$ctx:apiUrl}</apiUrl>
     <userId>{$ctx:userId}</userId>
 </gmail.init>
 <gmail.sendMail>
     <to>{$ctx:to}</to>
     <subject>{$ctx:subject}</subject>
     <from>{$ctx:from}</from>
     <messageBody>{$ctx:messageBody}</messageBody>
     <cc>{$ctx:cc}</cc>
     <bcc>{$ctx:bcc}</bcc>
 </gmail.sendMail>


 Thanks

 On Sat, Jun 4, 2016 at 7:33 AM, Malaka Silva  wrote:

> Looping Hariprasath.
>
> On Sat, Jun 4, 2016 at 7:25 AM, Shakila Sivagnanarajah <
> shak...@wso2.com> wrote:
>
>> Hi Maheeka,
>>
>> I will check and update you
>>
>> Thanks
>>
>> On Sat, Jun 4, 2016 at 1:05 AM, Maheeka Jayasuriya 
>> wrote:
>>
>>> Hi Shakila/Malaka,
>>>
>>> I am getting the following errors when using the latest Gmail
>>> connector from the connector store. I am getting the clientId and
>>> clientSecret from the app, and the refresh token and access token from
>>> the playground app. I used the apiUrl https://www.googleapis.com/gmail.
>>>
>>> Am I doing any configuration wrong?
>>>
>>> [2016-06-04 00:04:44,784] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "POST //v1/users/johndoeintcl...@gmail.com/messages/send
>>> HTTP/1.1[\r][\n]"
>>> [2016-06-04 00:04:44,784] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "X-Frame-Options: SAMEORIGIN[\r][\n]"
>>> [2016-06-04 00:04:44,784] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "Authorization: Bearer [\r][\n]"
>>> [2016-06-04 00:04:44,785] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "Alt-Svc: quic=":443"; ma=2592000;
>>> v="34,33,32,31,30,29,28,27,26,25"[\r][\n]"
>>> [2016-06-04 00:04:44,785] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "Vary: X-Origin[\r][\n]"
>>> [2016-06-04 00:04:44,785] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "X-XSS-Protection: 1; mode=block[\r][\n]"
>>> [2016-06-04 00:04:44,785] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "Expires: Fri, 03 Jun 2016 18:34:43 GMT[\r][\n]"
>>> [2016-06-04 00:04:44,785] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "Alternate-Protocol: 443:quic[\r][\n]"
>>> [2016-06-04 00:04:44,785] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "Content-Type: application/json; charset=UTF-8[\r][\n]"
>>> [2016-06-04 00:04:44,786] DEBUG - wire HTTPS-Sender I/O dispatcher-3
>>> << "Accept-Ranges: none[\r][\n]"
>>> [2016-06-04 00:04:44,786] DEBUG - wire