Re: [Dev] About Edge Analytics Service-Mobile IOT

2015-08-25 Thread Lasantha Fernando
Hi Lakini,

I earlier assumed that you would be using the latest Siddhi-3.0.0 which has
the concept of different ExecutionPlanRuntimes. But looking at the code, it
seems you are using Siddhi-2.1.0.wso2v1. In that case, my explanation about
ExecutionPlanRuntime would be incorrect since Siddhi-2.1.0 did not have
ExecutionPlanRuntimes. For Siddhi-2.1.0,

1. Multi-threading when sending events would work. But Siddhi itself would
not funnel the events coming from different threads into its own threads and
hand the work over to them; i.e. the thread that sends the event also
performs the processing and delivers the output event (unless Siddhi windows
were involved). The Siddhi engine itself was not multithreaded, but it can
handle events coming in from multiple threads. It is thread-safe.
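
For example, something like the following (just a rough sketch in the Siddhi 2.x
style; package and method names are from memory, so treat them as approximate):

    import org.wso2.siddhi.core.SiddhiManager;
    import org.wso2.siddhi.core.stream.input.InputHandler;

    SiddhiManager siddhiManager = new SiddhiManager();
    siddhiManager.defineStream("define stream sensorStream (id string, value double)");
    siddhiManager.addQuery(
            "from sensorStream[value > 50] select id, value insert into alertStream;");

    // Two application threads feed the same stream. Each calling thread runs the
    // query itself and delivers the output (no handover to Siddhi-owned threads),
    // but concurrent send() calls are safe.
    final InputHandler inputHandler = siddhiManager.getInputHandler("sensorStream");
    Runnable sender = new Runnable() {
        public void run() {
            for (int i = 0; i < 100; i++) {
                try {
                    inputHandler.send(new Object[]{"sensor-" + i, Math.random() * 100});
                } catch (Exception e) {
                    // ignored in this sketch
                }
            }
        }
    };
    new Thread(sender).start();
    new Thread(sender).start();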

2. When modifying queries using multiple threads, queries were tracked by a
query id, which is a generated UUID. So if you added two similar queries to
the same SiddhiManager instance, they would be added as two different
queries. To edit a query, you would have to remove the old query first
using its query id and then add the modified query.
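
Roughly like this (again Siddhi 2.x style; names approximate):

    // addQuery() returns the generated query id; "editing" a rule therefore means
    // removing the old query by that id and adding the modified one.
    String queryId = siddhiManager.addQuery(
            "from sensorStream[value > 50] select id, value insert into alertStream;");

    // later, when the rule changes:
    siddhiManager.removeQuery(queryId);
    queryId = siddhiManager.addQuery(
            "from sensorStream[value > 20] select id, value insert into alertStream;");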

But looking at your code, it seems the two client apps are calling the
IEdgeAnalyticsService.Stub.getService() method and in that method you are
creating a new CEP instance each time, which in turn creates a new Siddhi
manager. Given that, your two client apps would deploy their queries on two
different Siddhi instances.

However, if there is only one instance of the service itself, calling the
RemoteService.getService() method simply creates a new CEP instance and
assigns that to the instance variable. i.e. the second call would overwrite
the reference to the CEP instance created by the first call. So definitely
when you call the RemoteService.sendData() method, it would send those
events to the CEP/Siddhi instance that was created by the second call, even
if it is the first client that is sending the events.

I think the problem is that a new instance is simply assigned to the 'cep'
variable in the com.example.lakini.edgeanalyticsservice.EdgeAnalyticsService
IEdgeAnalyticsService.Stub.getService() method.
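
One way to avoid that is to stop re-assigning the field on every call, e.g. keep
one CEP instance per client (only a sketch; the names below are hypothetical and
just mirror what I would expect in your service):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch: one CEP/Siddhi instance per client key instead of a
    // single 'cep' field that is overwritten by the most recent getService() call.
    private final Map<String, CEP> cepInstances = new ConcurrentHashMap<String, CEP>();

    private CEP getCepFor(String clientId) {
        CEP instance = cepInstances.get(clientId);
        if (instance == null) {
            instance = new CEP();                  // your existing CEP wrapper
            cepInstances.put(clientId, instance);  // putIfAbsent() would be stricter
        }
        return instance;
    }

sendData() would then look up the instance for the calling client instead of
always using whatever was created last.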

Can you please correct this issue and see whether events are being
processed correctly?

Thanks,
Lasantha

On 25 August 2015 at 18:53, Lakini Senanayaka lak...@wso2.com wrote:

 Hi all,

 As per your request, I have shared my source code. You can find my sample
 project [1] from this link, and I have pointed out the places [2][3][4] to
 make it easier for you to understand.

 [1] - sample edgeAnalyticsService
 https://github.com/Lakini/EdgeAnalyticServiceSample/
 [2] - EdgeAnalyticsService
 https://github.com/Lakini/EdgeAnalyticServiceSample/blob/master/EdgeAnalyticsService/app/src/main/java/com/example/lakini/edgeanalyticsservice/CEP.java#L37
 [3] - ClientApp1
 https://github.com/Lakini/EdgeAnalyticServiceSample/blob/master/ClientApp/app/src/main/java/com/example/lakini/edgeanalyticsservice/MainActivity.java#L58
 [4] - ClientApp2
 https://github.com/Lakini/EdgeAnalyticServiceSample/blob/master/Second/app/src/main/java/com/example/lakini/edgeanalyticsservice/MainActivity.java#L58

 Thank you.

 On Tue, Aug 25, 2015 at 4:30 PM, Lasantha Fernando lasan...@wso2.com
 wrote:

 Hi Srinath, Lakini,

 Siddhi is thread safe when sending events. You can send events via
 multiple threads without any issue.

 When changing queries via multiple threads, the execution plan runtime of
 the previous query would have to be shutdown manually. Looking at the
 current code, it seems that an ExecutionPlanRuntime instance would be
 created after the query is parsed and that would be put to a
 ConcurrentHashMap with the execution plan name (specified via annotations)
 as the key. So if you do not shutdown that old runtime, it will still keep
 running. But if you shutdown the execution plan from the client code, you
 should not encounter any issue. @Suho, please correct if my understanding
 is incorrect on this.
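
 In 3.0.0 terms, the replacement would look roughly like this (only a sketch;
 the execution plan strings are illustrative):

     SiddhiManager siddhiManager = new SiddhiManager();
     ExecutionPlanRuntime oldRuntime =
             siddhiManager.createExecutionPlanRuntime(oldExecutionPlan);
     oldRuntime.start();
     // ...
     // Shut the old runtime down explicitly before deploying the modified plan;
     // otherwise it keeps running alongside the new one.
     oldRuntime.shutdown();
     ExecutionPlanRuntime newRuntime =
             siddhiManager.createExecutionPlanRuntime(modifiedExecutionPlan);
     newRuntime.start();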

 @Lakini, if you are having two execution plans with different names and
 still it seems that one query is overriding the other, that is wrong. Can
 you share the execution plan syntax you are using for the two queries and
 also point to the code? Based on the scenario described above, it seems
 there is a conflict on the execution plan names or the output streams of
 the execution plans.

 Thanks,
 Lasantha

 On 25 August 2015 at 16:09, Srinath Perera srin...@wso2.com wrote:

 Please point to your code.

 CEP team, please help.

 On Tue, Aug 25, 2015 at 4:00 PM, Lakini Senanayaka lak...@wso2.com
 wrote:

 Srinath, thank you for your prompt reply.

 My Client and Service run separately in 2 different processes. I have
 just developed 2 simple apps which send random data, and I controlled it
 using a timer. I have set 2 different rules for the 2 apps: one client has a
 rule like "notify when the value > 50" and the second client has a rule like
 "notify when the value > 20". When running the two clients at the same time,
 the first client 

Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Viraj Senevirathne
Hi all,

I have tested VFS inbound and transport use cases for file, ftp and sftp
protocols. No issues found.
[X] Stable - go ahead and release

Thanks.

On Tue, Aug 25, 2015 at 8:28 PM, Nadeeshaan Gunasinghe nadeesh...@wso2.com
wrote:

 Hi,

 I tested JMS use cases and MSMP fail over use cases. No issues found.

 [X] Stable - go ahead and release

 Regards.


 *Nadeeshaan Gunasinghe*
 Software Engineer, WSO2 Inc. http://wso2.com
 +94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe
 http://www.facebook.com/nadeeshaan.gunasinghe
 http://lk.linkedin.com/in/nadeeshaan  http://twitter.com/Nadeeshaan
 http://nadeeshaan.blogspot.com/
 Get a signature like this: Click here!
 http://ws-promos.appspot.com/r?rdata=eyJydXJsIjogImh0dHA6Ly93d3cud2lzZXN0YW1wLmNvbS9lbWFpbC1pbnN0YWxsP3dzX25jaWQ9NjcyMjk0MDA4JnV0bV9zb3VyY2U9ZXh0ZW5zaW9uJnV0bV9tZWRpdW09ZW1haWwmdXRtX2NhbXBhaWduPXByb21vXzU3MzI1Njg1NDg3Njk3OTIiLCAiZSI6ICI1NzMyNTY4NTQ4NzY5NzkyIn0=

 On Tue, Aug 25, 2015 at 6:01 PM, Jagath Sisirakumara Ariyarathne 
 jaga...@wso2.com wrote:

 Hi,

 I executed performance tests for basic scenarios with this pack. No
 issues observed.

 [X] Stable - go ahead and release

 Thanks.

 On Mon, Aug 24, 2015 at 10:27 PM, Chanaka Fernando chana...@wso2.com
 wrote:

 Hi Devs,

 WSO2 ESB 4.9.0 RC1 Release Vote

 This release fixes the following issues:
 https://wso2.org/jira/browse/ESBJAVA-4093?filter=12363

 Please download ESB 490 RC1 and test the functionality and vote. Vote
 will be open for 72 hours or as needed.

 *Source & binary distribution files:*

 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/org/wso2/esb/wso2esb/4.9.0-RC1/

 *Maven staging repository:*
 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/

 *The tag to be voted upon :*
 https://github.com/wso2/product-esb/tree/esb-parent-4.9.0-RC1


 [ ] Broken - do not release (explain why)
 [ ] Stable - go ahead and release


 Thanks and Regards,
 ~ WSO2 ESB Team ~

 --
 --
 Chanaka Fernando
 Senior Technical Lead
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 mobile: +94 773337238
 Blog : http://soatutorials.blogspot.com
 LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
 Twitter:https://twitter.com/chanakaudaya
 Wordpress:http://chanakaudaya.wordpress.com




 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Jagath Ariyarathne
 Technical Lead
 WSO2 Inc.  http://wso2.com/
 Email: jaga...@wso2.com
 Mob  : +94 77 386 7048


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev



 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
Viraj Senevirathne
Software Engineer; WSO2, Inc.

Mobile : +94 71 818 4742
Email : vir...@wso2.com thili...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Issues in setting up distributed mode deployment for CEP

2015-08-25 Thread Sajith Ravindra
To add to Lasantha's comment on point 2

2) After starting the STORM cluster, CEP managers and CEP worker node, we
couldn't see anything on the STORM UI under topology. Does that mean the
cluster is not set up correctly?

You can verify whether the cluster has been set up correctly by logging in to
the Storm UI. You should be able to see the information of nimbus and all
supervisors listed. Also, please verify that the IP address of nimbus has been
added to repository/conf/cep/storm/storm.yaml of the CEP manager node.

Thanks,
*Sajith Ravindra*
Senior Software Engineer
WSO2 Inc.; http://wso2.com
lean.enterprise.middleware

mobile: +94 77 2273550
blog: http://sajithr.blogspot.com/
http://lk.linkedin.com/pub/shani-ranasinghe/34/111/ab

On Tue, Aug 25, 2015 at 8:45 PM, Lasantha Fernando lasan...@wso2.com
wrote:

 Hi Suho,

 Sure. Will do.

 @Sashika, can you share the event-processor.xml and the axis2.xml files of
 your configuration?

 Also please find my comments inline regarding the queries raised above.



 On 25 August 2015 at 19:34, Sriskandarajah Suhothayan s...@wso2.com
 wrote:

 Lasantha & Sajith, please look into it; looks like there is a config
 issue.

 And also add this info to the docs so it will not be misleading.

 Suho

 On Tue, Aug 25, 2015 at 8:55 AM, Sashika Wijesinghe sash...@wso2.com
 wrote:

 Hi All,

 We have set up the distributed mode deployment for CEP using two CEP
 managers and one CEP worker, but we still have some issues to clarify.

 1) When the worker node is started, we couldn't find any record in the
 manager logs indicating whether the worker connected to the CEP manager or
 not. How can we verify whether the worker is successfully connected to
 Storm and the CEP managers?


 You should be able to go to the execution plan listing page where a
 'Distributed Deployment Status' is shown, which will give you an idea about
 the status of the Storm topology as well as the inflow/outflow connections.
 The CEP workers will simply talk to the CEP managers to get information
 about Storm receiver spouts within the topology. Then the workers
 themselves will establish the connections. You should see logs similar to
 the ones below when the worker connects to Storm.

 [2015-08-25 20:38:58,221]  INFO
 {org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
 -  [-1234:ExecutionPlan:CEPPublisher] Initializing storm output event
 listener
 [2015-08-25 20:38:58,266]  INFO
 {org.wso2.carbon.event.processor.manager.commons.transport.server.TCPEventServer}
 -  EventServer starting event listener on port 15001
 [2015-08-25 20:38:58,270]  INFO
 {org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
 -  [-1234:ExecutionPlan:CEPPublisher] Registering output stream listener
 for Siddhi stream : highCountStream
 [2015-08-25 20:38:58,275]  INFO
 {org.wso2.carbon.event.stream.core.internal.EventJunction} -  Producer
 added to the junction. Stream:highCountStream:1.0.0
 [2015-08-25 20:38:58,271]  INFO
 {org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
 -  [-1234:ExecutionPlan:CEPPublisher] Registering CEP publisher for
 10.100.0.75:15001
 [2015-08-25 20:38:58,299]  INFO
 {org.wso2.carbon.event.processor.common.util.AsyncEventPublisher} -
  [-1234:ExecutionPlan:CEPReceiver] Requesting a StormReceiver for
 10.100.0.75
 [2015-08-25 20:38:58,301]  INFO
 {org.wso2.carbon.event.stream.core.internal.EventJunction} -  Consumer
 added to the junction. Stream:org.wso2.test.stream:1.0.0
 [2015-08-25 20:38:58,314]  INFO
 {org.wso2.carbon.event.processor.core.EventProcessorDeployer} -  Execution
 plan is deployed successfully and in active state  : ExecutionPlan
 [2015-08-25 20:38:58,352]  INFO
 {org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
 -  [-1234:ExecutionPlan:CEPPublisher] Successfully registered CEP publisher
 for 10.100.0.75:15001
 [2015-08-25 20:38:58,352]  INFO
 {org.wso2.carbon.event.processor.common.util.AsyncEventPublisher} -
  [-1234:ExecutionPlan:CEPReceiver] Retrieved StormReceiver at
 10.100.0.75:15000 from storm manager service at 10.100.0.75:8904
 [2015-08-25 20:38:58,377]  INFO
 {org.wso2.carbon.event.processor.manager.commons.transport.client.TCPEventPublisher}
 -  Connecting to 10.100.0.75:15000
 [2015-08-25 20:38:58,384]  INFO
 {org.wso2.carbon.event.processor.common.util.AsyncEventPublisher} -
  [-1234:ExecutionPlan:CEPReceiver] Connected to StormReceiver at
 10.100.0.75:15000 for the Stream(s) testStream,




 2) After starting the STORM cluster, CEP managers and CEP worker node,
 we couldn't see anything on the STORM UI under topology. Does that mean
 the cluster is not set up correctly?


 A storm topology would not appear unless it has been submitted by creating
 an execution plan. Did you create an execution plan? If so, can you share
 the execution plan?



 3) How are multiple CEP workers used in a production setup?
 As we understood, CEP workers handle all the communication and
 processing within CEP. If there are 

Re: [Dev] Pull Request

2015-08-25 Thread Manuranga Perera
Hi Sameera,
It was not possible to override the asset manager and return a custom object,
so I added this fix for Dhanuka. Was this intentional? Do you see any issues
with this fix?

On Tue, Aug 25, 2015 at 8:23 PM, Dhanuka Ranasinghe dhan...@wso2.com
wrote:

 Hi,

 Please merge PR [1].

 [1]  https://github.com/wso2/carbon-store/pull/170

 Cheers,
 Dhanuka
 *Dhanuka Ranasinghe*

 Senior Software Engineer
 WSO2 Inc. ; http://wso2.com
 lean . enterprise . middleware

 phone : +94 715381915




-- 
With regards,
*Manu*ranga Perera.

phone : 071 7 70 20 50
mail : m...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Accessing original file name from Smooks ESB Mediator

2015-08-25 Thread Shiva Balachandran
Hi Rudy,

AFAIK, we cannot use that property inside Smooks, because we only pass the
stream to the Smooks mediator.

Thank you.

Regards,
Shiva Balachandran

On Tue, Aug 25, 2015 at 8:38 PM, Rudy Hilado rudy.hil...@centrihealth.com
wrote:

 Shiva,

 Yes, I can access the file name and put it into the property
 “Processing_File_Name”, but how can I access that property from within the
 Smooks mediator configuration (smooks-resource-list), particularly from
 within the fileNamePattern element?



 Thanks,

 - Rudy





 *From:* Shiva Balachandran [mailto:sh...@wso2.com]
 *Sent:* Tuesday, August 25, 2015 10:37 AM
 *To:* Rudy Hilado
 *Cc:* WSO2 Developers' List
 *Subject:* Re: [Dev] Accessing original file name from Smooks ESB Mediator



 Have you tried setting a property to capture the file name that's being
 processed?

 <property name="Processing_File_Name"
 expression="get-property('transport','FILE_NAME')" type="STRING"/>



 Check this link
 https://shivabalachandran.wordpress.com/2015/08/06/quick-note-5-wso2-findout-access-lookup-identify-the-name-of-the-current-file-being-processed-by-the-vfs-listener/
 .





 On Mon, Aug 24, 2015 at 11:21 PM, Rudy Hilado 
 rudy.hil...@centrihealth.com wrote:

 Hello (I thought I’d post this again to see if anyone has some insight)



 In the WSO2 ESB, we are trying to process very large files and I’ve
 reviewed the online examples that explain the Smooks mediator should be
 used to split the larger file into smaller files. Using the smooks
 configuration file:outputStream, the output file name is defined in the
 file:fileNamePattern. All the examples show the file name pattern derived
 using data from the record being processed.



 Is it possible to access the source file name?



 I need the split files to have a name that includes the original file name.



 (I’m beginning to suspect that, oddly, it’s not possible to access the
 original file name within Smooks)



 Thanks for any help,

 - Rudy


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev





 --

 Shiva Balachandran

 Software Engineer

 WSO2 Inc.


 Mobile - +94 774445788

 Blog - https://shivabalachandran.wordpress.com/




-- 
Shiva Balachandran
Software Engineer
WSO2 Inc.

Mobile - +94 774445788
Blog - https://shivabalachandran.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Accessing original file name from Smooks ESB Mediator

2015-08-25 Thread Rudy Hilado
Ok. Thank you.

 

Is there another WSO2 ESB implementation pattern I can use to split a large (>
1 GB) file into smaller files without using the Smooks mediator? (where I can 
avoid loading the entire file into memory and also have the split files be 
named based on the original file name)

 

Regards,

- Rudy 

 

From: Shiva Balachandran [mailto:sh...@wso2.com] 
Sent: Tuesday, August 25, 2015 11:12 AM
To: Rudy Hilado
Cc: WSO2 Developers' List
Subject: Re: [Dev] Accessing original file name from Smooks ESB Mediator

 

Hi Rudy,

AFAIK, we cannot use that property inside Smooks, because we only pass the stream
to the Smooks mediator.

Thank you.

Regards,
Shiva Balachandran

 

On Tue, Aug 25, 2015 at 8:38 PM, Rudy Hilado rudy.hil...@centrihealth.com 
wrote:

Shiva,

Yes, I can access the file name and put it into the property 
“Processing_File_Name”, but how can I access that property from within the 
Smooks mediator configuration (smooks-resource-list), particularly from 
within the fileNamePattern element?

 

Thanks,

- Rudy

 

 

From: Shiva Balachandran [mailto:sh...@wso2.com] 
Sent: Tuesday, August 25, 2015 10:37 AM
To: Rudy Hilado
Cc: WSO2 Developers' List
Subject: Re: [Dev] Accessing original file name from Smooks ESB Mediator

 

Have you tried setting a property to capture the file name that's being 
processed?

<property name="Processing_File_Name"
expression="get-property('transport','FILE_NAME')" type="STRING"/>

 

Check this link 
https://shivabalachandran.wordpress.com/2015/08/06/quick-note-5-wso2-findout-access-lookup-identify-the-name-of-the-current-file-being-processed-by-the-vfs-listener/
 .

 

 

On Mon, Aug 24, 2015 at 11:21 PM, Rudy Hilado rudy.hil...@centrihealth.com 
wrote:

Hello (I thought I’d post this again to see if anyone has some insight)

 

In the WSO2 ESB, we are trying to process very large files and I’ve reviewed 
the online examples that explain the Smooks mediator should be used to split 
the larger file into smaller files. Using the smooks configuration 
file:outputStream, the output file name is defined in the 
file:fileNamePattern. All the examples show the file name pattern derived 
using data from the record being processed.

 

Is it possible to access the source file name?

 

I need the split files to have a name that includes the original file name.

 

(I’m beginning to suspect that, oddly, it’s not possible to access the original 
file name within Smooks)

 

Thanks for any help,

- Rudy


___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev





 

-- 

Shiva Balachandran

Software Engineer

WSO2 Inc.


Mobile - +94 774445788 tel:%2B94%20774445788 

Blog - https://shivabalachandran.wordpress.com/





 

-- 

Shiva Balachandran

Software Engineer

WSO2 Inc.


Mobile - +94 774445788

Blog - https://shivabalachandran.wordpress.com/

___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Issues in setting up distributed mode deployment for CEP

2015-08-25 Thread Sriskandarajah Suhothayan
Lasantha & Sajith, please look into it; looks like there is a config issue.

And also add this info to the docs so it will not be misleading.

Suho

On Tue, Aug 25, 2015 at 8:55 AM, Sashika Wijesinghe sash...@wso2.com
wrote:

 Hi All,

 We have set up the distributed mode deployment for CEP using two CEP
 managers and one CEP worker, but we still have some issues to clarify.

 1) When the worker node is started, we couldn't find any record in the manager
 logs indicating whether the worker connected to the CEP manager or not. How can
 we verify whether the worker is successfully connected to Storm and the CEP
 managers?

 2) After starting the STORM cluster, CEP managers and CEP worker node, we
 couldn't see anything on the STORM UI under topology. Does that mean the
 cluster is not set up correctly?

 3) How are multiple CEP workers used in a production setup?
 As we understood, CEP workers handle all the communication and processing
 within CEP. If there are two worker nodes, how do those two workers manage the
 work among them? Are we going to keep one worker node in active mode and
 the other node in passive mode?


 Thanks and Regards,
 --

 *Sashika Wijesinghe*
 Software Engineer - QA Team
 Mobile : +94 (0) 774537487
 sash...@wso2.com




-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc.* http://wso2.com
lean . enterprise . middleware

cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan | linked-in: http://lk.linkedin.com/in/suhothayan
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Accessing original file name from Smooks ESB Mediator

2015-08-25 Thread Shiva Balachandran
Have you tried setting a property to capture the file name that's being
processed?

<property name="Processing_File_Name"
expression="get-property('transport','FILE_NAME')" type="STRING"/>

Check this link
https://shivabalachandran.wordpress.com/2015/08/06/quick-note-5-wso2-findout-access-lookup-identify-the-name-of-the-current-file-being-processed-by-the-vfs-listener/
.


On Mon, Aug 24, 2015 at 11:21 PM, Rudy Hilado rudy.hil...@centrihealth.com
wrote:

 Hello (I thought I’d post this again to see if anyone has some insight)



 In the WSO2 ESB, we are trying to process very large files and I’ve
 reviewed the online examples that explain the Smooks mediator should be
 used to split the larger file into smaller files. Using the smooks
 configuration file:outputStream, the output file name is defined in the
 file:fileNamePattern. All the examples show the file name pattern derived
 using data from the record being processed.



 Is it possible to access the source file name?



 I need the split files to have a name that includes the original file name.



 (I’m beginning to suspect that, oddly, it’s not possible to access the
 original file name within Smooks)



 Thanks for any help,

 - Rudy

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
Shiva Balachandran
Software Engineer
WSO2 Inc.

Mobile - +94 774445788
Blog - https://shivabalachandran.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Wrangler Integration

2015-08-25 Thread Supun Sethunga
You can integrate it into [1] by adding a new component,
org.wso2.carbon.ml.wrangler. Each component is a Carbon component.

Please follow the naming conventions used in the other components for
package names, etc.

[1] https://github.com/wso2/carbon-ml/tree/master/components/ml

Thanks,
Supun

On Tue, Aug 25, 2015 at 7:33 AM, Danula Eranjith hmdanu...@gmail.com
wrote:

 Hi all,

 Can you suggest where I should be ideally integrating these files[1]
 https://github.com/danula/wso2-ml-wrangler-integration/tree/master/src
 in ML.

 [1] -
 https://github.com/danula/wso2-ml-wrangler-integration/tree/master/src

 Thanks,
 Danula




-- 
*Supun Sethunga*
Software Engineer
WSO2, Inc.
http://wso2.com/
lean | enterprise | middleware
Mobile : +94 716546324
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Nadeeshaan Gunasinghe
Hi,

I tested JMS use cases and MSMP fail over use cases. No issues found.

[X] Stable - go ahead and release

Regards.


*Nadeeshaan Gunasinghe*
Software Engineer, WSO2 Inc. http://wso2.com
+94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe
http://www.facebook.com/nadeeshaan.gunasinghe
http://lk.linkedin.com/in/nadeeshaan  http://twitter.com/Nadeeshaan
http://nadeeshaan.blogspot.com/
Get a signature like this: Click here!
http://ws-promos.appspot.com/r?rdata=eyJydXJsIjogImh0dHA6Ly93d3cud2lzZXN0YW1wLmNvbS9lbWFpbC1pbnN0YWxsP3dzX25jaWQ9NjcyMjk0MDA4JnV0bV9zb3VyY2U9ZXh0ZW5zaW9uJnV0bV9tZWRpdW09ZW1haWwmdXRtX2NhbXBhaWduPXByb21vXzU3MzI1Njg1NDg3Njk3OTIiLCAiZSI6ICI1NzMyNTY4NTQ4NzY5NzkyIn0=

On Tue, Aug 25, 2015 at 6:01 PM, Jagath Sisirakumara Ariyarathne 
jaga...@wso2.com wrote:

 Hi,

 I executed performance tests for basic scenarios with this pack. No issues
 observed.

 [X] Stable - go ahead and release

 Thanks.

 On Mon, Aug 24, 2015 at 10:27 PM, Chanaka Fernando chana...@wso2.com
 wrote:

 Hi Devs,

 WSO2 ESB 4.9.0 RC1 Release Vote

 This release fixes the following issues:
 https://wso2.org/jira/browse/ESBJAVA-4093?filter=12363

 Please download ESB 490 RC1 and test the functionality and vote. Vote
 will be open for 72 hours or as needed.

 *Source & binary distribution files:*

 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/org/wso2/esb/wso2esb/4.9.0-RC1/

 *Maven staging repository:*
 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/

 *The tag to be voted upon :*
 https://github.com/wso2/product-esb/tree/esb-parent-4.9.0-RC1


 [ ] Broken - do not release (explain why)
 [ ] Stable - go ahead and release


 Thanks and Regards,
 ~ WSO2 ESB Team ~

 --
 --
 Chanaka Fernando
 Senior Technical Lead
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 mobile: +94 773337238
 Blog : http://soatutorials.blogspot.com
 LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
 Twitter:https://twitter.com/chanakaudaya
 Wordpress:http://chanakaudaya.wordpress.com




 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Jagath Ariyarathne
 Technical Lead
 WSO2 Inc.  http://wso2.com/
 Email: jaga...@wso2.com
 Mob  : +94 77 386 7048


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev


___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Accessing original file name from Smooks ESB Mediator

2015-08-25 Thread Rudy Hilado
Shiva,

Yes, I can access the file name and put it into the property 
“Processing_File_Name”, but how can I access that property from within the 
Smooks mediator configuration (smooks-resource-list), particularly from 
within the fileNamePattern element?

 

Thanks,

- Rudy

 

 

From: Shiva Balachandran [mailto:sh...@wso2.com] 
Sent: Tuesday, August 25, 2015 10:37 AM
To: Rudy Hilado
Cc: WSO2 Developers' List
Subject: Re: [Dev] Accessing original file name from Smooks ESB Mediator

 

Have you tried setting a property to capture the file name that's being 
processed?

<property name="Processing_File_Name"
expression="get-property('transport','FILE_NAME')" type="STRING"/>

 

Check this link 
https://shivabalachandran.wordpress.com/2015/08/06/quick-note-5-wso2-findout-access-lookup-identify-the-name-of-the-current-file-being-processed-by-the-vfs-listener/
 .

 

 

On Mon, Aug 24, 2015 at 11:21 PM, Rudy Hilado rudy.hil...@centrihealth.com 
wrote:

Hello (I thought I’d post this again to see if anyone has some insight)

 

In the WSO2 ESB, we are trying to process very large files and I’ve reviewed 
the online examples that explain the Smooks mediator should be used to split 
the larger file into smaller files. Using the smooks configuration 
file:outputStream, the output file name is defined in the 
file:fileNamePattern. All the examples show the file name pattern derived 
using data from the record being processed.

 

Is it possible to access the source file name?

 

I need the split files to have a name that includes the original file name.

 

(I’m beginning to suspect that, oddly, it’s not possible to access the original 
file name within Smooks)

 

Thanks for any help,

- Rudy


___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev





 

-- 

Shiva Balachandran

Software Engineer

WSO2 Inc.


Mobile - +94 774445788

Blog - https://shivabalachandran.wordpress.com/

___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Append escape character in json payload -ESB RC1 pack

2015-08-25 Thread Malaka Silva
Hi Senduran,

Can this be due to [1]?

@Keerthika/Kesavan - I guess we are using payloadfactory in connector
templates in both cases?

[1] https://wso2.org/jira/browse/ESBJAVA-3995

On Tue, Aug 25, 2015 at 3:36 PM, Keerthika Mahendralingam 
keerth...@wso2.com wrote:

 Escape characters appended in the payload are causing the error in other
 connectors as well when using the ESB RC1 pack.
 Please find the log for the Marketo connector:

 [2015-08-25 15:29:03,433] DEBUG - wire  POST
 /services/createAndUpdateLeads HTTP/1.1[\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Host: esb.wso2.com:8280
 [\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Connection: keep-alive[\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Content-Length: 441[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Cache-Control: no-cache[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Origin:
 chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  User-Agent: Mozilla/5.0
 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko)
 Chrome/44.0.2403.155 Safari/537.36[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Content-Type:
 application/json[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept: */*[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept-Encoding: gzip,
 deflate[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept-Language:
 en-US,en;q=0.8[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Cookie:
 __ar_v4=LOY73IEDGREW5ABOLPMGGI%3A20141012%3A10%7C3HZ3H74ZSZHCLIQQVDL7FW%3A20141012%3A10%7CCZDVOXTUDJEN7DJK45GUJ6%3A20141012%3A10;
 __utmx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:;
 __utmxx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:1432283552:15552000;
 __utma=62040490.99968054.1406221425.1425200186.1433095074.3;
 __utmz=62040490.1433095074.3.3.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided);
 CAKEPHP=0ueo79km0ue216pp32326bibh0;
 BIGipServersjgcp-restapi_https=4262002698.47873.;
 SESS73db9c87576ccc08af28bf2520013ef7=63232bff43562fb56e9d80ca59e15bef;
 csrftoken=6a5d43410cf85c55adc8c5d99bfa62b5;
 __auc=8de3c8c814769541720b3574cf2; _ga=GA1.2.99968054.1406221425[\r][\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  [\r][\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  {[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  marketoInstanceURL:
 https://667-LPB-079.mktorest.com,[\n];

 [2015-08-25 15:29:03,436] DEBUG - wire  
 clientSecret:Vgo6rzIiJyPOzTcefP3Zr56tK2hv8fJd,[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  
 clientId:01f22a42-0f05-4e7f-b675-550b6de66f91,[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire [0x9]input:[[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire [0x9]{[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  email:
 te...@gmail.com,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 firstName:test1,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 lastName:test1[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  },[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  {[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  email:
 te...@gmail.com,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 firstName:test2,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 lastName:test1[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  }[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  ][\n]

 [2015-08-25 15:29:03,438] DEBUG - wire  }

 [2015-08-25 15:29:03,450] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.action. Returning empty result. Error invalid
 path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.lookupField. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.asyncProcessing. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.partitionName. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:06,066] DEBUG - wire  GET
 /identity/oauth/token?grant_type=client_credentialsclient_id=01f22a42-0f05-4e7f-b675-550b6de66f91client_secret=Vgo6rzIiJyPOzTcefP3Zr56tK2hv8fJd
 HTTP/1.1[\r][\n]

 [2015-08-25 15:29:06,066] DEBUG - wire  Accept-Language:
 en-US,en;q=0.8[\r][\n]

 [2015-08-25 15:29:06,066] DEBUG - wire  Cookie:
 __ar_v4=LOY73IEDGREW5ABOLPMGGI%3A20141012%3A10%7C3HZ3H74ZSZHCLIQQVDL7FW%3A20141012%3A10%7CCZDVOXTUDJEN7DJK45GUJ6%3A20141012%3A10;
 __utmx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:;
 __utmxx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:1432283552:15552000;
 __utma=62040490.99968054.1406221425.1425200186.1433095074.3;
 __utmz=62040490.1433095074.3.3.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided);
 CAKEPHP=0ueo79km0ue216pp32326bibh0;
 BIGipServersjgcp-restapi_https=4262002698.47873.;
 SESS73db9c87576ccc08af28bf2520013ef7=63232bff43562fb56e9d80ca59e15bef;
 csrftoken=6a5d43410cf85c55adc8c5d99bfa62b5;
 

Re: [Dev] Pull Request

2015-08-25 Thread Dhanuka Ranasinghe
slightly change the fix

*Dhanuka Ranasinghe*

Senior Software Engineer
WSO2 Inc. ; http://wso2.com
lean . enterprise . middleware

phone : +94 715381915

On Tue, Aug 25, 2015 at 8:27 PM, Manuranga Perera m...@wso2.com wrote:

 Hi Sameera,
 It was not possible to override asset manager and return a custom object
 so I added this fix for Dhanuka. was this intentional do you see any issues
 with this fix.

 On Tue, Aug 25, 2015 at 8:23 PM, Dhanuka Ranasinghe dhan...@wso2.com
 wrote:

 Hi,

 Please merge PR [1].

 [1]  https://github.com/wso2/carbon-store/pull/170

 Cheers,
 Dhanuka
 *Dhanuka Ranasinghe*

 Senior Software Engineer
 WSO2 Inc. ; http://wso2.com
 lean . enterprise . middleware

 phone : +94 715381915




 --
 With regards,
 *Manu*ranga Perera.

 phone : 071 7 70 20 50
 mail : m...@wso2.com

___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [PPaaS] WSO2 Private PaaS build failure

2015-08-25 Thread Thanuja Uruththirakodeeswaran
Hi Devs,

I've built *Stratos 4.1.x* branch successfully and after that tried to
build PPaaS using Stratos - 4.1.2.
I'm getting the following error when building PPaaS:

[INFO] --- carbon-p2-plugin:1.5.4:p2-repo-gen (2-p2-repo-generation) @
ppaas-p2-profile-gen ---
[ERROR] Error occured when processing the Feature Artifact:
org.apache.stratos:org.apache.stratos.tenant.activity.server.feature:4.1.2
org.apache.maven.plugin.MojoExecutionException: Error occured when
processing the Feature Artifact:
org.apache.stratos:org.apache.stratos.tenant.activity.server.feature:4.1.2
at
org.wso2.maven.p2.RepositoryGenMojo.getProcessedFeatureArtifacts(RepositoryGenMojo.java:322)
at
org.wso2.maven.p2.RepositoryGenMojo.createRepo(RepositoryGenMojo.java:197)
at org.wso2.maven.p2.RepositoryGenMojo.execute(RepositoryGenMojo.java:191)
at
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: ERROR
at
org.wso2.maven.p2.generate.utils.MavenUtils.getResolvedArtifact(MavenUtils.java:60)
at
org.wso2.maven.p2.RepositoryGenMojo.getProcessedFeatureArtifacts(RepositoryGenMojo.java:319)
... 23 more
Caused by: org.apache.maven.artifact.resolver.ArtifactNotFoundException:
Failure to find
org.apache.stratos:org.apache.stratos.tenant.activity.server.feature:zip:4.1.2
in http://maven.wso2.org/nexus/content/repositories/releases/ was cached in
the local repository, resolution will not be reattempted until the update
interval of wso2.releases has elapsed or updates are forced

Try downloading the file manually from the project website.

Then, install it using the command:
mvn install:install-file -DgroupId=org.apache.stratos
-DartifactId=org.apache.stratos.tenant.activity.server.feature
-Dversion=4.1.2 -Dpackaging=zip -Dfile=/path/to/file

Alternatively, if you host your own repository you can deploy the file
there:
mvn deploy:deploy-file -DgroupId=org.apache.stratos
-DartifactId=org.apache.stratos.tenant.activity.server.feature
-Dversion=4.1.2 -Dpackaging=zip -Dfile=/path/to/file -Durl=[url]
-DrepositoryId=[id]



org.apache.stratos:org.apache.stratos.tenant.activity.server.feature:zip:4.1.2

from the specified remote repositories:
  wso2.releases (http://maven.wso2.org/nexus/content/repositories/releases/,
releases=true, snapshots=true),
  wso2.snapshots (
http://maven.wso2.org/nexus/content/repositories/snapshots/,
releases=false, snapshots=true),
  wso2-nexus (http://maven.wso2.org/nexus/content/groups/wso2-public/,
releases=true, snapshots=true),
  central (http://repo.maven.apache.org/maven2, releases=true,
snapshots=false)

at
org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:219)
at
org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:157)
at
org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:525)
at
org.wso2.maven.p2.generate.utils.MavenUtils.getResolvedArtifact(MavenUtils.java:56)
... 24 more
Caused by: org.sonatype.aether.resolution.ArtifactResolutionException:
Failure to find
org.apache.stratos:org.apache.stratos.tenant.activity.server.feature:zip:4.1.2
in http://maven.wso2.org/nexus/content/repositories/releases/ was cached in
the local repository, resolution will not be reattempted until the update
interval of wso2.releases has elapsed or 

[Dev] Engaging an axis2 module in ESB 490

2015-08-25 Thread Hasitha Aravinda
Hi team,

Since we are removing the QoS UI feature from the next release, we need a way
to engage an axis2 module for a proxy. AFAIK there is no Synapse config or
service parameter to engage a module with a proxy at deployment time (like a
handler for an API).

How are we going to provide this function in ESB 490?

Thanks,
Hasitha.
-- 
--
Hasitha Aravinda,
Senior Software Engineer,
WSO2 Inc.
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Wrangler Integration

2015-08-25 Thread Danula Eranjith
Thanks Supun

On Tue, Aug 25, 2015 at 7:25 PM, Supun Sethunga sup...@wso2.com wrote:

 You can integrate it into [1] by adding a new component,
 org.wso2.carbon.ml.wrangler. Each component is a Carbon component.

 Please follow the naming conventions used in the other components for
 package names, etc.

 [1] https://github.com/wso2/carbon-ml/tree/master/components/ml

 Thanks,
 Supun

 On Tue, Aug 25, 2015 at 7:33 AM, Danula Eranjith hmdanu...@gmail.com
 wrote:

 Hi all,

 Can you suggest where I should be ideally integrating these files[1]
 https://github.com/danula/wso2-ml-wrangler-integration/tree/master/src
 in ML.

 [1] -
 https://github.com/danula/wso2-ml-wrangler-integration/tree/master/src

 Thanks,
 Danula




 --
 *Supun Sethunga*
 Software Engineer
 WSO2, Inc.
 http://wso2.com/
 lean | enterprise | middleware
 Mobile : +94 716546324

___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Append escape character in json payload -ESB RC1 pack

2015-08-25 Thread Malaka Silva
Hi Kesavan/Keerthika,

Please share your config to reproduce this issue.

Maybe you can extract the transformation from the connector and give a sample
to simulate the issue.

On Tue, Aug 25, 2015 at 7:18 PM, Senduran Balasubramaniyam 
sendu...@wso2.com wrote:

 Hi Kesavan,

 I guess that you are passing the JSON as a string to the payload mediator,
 so the payload mediator escapes the double quotes, treating them as special
 characters.
 Could you please share your proxy service so that I can recommend the
 correct way.

 Regards
 Senduran

 On Tue, Aug 25, 2015 at 5:43 AM, Kesavan Yogarajah kesav...@wso2.com
 wrote:

 Yes Malaka.

 Kesavan Yogarajah
 Associate Software Engineer
 Mobile :+94 (0) 779 758021
 kesav...@wso2.com
 WSO2, Inc.
 lean . enterprise . middleware

 On Tue, Aug 25, 2015 at 3:42 PM, Malaka Silva mal...@wso2.com wrote:

 Hi Senduran,

 Can this be due to [1]?

 @Keerthika/Kesavan - I guess we are using payloadfactory in connector
 templates in both cases?

 [1] https://wso2.org/jira/browse/ESBJAVA-3995

 On Tue, Aug 25, 2015 at 3:36 PM, Keerthika Mahendralingam 
 keerth...@wso2.com wrote:

 Escape characters appended in the payload are causing the error in other
 connectors as well when using the ESB RC1 pack.
 Please find the log for the Marketo connector:

 [2015-08-25 15:29:03,433] DEBUG - wire  POST
 /services/createAndUpdateLeads HTTP/1.1[\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Host: esb.wso2.com:8280
 [\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Connection:
 keep-alive[\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Content-Length: 441[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Cache-Control:
 no-cache[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Origin:
 chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  User-Agent: Mozilla/5.0
 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko)
 Chrome/44.0.2403.155 Safari/537.36[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Content-Type:
 application/json[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept: */*[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept-Encoding: gzip,
 deflate[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept-Language:
 en-US,en;q=0.8[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Cookie:
 __ar_v4=LOY73IEDGREW5ABOLPMGGI%3A20141012%3A10%7C3HZ3H74ZSZHCLIQQVDL7FW%3A20141012%3A10%7CCZDVOXTUDJEN7DJK45GUJ6%3A20141012%3A10;
 __utmx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:;
 __utmxx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:1432283552:15552000;
 __utma=62040490.99968054.1406221425.1425200186.1433095074.3;
 __utmz=62040490.1433095074.3.3.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided);
 CAKEPHP=0ueo79km0ue216pp32326bibh0;
 BIGipServersjgcp-restapi_https=4262002698.47873.;
 SESS73db9c87576ccc08af28bf2520013ef7=63232bff43562fb56e9d80ca59e15bef;
 csrftoken=6a5d43410cf85c55adc8c5d99bfa62b5;
 __auc=8de3c8c814769541720b3574cf2; _ga=GA1.2.99968054.1406221425[\r][\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  [\r][\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  {[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  marketoInstanceURL:
 https://667-LPB-079.mktorest.com,[\n];

 [2015-08-25 15:29:03,436] DEBUG - wire  
 clientSecret:Vgo6rzIiJyPOzTcefP3Zr56tK2hv8fJd,[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  
 clientId:01f22a42-0f05-4e7f-b675-550b6de66f91,[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire [0x9]input:[[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire [0x9]{[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  email:
 te...@gmail.com,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 firstName:test1,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 lastName:test1[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  },[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  {[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  email:
 te...@gmail.com,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 firstName:test2,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 lastName:test1[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  }[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  ][\n]

 [2015-08-25 15:29:03,438] DEBUG - wire  }

 [2015-08-25 15:29:03,450] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.action. Returning empty result. Error invalid
 path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.lookupField. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.asyncProcessing. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.partitionName. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:06,066] DEBUG - wire  GET
 

Re: [Dev] Issues in setting up distributed mode deployment for CEP

2015-08-25 Thread Lasantha Fernando
Hi Suho,

Sure. Will do.

@Sashika, can you share the event-processor.xml and the axis2.xml files of
your configuration?

Also please find my comments inline regarding the queries raised above.



On 25 August 2015 at 19:34, Sriskandarajah Suhothayan s...@wso2.com wrote:

 Lasantha & Sajith, please look into it; looks like there is a config issue.

 And also add this info to the docs so it will not be misleading.

 Suho

 On Tue, Aug 25, 2015 at 8:55 AM, Sashika Wijesinghe sash...@wso2.com
 wrote:

 Hi All,

 We have set up the distributed mode deployment for CEP using two CEP
 managers and one CEP worker, but we still have some issues to clarify.

 1) When the worker node is started, we couldn't find any record in the
 manager logs indicating whether the worker connected to the CEP manager or
 not. How can we verify whether the worker is successfully connected to
 Storm and the CEP managers?


You should be able to go to the execution plan listing page where a
'Distributed Deployment Status' is shown, which will give you an idea about
the status of the Storm topology as well as the inflow/outflow connections.
The CEP workers will simply talk to the CEP managers to get information
about Storm receiver spouts within the topology. Then the workers
themselves will establish the connections. You should see logs similar to
the ones below when the worker connects to Storm.

[2015-08-25 20:38:58,221]  INFO
{org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
-  [-1234:ExecutionPlan:CEPPublisher] Initializing storm output event
listener
[2015-08-25 20:38:58,266]  INFO
{org.wso2.carbon.event.processor.manager.commons.transport.server.TCPEventServer}
-  EventServer starting event listener on port 15001
[2015-08-25 20:38:58,270]  INFO
{org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
-  [-1234:ExecutionPlan:CEPPublisher] Registering output stream listener
for Siddhi stream : highCountStream
[2015-08-25 20:38:58,275]  INFO
{org.wso2.carbon.event.stream.core.internal.EventJunction} -  Producer
added to the junction. Stream:highCountStream:1.0.0
[2015-08-25 20:38:58,271]  INFO
{org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
-  [-1234:ExecutionPlan:CEPPublisher] Registering CEP publisher for
10.100.0.75:15001
[2015-08-25 20:38:58,299]  INFO
{org.wso2.carbon.event.processor.common.util.AsyncEventPublisher} -
 [-1234:ExecutionPlan:CEPReceiver] Requesting a StormReceiver for
10.100.0.75
[2015-08-25 20:38:58,301]  INFO
{org.wso2.carbon.event.stream.core.internal.EventJunction} -  Consumer
added to the junction. Stream:org.wso2.test.stream:1.0.0
[2015-08-25 20:38:58,314]  INFO
{org.wso2.carbon.event.processor.core.EventProcessorDeployer} -  Execution
plan is deployed successfully and in active state  : ExecutionPlan
[2015-08-25 20:38:58,352]  INFO
{org.wso2.carbon.event.processor.core.internal.storm.SiddhiStormOutputEventListener}
-  [-1234:ExecutionPlan:CEPPublisher] Successfully registered CEP publisher
for 10.100.0.75:15001
[2015-08-25 20:38:58,352]  INFO
{org.wso2.carbon.event.processor.common.util.AsyncEventPublisher} -
 [-1234:ExecutionPlan:CEPReceiver] Retrieved StormReceiver at
10.100.0.75:15000 from storm manager service at 10.100.0.75:8904
[2015-08-25 20:38:58,377]  INFO
{org.wso2.carbon.event.processor.manager.commons.transport.client.TCPEventPublisher}
-  Connecting to 10.100.0.75:15000
[2015-08-25 20:38:58,384]  INFO
{org.wso2.carbon.event.processor.common.util.AsyncEventPublisher} -
 [-1234:ExecutionPlan:CEPReceiver] Connected to StormReceiver at
10.100.0.75:15000 for the Stream(s) testStream,




 2) After starting the STORM cluster, CEP managers and CEP worker node, we
 couldn't see anything on the STORM UI under topology. Does that mean the
 cluster is not set up correctly?


A storm topology would not appear unless it has been submitted by creating
an execution plan. Did you create an execution plan? If so, can you share
the execution plan?



 3) How are multiple CEP workers used in a production setup?
 As we understood, CEP workers handle all the communication and processing
 within CEP. If there are two worker nodes, how do those two workers manage the
 work among them? Are we going to keep one worker node in active mode and
 the other node in passive mode?


Workers would simply be scaled up as necessary, and the load balancing would
happen from the client side of CEP; i.e., when sending events to CEP, you
can use databridge agents to load-balance across the CEP nodes so the events
are distributed. The workers won't have a special active/passive mode under
normal configuration.

Please let us know if there were any doc issues or misleading configuration
elements that could be improved.

Thanks,
Lasantha




 Thanks and Regards,
 --

 *Sashika Wijesinghe*
 Software Engineer - QA Team
 Mobile : +94 (0) 774537487
 sash...@wso2.com




 --

 *S. Suhothayan*
 Technical Lead & Team Lead of WSO2 Complex Event Processor
 *WSO2 Inc. *http://wso2.com
 * 

[Dev] Can't upload carbon app in cluster mode

2015-08-25 Thread John Hawkins
Hi Folks,
I'm having problems with using an LB in front of two ESBs.

I'm using nginx to LB two ESBs

One of the ESBs is both a worker and a manager. Both nodes are on my
machine - as is everything else (SVN, nginx, etc.).

I have set up mgt.esb.wso2.com in nginx and I can see and log in to the
carbon console OK (admin/admin)
I have also set up esb.wso2.com in nginx and if I try to see that carbon
console (https://esb.wso2.com) I can see it but it fails to log me in as
admin/admin (I'm assuming that is OK too)


My problem comes when I try to upload a new carbon application  using
https://mgt.esb.wso2.com/carbon/carbonapps/app_upload.jsp?region=region1item=apps_add_menu

When I try to upload a car file it fails with 404 not found -
https://mgt.esb.wso2.com/fileupload/carbonapp

I can upload the .car file if I go through https://127.0.0.1:9450 - which
is the port my management console is running on, i.e. avoiding the LB.

Any ideas what's going on here?

many thanks,
John.

John Hawkins
Director: Solutions Architecture
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Pull Request

2015-08-25 Thread Dhanuka Ranasinghe
In asset-api.js --- in the update function, why do we set *asset = null;* in the
catch block?

*Dhanuka Ranasinghe*

Senior Software Engineer
WSO2 Inc. ; http://wso2.com
lean . enterprise . middleware

phone : +94 715381915

On Tue, Aug 25, 2015 at 9:04 PM, Dhanuka Ranasinghe dhan...@wso2.com
wrote:

 slightly change the fix

 *Dhanuka Ranasinghe*

 Senior Software Engineer
 WSO2 Inc. ; http://wso2.com
 lean . enterprise . middleware

 phone : +94 715381915

 On Tue, Aug 25, 2015 at 8:27 PM, Manuranga Perera m...@wso2.com wrote:

 Hi Sameera,
 It was not possible to override asset manager and return a custom object
 so I added this fix for Dhanuka. was this intentional do you see any issues
 with this fix.

 On Tue, Aug 25, 2015 at 8:23 PM, Dhanuka Ranasinghe dhan...@wso2.com
 wrote:

 Hi,

 Please merge PR [1].

 [1]  https://github.com/wso2/carbon-store/pull/170

 Cheers,
 Dhanuka
 *Dhanuka Ranasinghe*

 Senior Software Engineer
 WSO2 Inc. ; http://wso2.com
 lean . enterprise . middleware

 phone : +94 715381915




 --
 With regards,
 *Manu*ranga Perera.

 phone : 071 7 70 20 50
 mail : m...@wso2.com



___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Error when deserializing model summary

2015-08-25 Thread Madawa Soysa
Hi Nirmal,

Not actually. I have added parquet-common, parquet-encoding,
parquet-column, parquet-hadoop as mentioned at
https://github.com/Parquet/parquet-mr. But I still get the following exception
even though the class parquet.hadoop.metadata.CompressionCodecName is present
in parquet-hadoop.jar:

java.lang.ClassNotFoundException:
parquet.hadoop.metadata.CompressionCodecName cannot be found by
spark-sql_2.10_1.4.1.wso2v1

I kept trying with different versions, still the issue is there.
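
For reference, this is essentially the save/load pair being called (a minimal
sketch of the Spark MLlib 1.4.x Java usage; the class and path names here are
only illustrative):

    import org.apache.spark.SparkContext;
    import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;

    public class ModelPersistence {

        // Spark's built-in persistence writes the model's factor RDDs as Parquet
        // files, which is why the parquet.* classes have to resolve inside the
        // OSGi runtime.
        public static void persist(MatrixFactorizationModel model, SparkContext sc,
                                   String path) {
            model.save(sc, path);   // 'path' must not already exist
        }

        public static MatrixFactorizationModel restore(SparkContext sc, String path) {
            return MatrixFactorizationModel.load(sc, path);
        }
    }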

On 26 August 2015 at 08:27, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, hope the issue is resolved now after the instructions given
 offline.

 On Mon, Aug 24, 2015 at 8:34 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 I have tested the model.save() with a simple Java program. It works fine.

 I have noticed that scala-library:2.11.6 is a dependency of
 spark:spark-core_2.11:1.4.1 [1]
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar
 In ML scala version is explicitly specified as 2.10.4. Is there a specific
 reason to use scala 2.10.4? I guess this version incompatibility could be
 the reason for this issue.

 [1] -
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar

 On 24 August 2015 at 10:24, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, seems this is a Spark issue :-( can you try a simple Java
 program and see whether model.save() works?

 On Sat, Aug 22, 2015 at 8:19 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi Nirmal,

 outPath is correct and the directory gets created, but the process
 becomes idle after that. Attached the only part that was written to a file.

 Also the method doesn't throw an exception as well.

 On 21 August 2015 at 21:31, Nirmal Fernando nir...@wso2.com wrote:

 Hi Madawa,

 According to the Spark API [1], outPath shouldn't already exist.

 [1]
 https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala#L200

 On Fri, Aug 21, 2015 at 1:59 PM, Niranda Perera nira...@wso2.com
 wrote:

 I don't think it's correct. Scala version is 2.10.4 even in the mvn
 repo

 On Fri, Aug 21, 2015, 13:46 Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between Scala 
 and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Yes path is valid, I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com
 wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi All,

 There is an issue with serializing Spark's MatrixFactorizationModel
 object. The object contains a huge RDD and as I have read in many 
 blogs,
 this model cannot be serialized as a java object. Therefore when 
 retrieving
 the model I get the same exception as above;

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*

 I have asked this question in Spark mailing lists and they
 recommended me to use the built in save and load functions other 
 than using
 Java serializing.  So I have used following method to persist the 
 model,

 model.save(MLCoreServiceValueHolder.*getInstance()*.getSparkContext().sc(),
 outPath);[1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 Then nothing happens when this line executes. No error is thrown
 as well. Any solution for this?


 Can you print outPath and see whether it's a valid file path?



 [1] -
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 On 16 August 2015 at 18:06, Madawa Soysa madawa...@cse.mrt.ac.lk
  wrote:

 Yes I was able to resolve the issue by removing RDD fields from
 the SummaryModel object as @Mano pointed out. Still I have the same
 exception when retrieving the model. Trying to fix that issue.

 On 14 August 2015 at 10:43, Nirmal Fernando nir...@wso2.com
 wrote:

 Thanks Niranda, this doc is useful.

 On Fri, Aug 14, 2015 at 10:36 AM, Niranda Perera 
 nira...@wso2.com wrote:

 From what I know, OneToOneDependancy come into play when spark
 tries to create the RDD dependency tree.

 Just thought of sharing that. this would be a good resource
 [1] :-)

 [1]
 https://databricks-training.s3.amazonaws.com/slides/advanced-spark-training.pdf

 On Thu, Aug 13, 2015 at 12:09 AM, Nirmal Fernando 
 nir...@wso2.com wrote:

 What is 

Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Nirmal Fernando
Do we need to have the release-note?

On Wed, Aug 26, 2015 at 8:34 AM, Ravindra Ranwala ravin...@wso2.com wrote:

 Hi All,

 I have tested scenarios [1]. No issues found.

 [X] Stable - go ahead and release


 [1]
 https://docs.google.com/document/d/1UpXN3SJvXOzp5UH6eF0lkO_vXTnvko7ruY5VMs9bDDI/edit



 Thanks & Regards,

 On Wed, Aug 26, 2015 at 7:29 AM, Prabath Ariyarathna prabat...@wso2.com
 wrote:

 Hi All.

 I have tested basic security scenarios on the Java7 and Java8. No issues
 found.
 [X] Stable - go ahead and release.

 Thanks.

 On Tue, Aug 25, 2015 at 9:20 PM, Viraj Senevirathne vir...@wso2.com
 wrote:

 Hi all,

 I have tested VFS inbound and transport use cases for file, ftp and sftp
 protocols. No issues found.
 [X] Stable - go ahead and release

 Thanks.

 On Tue, Aug 25, 2015 at 8:28 PM, Nadeeshaan Gunasinghe 
 nadeesh...@wso2.com wrote:

 Hi,

 I tested JMS use cases and MSMP fail over use cases. No issues found.

 [X] Stable - go ahead and release

 Regards.


 *Nadeeshaan Gunasinghe*
 Software Engineer, WSO2 Inc. http://wso2.com
 +94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe
 http://www.facebook.com/nadeeshaan.gunasinghe
 http://lk.linkedin.com/in/nadeeshaan  http://twitter.com/Nadeeshaan
   http://nadeeshaan.blogspot.com/

 On Tue, Aug 25, 2015 at 6:01 PM, Jagath Sisirakumara Ariyarathne 
 jaga...@wso2.com wrote:

 Hi,

 I executed performance tests for basic scenarios with this pack. No
 issues observed.

 [X] Stable - go ahead and release

 Thanks.

 On Mon, Aug 24, 2015 at 10:27 PM, Chanaka Fernando chana...@wso2.com
 wrote:

 Hi Devs,

 WSO2 ESB 4.9.0 RC1 Release Vote

 This release fixes the following issues:
 https://wso2.org/jira/browse/ESBJAVA-4093?filter=12363

 Please download ESB 490 RC1 and test the functionality and vote. Vote
 will be open for 72 hours or as needed.

 *Source & binary distribution files:*

 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/org/wso2/esb/wso2esb/4.9.0-RC1/

 *Maven staging repository:*
 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/

 *The tag to be voted upon :*
 https://github.com/wso2/product-esb/tree/esb-parent-4.9.0-RC1


 [ ] Broken - do not release (explain why)
 [ ] Stable - go ahead and release


 Thanks and Regards,
 ~ WSO2 ESB Team ~

 --
 --
 Chanaka Fernando
 Senior Technical Lead
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 mobile: +94 773337238
 Blog : http://soatutorials.blogspot.com
 LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
 Twitter:https://twitter.com/chanakaudaya
 Wordpress:http://chanakaudaya.wordpress.com




 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Jagath Ariyarathne
 Technical Lead
 WSO2 Inc.  http://wso2.com/
 Email: jaga...@wso2.com
 Mob  : +94 77 386 7048


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev



 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Viraj Senevirathne
 Software Engineer; WSO2, Inc.

 Mobile : +94 71 818 4742
 Email : vir...@wso2.com

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --

 *Prabath Ariyarathna*

 *Associate Technical Lead*

 *WSO2, Inc. *

 *lean . enterprise . middleware *


 *Email: prabat...@wso2.com prabat...@wso2.com*

 *Blog: http://prabu-lk.blogspot.com http://prabu-lk.blogspot.com*

 *Flicker : https://www.flickr.com/photos/47759189@N08
 https://www.flickr.com/photos/47759189@N08*

 *Mobile: +94 77 699 4730 *






 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Ravindra Ranwala
 Software Engineer
 WSO2, Inc: http://wso2.com
 Mobile: +94714198770


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 

Thanks & regards,
Nirmal

Team Lead - WSO2 Machine Learner
Associate Technical Lead - Data Technologies Team, WSO2 Inc.
Mobile: +94715779733
Blog: http://nirmalfdo.blogspot.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Install ES features into IoTServer

2015-08-25 Thread Ayyoob Hamza
Hi,
As per the discussion on moving the web app layer to CDMF, we tried to
install the features, but ES requires carbon common version 4.4.4, whereas
CDMF is on carbon common version 4.4.0, which is needed for api.mgt 1.4.0. So
is it okay if we fork and continue the development with the latest versions,
so that it won't be a bottleneck for the EMM release?

Thanks
*Ayyoob Hamza*
*Software Engineer*
WSO2 Inc.; http://wso2.com
email: ayy...@wso2.com cell: +94 77 1681010
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Connectors] Storing connector endpoint in registry

2015-08-25 Thread Malaka Silva
Hi Bhathiya,

You can store the complete init config in the registry and refer to it in
your salesforce methods.

Eg:

<proxy xmlns="http://ws.apache.org/ns/synapse" name="sfdc"
       transports="https,http" statistics="disable" trace="disable"
       startOnLoad="true">
   <target>
      <inSequence>
         <salesforce.query configKey="conf:/repository/esb/sfdcconf">
            <batchSize>200</batchSize>
            <queryString>select id,name from Account</queryString>
         </salesforce.query>
         <respond/>
      </inSequence>
   </target>
   <description/>
</proxy>

*/_system/config/repository/esb/sfdcconf*
<salesforce.init xmlns="http://ws.apache.org/ns/synapse">
   <username>xx...@gmail.com</username>
   <password>xxx</password>
   <loginUrl>https://login.salesforce.com/services/Soap/u/34.0</loginUrl>
</salesforce.init>

On Tue, Aug 25, 2015 at 11:27 AM, Bhathiya Jayasekara bhath...@wso2.com
wrote:

 Hi Dushan/Malaka,

 So, is it not possible to achieve the $subject? Any workaround at least?

 Thanks,
 Bhathiya

 On Mon, Aug 24, 2015 at 4:34 PM, Bhathiya Jayasekara bhath...@wso2.com
 wrote:

 Hi Dushan,

 The initial service URL[1] can be different from account to account.
 Hence the requirement.

 Thanks,
 Bhathiya

 On Mon, Aug 24, 2015 at 4:26 PM, Dushan Abeyruwan dus...@wso2.com
 wrote:

 Hi,
 Why do you need it like that?
 The SF connector we have uses the initial service URL at init [1], and upon
 successful authentication all the other operations (other than
 salesforce.init) use the dynamic URL returned by the authentication request
 (extracted from the response payload) [2].

 init

 <property expression="//ns:loginResponse/ns:result/ns:serverUrl/text()"
           name="salesforce.serviceUrl" scope="operation" type="STRING"
           xmlns:ns="urn:partner.soap.sforce.com"/>

 <property name="salesforce.login.done" scope="operation"
           type="STRING" value="true"/>


 [1] https://login.salesforce.com/services/Soap/u/27.0
 [2]
 https://github.com/wso2-dev/esb-connectors/blob/master/salesforce/1.0.0/src/main/resources/salesforce/init.xml

 Cheers,
 Dushan

 On Mon, Aug 24, 2015 at 4:12 PM, Bhathiya Jayasekara bhath...@wso2.com
 wrote:

 Hi all,

 Is there a way to store Salesforce endpoint in registry, and refer to
 that in init block of connector?

 Thanks,
 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




 --
 Dushan Abeyruwan | Technical Lead
 Technical Support-Bloomington US
 PMC Member Apache Synpase
 WSO2 Inc. http://wso2.com/
 Blog:*http://www.dushantech.com/ http://www.dushantech.com/*
 Mobile:(001)812-391-7441




 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




-- 

Best Regards,

Malaka Silva
Senior Tech Lead
M: +94 777 219 791
Tel : 94 11 214 5345
Fax :94 11 2145300
Skype : malaka.sampath.silva
LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
Blog : http://mrmalakasilva.blogspot.com/

WSO2, Inc.
lean . enterprise . middleware
http://www.wso2.com/
http://www.wso2.com/about/team/malaka-silva/
http://wso2.com/about/team/malaka-silva/

Save a tree - Conserve nature & Save the world for your future. Print this
email only if it is absolutely necessary.
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [AppFactory][Discuss] URL construction for single tenant applications

2015-08-25 Thread Dimuthu Leelarathne
Hi Anuruddha,

We need to check how domain mapping works on top of this as well.

thanks,
dimuthu


On Mon, Aug 24, 2015 at 8:42 PM, Imesh Gunaratne im...@wso2.com wrote:

 Hi Anuruddha,

 Yes, the only way to avoid exposing the generated Kubernetes service ports
 is to use a load balancer in front. +1 for the proposal. In the cartridge
 definition we can define required proxy ports for each port mapping to be
 exposed by the load balancer.

 On Mon, Aug 24, 2015 at 6:15 PM, Anuruddha Premalal anurud...@wso2.com
 wrote:

 Hi All,

 I have attached a possible solution diagram for single tenant URL flow.

 For dev setups we can make use of nginx for load balancing and BIND9 for DNS
 mapping. WDYT?

  AppFactory single tenant URL flow
 https://docs.google.com/a/wso2.com/drawings/d/1kbXH9gudC4pdvM86TqPX5C0NmM0T5ghKvf7nvTCVOdM/edit?usp=drive_web
 Regards,
 Anuruddha.

 On Mon, Aug 24, 2015 at 4:39 PM, Anuruddha Premalal anurud...@wso2.com
 wrote:

 Hi All,

 With the new Stratos and Kubernetes, AppFactory has single tenant
 application support. With this feature users will be able to spin up a new
 instance for single-tenant-supported apps and their versions.

 This thread is to discuss how we should expose the deployed application
 URL to the user.

 The single tenant instances are spawned as Docker containers. Access to a
 container differs by its port; the IP is a fixed factor depending on the
 number of Kubernetes nodes. One option is to expose the URL with the port;
 however, for production instances we have to provide a URL without a port.

 Appreciate your input.

 Regards,
 --
 *Anuruddha Premalal*
 Software Eng. | WSO2 Inc.
 Mobile : +94710461070
 Web site : www.regilandvalley.com




 --
 *Anuruddha Premalal*
 Software Eng. | WSO2 Inc.
 Mobile : +94710461070
 Web site : www.regilandvalley.com




 --
 *Imesh Gunaratne*
 Senior Technical Lead
 WSO2 Inc: http://wso2.com
 T: +94 11 214 5345 M: +94 77 374 2057
 W: http://imesh.gunaratne.org
 Lean . Enterprise . Middleware




-- 
Dimuthu Leelarathne
Director

WSO2, Inc. (http://wso2.com)
email: dimut...@wso2.com
Mobile : 0773661935

Lean . Enterprise . Middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Error when deserializing model summary

2015-08-25 Thread Nirmal Fernando
So it's coming from 2 bundles? Can we get rid of 1 and see?

On Wed, Aug 26, 2015 at 8:40 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
wrote:

 Gives following output.

 parquet.hadoop.metadata; version=0.0.0parquet_common_1.6.0_1.0.0 [321]
 parquet.hadoop.metadata; version=0.0.0parquet_hadoop_1.6.0_1.0.0 [323]

 On 26 August 2015 at 08:35, Nirmal Fernando nir...@wso2.com wrote:

 You added them to repository/components/lib right? Can you start the
 server in OSGi mode (./wso2server.sh -DosgiConsole ) and issue following
 command in OSGi console;

 osgi p parquet.hadoop.metadata

 On Wed, Aug 26, 2015 at 8:32 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi Nirmal,

 Not actually. I have added parquet-common, parquet-encoding,
 parquet-column, parquet-hadoop as mentioned at
 https://github.com/Parquet/parquet-mr
 . But still get the following exception even though the
 class parquet.hadoop.metadata.CompressionCodecName is there in the
 parquet-hadoop.jar

 java.lang.ClassNotFoundException:
 parquet.hadoop.metadata.CompressionCodecName cannot be found by
 spark-sql_2.10_1.4.1.wso2v1

 I kept trying with different versions, still the issue is there.

 On 26 August 2015 at 08:27, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, hope the issue is resolved now after the instructions given
 offline.

 On Mon, Aug 24, 2015 at 8:34 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 I have tested the model.save() with a simple Java program. It works
 fine.

 I have noticed that scala-library:2.11.6 is a dependency of
 spark:spark-core_2.11:1.4.1 [1]
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar
 In ML scala version is explicitly specified as 2.10.4. Is there a specific
 reason to use scala 2.10.4? I guess this version incompatibility could be
 the reason for this issue.

 [1] -
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar

 On 24 August 2015 at 10:24, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, seems this is a Spark issue :-( can you try a simple Java
 program and see whether model.save() works?

 On Sat, Aug 22, 2015 at 8:19 AM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi Nirmal,

 outPath is correct and the directory gets created, but the process
 becomes idle after that. Attached the only part that was written to a 
 file.

 Also the method doesn't throw an exception as well.

 On 21 August 2015 at 21:31, Nirmal Fernando nir...@wso2.com wrote:

 Hi Madawa,

 According to Spark API [1], outPath shouldn't be exist.

 [1]
 https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala#L200

 On Fri, Aug 21, 2015 at 1:59 PM, Niranda Perera nira...@wso2.com
 wrote:

 I don't think it's correct. Scala version is 2.10.4 even in the
 mvn repo

 On Fri, Aug 21, 2015, 13:46 Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between 
 Scala and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa madawa...@cse.mrt.ac.lk
  wrote:

 Yes path is valid, I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com
 wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi All,

 There an issue with serializing Spark's
 MatrixFactorizationModel object. The object contains a huge RDD 
 and as I
 have read in many blogs, this model cannot be serialized as a 
 java object.
 Therefore when retrieving the model I get the same exception as 
 above;

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*

 I have asked this question in Spark mailing lists and they
 recommended me to use the built in save and load functions other 
 than using
 Java serializing.  So I have used following method to persist the 
 model,

 model.save(MLCoreServiceValueHolder.*getInstance()*.getSparkContext().sc(),
 outPath);[1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 Then nothing happens when this line executes. No error is
 thrown as well. Any solution for this?


 Can you print outPath and see whether it's a valid file path?



 [1] -
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 On 16 August 2015 at 18:06, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Yes I was able to resolve the issue by removing RDD fields

Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Nirmal Fernando
We have a small issue when installing ML features into ESB. The issue log is
at [1]. The reason is the following:

215 INSTALLED   org.wso2.carbon.databridge.commons.thrift_4.4.4


osgi> diag 215

reference:file:../plugins/org.wso2.carbon.databridge.commons.thrift_4.4.4.jar
[215]

  Direct constraints which are unresolved:

Missing imported package org.slf4j_[1.6.1,1.7.0)


Since org.slf4j version 1.7.x gets exported after installing the Spark
features, this import range can no longer be resolved.


@Suho/Anjana; do we really need to import org.slf4j from this bundle?

[1]

[2015-08-26 03:08:00,002] ERROR - Axis2ServiceRegistry Error while adding
services from bundle : org.wso2.carbon.event.sink.config-4.4.5.SNAPSHOT

java.lang.NoClassDefFoundError:
org/wso2/carbon/databridge/agent/thrift/lb/LoadBalancingDataPublisher

at java.lang.Class.getDeclaredMethods0(Native Method)

at java.lang.Class.privateGetDeclaredMethods(Class.java:2625)

at java.lang.Class.privateGetPublicMethods(Class.java:2743)

at java.lang.Class.getMethods(Class.java:1480)

at java.beans.Introspector.getPublicDeclaredMethods(Introspector.java:1280)

at java.beans.Introspector.getTargetMethodInfo(Introspector.java:1141)

at java.beans.Introspector.getBeanInfo(Introspector.java:416)

at java.beans.Introspector.getBeanInfo(Introspector.java:252)

at java.beans.Introspector.getBeanInfo(Introspector.java:214)

at
org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchema(DefaultSchemaGenerator.java:634)

at
org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchemaTypeforNameCommon(DefaultSchemaGenerator.java:1142)

at
org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchemaForType(DefaultSchemaGenerator.java:1132)

at
org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.processMethods(DefaultSchemaGenerator.java:431)

at
org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchema(DefaultSchemaGenerator.java:274)

at org.apache.axis2.deployment.util.Utils.fillAxisService(Utils.java:468)

at
org.apache.axis2.deployment.ServiceBuilder.populateService(ServiceBuilder.java:397)

at
org.apache.axis2.deployment.ServiceGroupBuilder.populateServiceGroup(ServiceGroupBuilder.java:101)

at
org.wso2.carbon.utils.deployment.Axis2ServiceRegistry.addServices(Axis2ServiceRegistry.java:221)

at
org.wso2.carbon.utils.deployment.Axis2ServiceRegistry.register(Axis2ServiceRegistry.java:102)

at
org.wso2.carbon.utils.deployment.Axis2ServiceRegistry.register(Axis2ServiceRegistry.java:89)

at
org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:473)

at
org.wso2.carbon.core.init.CarbonServerManager.start(CarbonServerManager.java:219)

at
org.wso2.carbon.core.internal.CarbonCoreServiceComponent.activate(CarbonCoreServiceComponent.java:91)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at
org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)

at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)

at
org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)

at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)

at
org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)

at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)

at
org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)

at
org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)

at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)

at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)

at
org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)

at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)

at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)

at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)

at
org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)

at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)

at
org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)

at
org.eclipse.equinox.http.servlet.internal.Activator.addProxyServlet(Activator.java:60)

at
org.eclipse.equinox.http.servlet.internal.ProxyServlet.init(ProxyServlet.java:40)

at

[Dev] API Manager 1.9.1 build 25-08-2015

2015-08-25 Thread Nuwan Dias
Hi,

$subject can be found from [1].

[1] - http://builder1.us1.wso2.org/~apim/25-08-2015/

Thanks,
NuwanD.

-- 
Nuwan Dias

Technical Lead - WSO2, Inc. http://wso2.com
email : nuw...@wso2.com
Phone : +94 777 775 729
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Append escape character in json payload -ESB RC1 pack

2015-08-25 Thread Senduran Balasubramaniyam
Hi Kesavan,

I guess you are passing the JSON as a string to the payload mediator, so the
payload mediator escapes the double quotes, treating them as special
characters.
Could you please share your proxy service so that I can recommend the correct
way to do this?

Regards
Senduran

On Tue, Aug 25, 2015 at 5:43 AM, Kesavan Yogarajah kesav...@wso2.com
wrote:

 Yes Malaka.

 Kesavan Yogarajah
 Associate Software Engineer
 Mobile :+94 (0) 779 758021
 kesav...@wso2.com
 WSO2, Inc.
 lean . enterprise . middleware

 On Tue, Aug 25, 2015 at 3:42 PM, Malaka Silva mal...@wso2.com wrote:

 Hi Senduran,

 Can this be due to [1]?

 @Keerthika/Kesavan - I guess we are using payloadfactory in connector
 templates in both cases?

 [1] https://wso2.org/jira/browse/ESBJAVA-3995

 On Tue, Aug 25, 2015 at 3:36 PM, Keerthika Mahendralingam 
 keerth...@wso2.com wrote:

 Escape characters appended in the payload are causing the error in other
 connectors as well when using the ESB RC1 pack.
 Please find the log for the Marketo connector:

 [2015-08-25 15:29:03,433] DEBUG - wire  POST
 /services/createAndUpdateLeads HTTP/1.1[\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Host: esb.wso2.com:8280
 [\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Connection:
 keep-alive[\r][\n]

 [2015-08-25 15:29:03,434] DEBUG - wire  Content-Length: 441[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Cache-Control:
 no-cache[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Origin:
 chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  User-Agent: Mozilla/5.0
 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko)
 Chrome/44.0.2403.155 Safari/537.36[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Content-Type:
 application/json[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept: */*[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept-Encoding: gzip,
 deflate[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Accept-Language:
 en-US,en;q=0.8[\r][\n]

 [2015-08-25 15:29:03,435] DEBUG - wire  Cookie:
 __ar_v4=LOY73IEDGREW5ABOLPMGGI%3A20141012%3A10%7C3HZ3H74ZSZHCLIQQVDL7FW%3A20141012%3A10%7CCZDVOXTUDJEN7DJK45GUJ6%3A20141012%3A10;
 __utmx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:;
 __utmxx=62040490.Ai96-XwTT32uGt454LGacQ$80443214-37:1432283552:15552000;
 __utma=62040490.99968054.1406221425.1425200186.1433095074.3;
 __utmz=62040490.1433095074.3.3.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided);
 CAKEPHP=0ueo79km0ue216pp32326bibh0;
 BIGipServersjgcp-restapi_https=4262002698.47873.;
 SESS73db9c87576ccc08af28bf2520013ef7=63232bff43562fb56e9d80ca59e15bef;
 csrftoken=6a5d43410cf85c55adc8c5d99bfa62b5;
 __auc=8de3c8c814769541720b3574cf2; _ga=GA1.2.99968054.1406221425[\r][\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  [\r][\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  {[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  marketoInstanceURL:
 https://667-LPB-079.mktorest.com,[\n];

 [2015-08-25 15:29:03,436] DEBUG - wire  
 clientSecret:Vgo6rzIiJyPOzTcefP3Zr56tK2hv8fJd,[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire  
 clientId:01f22a42-0f05-4e7f-b675-550b6de66f91,[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire [0x9]input:[[\n]

 [2015-08-25 15:29:03,436] DEBUG - wire [0x9]{[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  email:
 te...@gmail.com,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 firstName:test1,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 lastName:test1[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  },[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  {[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  email:
 te...@gmail.com,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 firstName:test2,[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  
 lastName:test1[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  }[\n]

 [2015-08-25 15:29:03,437] DEBUG - wire  ][\n]

 [2015-08-25 15:29:03,438] DEBUG - wire  }

 [2015-08-25 15:29:03,450] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.action. Returning empty result. Error invalid
 path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.lookupField. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.asyncProcessing. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:03,452] ERROR - SynapseJsonPath #stringValueOf. Error
 evaluating JSON Path $.partitionName. Returning empty result. Error
 invalid path

 [2015-08-25 15:29:06,066] DEBUG - wire  GET
 /identity/oauth/token?grant_type=client_credentialsclient_id=01f22a42-0f05-4e7f-b675-550b6de66f91client_secret=Vgo6rzIiJyPOzTcefP3Zr56tK2hv8fJd
 HTTP/1.1[\r][\n]

 [2015-08-25 15:29:06,066] DEBUG - wire  Accept-Language:
 en-US,en;q=0.8[\r][\n]

 [2015-08-25 15:29:06,066] DEBUG - wire  Cookie:
 

Re: [Dev] [Connectors] Storing connector endpoint in registry

2015-08-25 Thread Bhathiya Jayasekara
Thanks Malaka. I will do that.

On Wed, Aug 26, 2015 at 1:27 AM, Malaka Silva mal...@wso2.com wrote:

 Hi Bhathiya,

 You can store the complete init config in registry and refer that in your
 salesforce methods.

 Eg:

 <proxy xmlns="http://ws.apache.org/ns/synapse" name="sfdc"
        transports="https,http" statistics="disable" trace="disable"
        startOnLoad="true">
    <target>
       <inSequence>
          <salesforce.query configKey="conf:/repository/esb/sfdcconf">
             <batchSize>200</batchSize>
             <queryString>select id,name from Account</queryString>
          </salesforce.query>
          <respond/>
       </inSequence>
    </target>
    <description/>
 </proxy>

 */_system/config/repository/esb/sfdcconf*
 <salesforce.init xmlns="http://ws.apache.org/ns/synapse">
    <username>xx...@gmail.com</username>
    <password>xxx</password>
    <loginUrl>https://login.salesforce.com/services/Soap/u/34.0</loginUrl>
 </salesforce.init>

 On Tue, Aug 25, 2015 at 11:27 AM, Bhathiya Jayasekara bhath...@wso2.com
 wrote:

 Hi Dushan/Malaka,

 So, is it not possible to achieve the $subject? Any workaround at least?

 Thanks,
 Bhathiya

 On Mon, Aug 24, 2015 at 4:34 PM, Bhathiya Jayasekara bhath...@wso2.com
 wrote:

 Hi Dushan,

 The initial service URL[1] can be different from account to account.
 Hence the requirement.

 Thanks,
 Bhathiya

 On Mon, Aug 24, 2015 at 4:26 PM, Dushan Abeyruwan dus...@wso2.com
 wrote:

 Hi,
 Why do you need it like that?
 The SF connector we have uses the initial service URL at init [1], and upon
 successful authentication all the other operations (other than
 salesforce.init) use the dynamic URL returned by the authentication request
 (extracted from the response payload) [2].

 init

 <property expression="//ns:loginResponse/ns:result/ns:serverUrl/text()"
           name="salesforce.serviceUrl" scope="operation" type="STRING"
           xmlns:ns="urn:partner.soap.sforce.com"/>

 <property name="salesforce.login.done" scope="operation"
           type="STRING" value="true"/>


 [1] https://login.salesforce.com/services/Soap/u/27.0
 [2]
 https://github.com/wso2-dev/esb-connectors/blob/master/salesforce/1.0.0/src/main/resources/salesforce/init.xml

 Cheers,
 Dushan

 On Mon, Aug 24, 2015 at 4:12 PM, Bhathiya Jayasekara bhath...@wso2.com
  wrote:

 Hi all,

 Is there a way to store Salesforce endpoint in registry, and refer to
 that in init block of connector?

 Thanks,
 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax
 https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




 --
 Dushan Abeyruwan | Technical Lead
 Technical Support-Bloomington US
 PMC Member Apache Synpase
 WSO2 Inc. http://wso2.com/
 Blog:*http://www.dushantech.com/ http://www.dushantech.com/*
 Mobile:(001)812-391-7441




 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




 --

 Best Regards,

 Malaka Silva
 Senior Tech Lead
 M: +94 777 219 791
 Tel : 94 11 214 5345
 Fax :94 11 2145300
 Skype : malaka.sampath.silva
 LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
 Blog : http://mrmalakasilva.blogspot.com/

 WSO2, Inc.
 lean . enterprise . middleware
 http://www.wso2.com/
 http://www.wso2.com/about/team/malaka-silva/
 http://wso2.com/about/team/malaka-silva/

 Save a tree - Conserve nature & Save the world for your future. Print this
 email only if it is absolutely necessary.




-- 
*Bhathiya Jayasekara*
*Senior Software Engineer,*
*WSO2 inc., http://wso2.com http://wso2.com*

*Phone: +94715478185*
*LinkedIn: http://www.linkedin.com/in/bhathiyaj
http://www.linkedin.com/in/bhathiyaj*
*Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
*Blog: http://movingaheadblog.blogspot.com
http://movingaheadblog.blogspot.com/*
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Error when deserializing model summary

2015-08-25 Thread Nirmal Fernando
Madawa, hope the issue is resolved now after the instructions given offline.

On Mon, Aug 24, 2015 at 8:34 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
wrote:

 I have tested the model.save() with a simple Java program. It works fine.

 I have noticed that scala-library:2.11.6 is a dependency of
 spark:spark-core_2.11:1.4.1 [1]
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar
 In ML scala version is explicitly specified as 2.10.4. Is there a specific
 reason to use scala 2.10.4? I guess this version incompatibility could be
 the reason for this issue.

 [1] -
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar

 On 24 August 2015 at 10:24, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, seems this is a Spark issue :-( can you try a simple Java program
 and see whether model.save() works?

 On Sat, Aug 22, 2015 at 8:19 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi Nirmal,

 outPath is correct and the directory gets created, but the process
 becomes idle after that. Attached the only part that was written to a file.

 Also the method doesn't throw an exception as well.

 On 21 August 2015 at 21:31, Nirmal Fernando nir...@wso2.com wrote:

 Hi Madawa,

 According to Spark API [1], outPath shouldn't be exist.

 [1]
 https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala#L200

 On Fri, Aug 21, 2015 at 1:59 PM, Niranda Perera nira...@wso2.com
 wrote:

 I don't think it's correct. Scala version is 2.10.4 even in the mvn
 repo

 On Fri, Aug 21, 2015, 13:46 Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between Scala and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Yes path is valid, I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi All,

 There an issue with serializing Spark's MatrixFactorizationModel
 object. The object contains a huge RDD and as I have read in many 
 blogs,
 this model cannot be serialized as a java object. Therefore when 
 retrieving
 the model I get the same exception as above;

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*

 I have asked this question in Spark mailing lists and they
 recommended me to use the built in save and load functions other than 
 using
 Java serializing.  So I have used following method to persist the 
 model,

 model.save(MLCoreServiceValueHolder.*getInstance()*.getSparkContext().sc(),
 outPath);[1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 Then nothing happens when this line executes. No error is thrown
 as well. Any solution for this?


 Can you print outPath and see whether it's a valid file path?



 [1] -
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 On 16 August 2015 at 18:06, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Yes I was able to resolve the issue by removing RDD fields from
 the SummaryModel object as @Mano pointed out. Still I have the same
 exception when retrieving the model. Trying to fix that issue.

 On 14 August 2015 at 10:43, Nirmal Fernando nir...@wso2.com
 wrote:

 Thanks Niranda, this doc is useful.

 On Fri, Aug 14, 2015 at 10:36 AM, Niranda Perera 
 nira...@wso2.com wrote:

 From what I know, OneToOneDependancy come into play when spark
 tries to create the RDD dependency tree.

 Just thought of sharing that. this would be a good resource [1]
 :-)

 [1]
 https://databricks-training.s3.amazonaws.com/slides/advanced-spark-training.pdf

 On Thu, Aug 13, 2015 at 12:09 AM, Nirmal Fernando 
 nir...@wso2.com wrote:

 What is *org.apache.spark.OneToOneDependency ? Is it
 something you use?*

 On Wed, Aug 12, 2015 at 11:30 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi,

 I created a model summary in order to show the model data in
 the analysis.jag page.
 But when refreshing the page after building the model I get
 the following error.

 org.wso2.carbon.ml.core.exceptions.MLAnalysisHandlerException:
  An error has occurred while extracting all the models of 
 analysis id: 13
 at
 org.wso2.carbon.ml.core.impl.MLAnalysisHandler.getAllModelsOfAnalysis(MLAnalysisHandler.java:245)
 at
 org.wso2.carbon.ml.rest.api.AnalysisApiV10.getAllModelsOfAnalysis(AnalysisApiV10.java:517)
 Caused by:
 

Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Ashen Weerathunga
@Nirmal, okay, I'll arrange it today.

@Mahesan,

Thanks for the suggestion. Yes, 100 must be too high for some cases. I
thought that within 100 iterations it would most probably converge to stable
clusters; that's why I put 100. And yes, for cases like k = 100 it might not
be enough. Thanks, and I'll try with different numbers of iterations as well.
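
(For reference, a minimal sketch of the percentile-based check described
below in the quoted discussion could look like the following. The per-cluster
98th percentile distances are assumed to have been computed beforehand from
the training data; the class and variable names are illustrative only.)

import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class AnomalyCheck {

    // percentileDistances[i] is assumed to hold the pre-computed 98th percentile
    // of the distances of the training points assigned to cluster i.
    public static boolean isAnomaly(KMeansModel model, Vector point,
                                    double[] percentileDistances) {
        int cluster = model.predict(point);                          // closest cluster centre
        Vector centre = model.clusterCenters()[cluster];
        double distance = Math.sqrt(Vectors.sqdist(point, centre));  // Euclidean distance
        // Outside the 98th percentile of its own cluster => treated as an anomaly/fraud.
        return distance > percentileDistances[cluster];
    }
}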

On Wed, Aug 26, 2015 at 9:31 AM, Nirmal Fernando nir...@wso2.com wrote:

 @Ashen let's have a code review today, if it's possible.

 @Srinath Forgot to mention that I've already given some feedback to Ashen,
 on how he could use Spark transformations effectively in his code.

 On Tue, Aug 25, 2015 at 4:33 PM, Ashen Weerathunga as...@wso2.com wrote:

 Okay sure.

 On Tue, Aug 25, 2015 at 3:55 PM, Nirmal Fernando nir...@wso2.com wrote:

 Sure. @Ashen, can you please arrange one?

 On Tue, Aug 25, 2015 at 2:35 PM, Srinath Perera srin...@wso2.com
 wrote:

 Nirmal, Seshika, shall we do a code review? This code should go into ML
 after UI part is done.

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 2:20 PM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 This is the source code of the project.
 https://github.com/ashensw/Spark-KMeans-fraud-detection

 Best Regards,
 Ashen

 On Tue, Aug 25, 2015 at 2:00 PM, Ashen Weerathunga as...@wso2.com
 wrote:

 Thanks all for the suggestions,

 There are few assumptions I have made,

- Clusters are uniform
- Fraud data always will be outliers to the normal clusters
- Clusters are not intersect with each other
- I have given the number of Iterations as 100. So I assume that
100 iterations will be enough to make almost stable clusters

 @Maheshakya,

 In this dataset consist of higher amount of anomaly data than the
 normal data. But the practical scenario will be otherwise. Because of 
 that
 It will be more unrealistic if I use those 65% of anomaly data to 
 evaluate
 the model. The amount of normal data I used to build the model is also 
 less
 than those 65% of anomaly data. Yes since our purpose is to detect
 anomalies It would be good to try with more anomaly data to evaluate the
 model.Thanks and I'll try to use them also.

 Best Regards,

 Ashen

 On Tue, Aug 25, 2015 at 12:35 PM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Is there any particular reason why you are putting aside 65% of
 anomalous data at the evaluation? Since there is an obvious imbalance 
 when
 the numbers of normal and abnormal cases are taken into account, you 
 will
 get greater accuracy at the evaluation because a model tends to produce
 more accurate results for the class with the greater size. But it's not 
 the
 case for the class of smaller size. With less number of records, it wont
 make much impact on the accuracy. Hence IMO, it would be better if you
 could evaluate with more anomalous data.
 i.e. number of records of each class needs to be roughly equal.

 Best regards

 On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com
  wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this
 process (uniform clusters etc). It will make the process more clear 
 IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga 
 as...@wso2.com wrote:

 Hi all,

 I am currently working on fraud detection project. I was able to
 cluster the KDD cup 99 network anomaly detection dataset using 
 apache spark
 k means algorithm. So far I was able to achieve 99% accuracy rate 
 from this
 dataset.The steps I have followed during the process are mentioned 
 below.

- Separate the dataset into two parts (normal data and
anomaly data) by filtering the label
- Splits each two parts of data as follows
   - normal data
   - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper
  parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - no use
  - 15% - to optimize the model by adjusting hyper
  parameters
  - 20% - to evaluate the model
   - Prepossess the dataset
   - Drop out non numerical features since k means can only
   handle numerical values
   - Normalize all the values to 1-0 range
   - Cluster the 65% of normal data using Apache spark K
means and build the model (15% of both normal and anomaly data 
 were used to
tune the hyper parameters such as k, percentile etc. to get an 
 optimized
model)
- Finally evaluate the model using 20% of both normal and
anomaly data.

 Method of identifying a fraud as follows,

- When a new data point comes, get the closest cluster center
by using k means predict function.
- I have calculate 98th percentile distance for each cluster.
(98 was the best value I got by tuning the model with different 
 values)
- Then I checked whether the distance of new data point with
the given 

Re: [Dev] [ML] Error when deserializing model summary

2015-08-25 Thread Madawa Soysa
Gives following output.

parquet.hadoop.metadata; version=0.0.0parquet_common_1.6.0_1.0.0 [321]
parquet.hadoop.metadata; version=0.0.0parquet_hadoop_1.6.0_1.0.0 [323]

On 26 August 2015 at 08:35, Nirmal Fernando nir...@wso2.com wrote:

 You added them to repository/components/lib right? Can you start the
 server in OSGi mode (./wso2server.sh -DosgiConsole ) and issue following
 command in OSGi console;

 osgi p parquet.hadoop.metadata

 On Wed, Aug 26, 2015 at 8:32 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi Nirmal,

 Not actually. I have added parquet-common, parquet-encoding,
 parquet-column, parquet-hadoop as mentioned at
 https://github.com/Parquet/parquet-mr
 . But still get the following exception even though the
 class parquet.hadoop.metadata.CompressionCodecName is there in the
 parquet-hadoop.jar

 java.lang.ClassNotFoundException:
 parquet.hadoop.metadata.CompressionCodecName cannot be found by
 spark-sql_2.10_1.4.1.wso2v1

 I kept trying with different versions, still the issue is there.

 On 26 August 2015 at 08:27, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, hope the issue is resolved now after the instructions given
 offline.

 On Mon, Aug 24, 2015 at 8:34 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 I have tested the model.save() with a simple Java program. It works
 fine.

 I have noticed that scala-library:2.11.6 is a dependency of
 spark:spark-core_2.11:1.4.1 [1]
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar
 In ML scala version is explicitly specified as 2.10.4. Is there a specific
 reason to use scala 2.10.4? I guess this version incompatibility could be
 the reason for this issue.

 [1] -
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar

 On 24 August 2015 at 10:24, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, seems this is a Spark issue :-( can you try a simple Java
 program and see whether model.save() works?

 On Sat, Aug 22, 2015 at 8:19 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
  wrote:

 Hi Nirmal,

 outPath is correct and the directory gets created, but the process
 becomes idle after that. Attached the only part that was written to a 
 file.

 Also the method doesn't throw an exception as well.

 On 21 August 2015 at 21:31, Nirmal Fernando nir...@wso2.com wrote:

 Hi Madawa,

 According to Spark API [1], outPath shouldn't be exist.

 [1]
 https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala#L200

 On Fri, Aug 21, 2015 at 1:59 PM, Niranda Perera nira...@wso2.com
 wrote:

 I don't think it's correct. Scala version is 2.10.4 even in the mvn
 repo

 On Fri, Aug 21, 2015, 13:46 Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between Scala 
 and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Yes path is valid, I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com
 wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi All,

 There an issue with serializing Spark's
 MatrixFactorizationModel object. The object contains a huge RDD 
 and as I
 have read in many blogs, this model cannot be serialized as a java 
 object.
 Therefore when retrieving the model I get the same exception as 
 above;

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*

 I have asked this question in Spark mailing lists and they
 recommended me to use the built in save and load functions other 
 than using
 Java serializing.  So I have used following method to persist the 
 model,

 model.save(MLCoreServiceValueHolder.*getInstance()*.getSparkContext().sc(),
 outPath);[1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 Then nothing happens when this line executes. No error is
 thrown as well. Any solution for this?


 Can you print outPath and see whether it's a valid file path?



 [1] -
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 On 16 August 2015 at 18:06, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Yes I was able to resolve the issue by removing RDD fields
 from the SummaryModel object as @Mano pointed out. Still I have 
 the same
 exception when retrieving the model. Trying to fix that issue.

 

Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Nirmal Fernando
@Ashen let's have a code review today, if it's possible.

@Srinath Forgot to mention that I've already given some feedback to Ashen,
on how he could use Spark transformations effectively in his code.

On Tue, Aug 25, 2015 at 4:33 PM, Ashen Weerathunga as...@wso2.com wrote:

 Okay sure.

 On Tue, Aug 25, 2015 at 3:55 PM, Nirmal Fernando nir...@wso2.com wrote:

 Sure. @Ashen, can you please arrange one?

 On Tue, Aug 25, 2015 at 2:35 PM, Srinath Perera srin...@wso2.com wrote:

 Nirmal, Seshika, shall we do a code review? This code should go into ML
 after UI part is done.

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 2:20 PM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 This is the source code of the project.
 https://github.com/ashensw/Spark-KMeans-fraud-detection

 Best Regards,
 Ashen

 On Tue, Aug 25, 2015 at 2:00 PM, Ashen Weerathunga as...@wso2.com
 wrote:

 Thanks all for the suggestions,

 There are few assumptions I have made,

- Clusters are uniform
- Fraud data always will be outliers to the normal clusters
- Clusters are not intersect with each other
- I have given the number of Iterations as 100. So I assume that
100 iterations will be enough to make almost stable clusters

 @Maheshakya,

 In this dataset consist of higher amount of anomaly data than the
 normal data. But the practical scenario will be otherwise. Because of that
 It will be more unrealistic if I use those 65% of anomaly data to evaluate
 the model. The amount of normal data I used to build the model is also 
 less
 than those 65% of anomaly data. Yes since our purpose is to detect
 anomalies It would be good to try with more anomaly data to evaluate the
 model.Thanks and I'll try to use them also.

 Best Regards,

 Ashen

 On Tue, Aug 25, 2015 at 12:35 PM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Is there any particular reason why you are putting aside 65% of
 anomalous data at the evaluation? Since there is an obvious imbalance 
 when
 the numbers of normal and abnormal cases are taken into account, you will
 get greater accuracy at the evaluation because a model tends to produce
 more accurate results for the class with the greater size. But it's not 
 the
 case for the class of smaller size. With less number of records, it wont
 make much impact on the accuracy. Hence IMO, it would be better if you
 could evaluate with more anomalous data.
 i.e. number of records of each class needs to be roughly equal.

 Best regards

 On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this
 process (uniform clusters etc). It will make the process more clear IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
  wrote:

 Hi all,

 I am currently working on fraud detection project. I was able to
 cluster the KDD cup 99 network anomaly detection dataset using apache 
 spark
 k means algorithm. So far I was able to achieve 99% accuracy rate 
 from this
 dataset.The steps I have followed during the process are mentioned 
 below.

- Separate the dataset into two parts (normal data and anomaly
data) by filtering the label
- Splits each two parts of data as follows
   - normal data
   - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper
  parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - no use
  - 15% - to optimize the model by adjusting hyper
  parameters
  - 20% - to evaluate the model
   - Prepossess the dataset
   - Drop out non numerical features since k means can only
   handle numerical values
   - Normalize all the values to 1-0 range
   - Cluster the 65% of normal data using Apache spark K means
and build the model (15% of both normal and anomaly data were used 
 to tune
the hyper parameters such as k, percentile etc. to get an 
 optimized model)
- Finally evaluate the model using 20% of both normal and
anomaly data.

 Method of identifying a fraud as follows,

- When a new data point comes, get the closest cluster center
by using k means predict function.
- I have calculate 98th percentile distance for each cluster.
(98 was the best value I got by tuning the model with different 
 values)
- Then I checked whether the distance of new data point with
the given cluster center is less than or grater than the 98th 
 percentile of
that cluster. If it is less than the percentile it is considered 
 as a
normal data. If it is grater than the percentile it is considered 
 as a
fraud since it is in outside the cluster.

 Our next step is to integrate this feature to ML product and try
 out it with more realistic dataset. A summery of results I have 
 obtained
 using 98th 

[Dev] [ES] UX team feedback on following JIRAs

2015-08-25 Thread Udara Rathnayake
Hi UX team,

Please provide your feedback on the following [1][2].

I'm +1 for the suggestion provided for STORE-1014 [2].
Shall we stick to the lighter green color for success messages? Or is the
other green variation the platform standard?

@Chankami, I'm unable to find the place where we use red variations for
error/validation messages. Is it for form validation in create asset, or
somewhere different?

[1] https://wso2.org/jira/browse/STORE-1012
[2] https://wso2.org/jira/browse/STORE-1014

Regards,
UdaraR
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Error when deserializing model summary

2015-08-25 Thread Madawa Soysa
Both bundles are required. I tried removing one at a time and checked.

On 26 August 2015 at 08:43, Nirmal Fernando nir...@wso2.com wrote:

 So it's coming from 2 bundles? Can we get rid of 1 and see?

 On Wed, Aug 26, 2015 at 8:40 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Gives following output.

 parquet.hadoop.metadata; version=0.0.0parquet_common_1.6.0_1.0.0 [321]
 parquet.hadoop.metadata; version=0.0.0parquet_hadoop_1.6.0_1.0.0 [323]

 On 26 August 2015 at 08:35, Nirmal Fernando nir...@wso2.com wrote:

 You added them to repository/components/lib right? Can you start the
 server in OSGi mode (./wso2server.sh -DosgiConsole ) and issue following
 command in OSGi console;

 osgi p parquet.hadoop.metadata

 On Wed, Aug 26, 2015 at 8:32 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi Nirmal,

 Not actually. I have added parquet-common, parquet-encoding,
 parquet-column, parquet-hadoop as mentioned at
 https://github.com/Parquet/parquet-mr
 . But still get the following exception even though the
 class parquet.hadoop.metadata.CompressionCodecName is there in the
 parquet-hadoop.jar

 java.lang.ClassNotFoundException:
 parquet.hadoop.metadata.CompressionCodecName cannot be found by
 spark-sql_2.10_1.4.1.wso2v1

 I kept trying with different versions, still the issue is there.

 On 26 August 2015 at 08:27, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, hope the issue is resolved now after the instructions given
 offline.

 On Mon, Aug 24, 2015 at 8:34 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
  wrote:

 I have tested the model.save() with a simple Java program. It works
 fine.

 I have noticed that scala-library:2.11.6 is a dependency of
 spark:spark-core_2.11:1.4.1 [1]
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar
 In ML scala version is explicitly specified as 2.10.4. Is there a 
 specific
 reason to use scala 2.10.4? I guess this version incompatibility could be
 the reason for this issue.

 [1] -
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar

 On 24 August 2015 at 10:24, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, seems this is a Spark issue :-( can you try a simple Java
 program and see whether model.save() works?

 On Sat, Aug 22, 2015 at 8:19 AM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi Nirmal,

 outPath is correct and the directory gets created, but the process
 becomes idle after that. Attached the only part that was written to a 
 file.

 Also the method doesn't throw an exception as well.

 On 21 August 2015 at 21:31, Nirmal Fernando nir...@wso2.com
 wrote:

 Hi Madawa,

 According to the Spark API [1], outPath shouldn't already exist.

 [1]
 https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala#L200

 On Fri, Aug 21, 2015 at 1:59 PM, Niranda Perera nira...@wso2.com
 wrote:

 I don't think it's correct. Scala version is 2.10.4 even in the
 mvn repo

 On Fri, Aug 21, 2015, 13:46 Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between 
 Scala and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Yes path is valid, I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com
 wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi All,

 There is an issue with serializing Spark's
 MatrixFactorizationModel object. The object contains a huge RDD and, as I
 have read in many blogs, this model cannot be serialized as a Java object.
 Therefore, when retrieving the model I get the same exception as above;

 Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965

 I have asked this question on the Spark mailing lists and they
 recommended using the built-in save and load functions rather than
 Java serialization. So I have used the following method to persist the model,

 model.save(MLCoreServiceValueHolder.getInstance().getSparkContext().sc(),
 outPath); [1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 Then nothing happens when this line executes. No error is
 thrown as well. Any solution for this?
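
For reference, a minimal standalone sketch of the built-in save/load path, assuming the
Spark 1.4.x MLlib Java API; the ratings, output path and class name below are made up
for the example (note that the output path must not already exist):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.recommendation.ALS;
import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;
import org.apache.spark.mllib.recommendation.Rating;

public class ModelPersistenceSketch {
    public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext(
                new SparkConf().setAppName("mf-save-sketch").setMaster("local[2]"));

        // Tiny in-memory ratings, just enough to train a model for the example.
        JavaRDD<Rating> ratings = jsc.parallelize(Arrays.asList(
                new Rating(1, 1, 5.0), new Rating(1, 2, 1.0),
                new Rating(2, 1, 4.0), new Rating(2, 2, 2.0)));

        MatrixFactorizationModel model = ALS.train(ratings.rdd(), 2, 5, 0.01);

        // Built-in persistence writes Parquet data plus metadata under outPath.
        String outPath = "/tmp/mf-model";
        model.save(jsc.sc(), outPath);

        // Reload with the matching built-in loader instead of Java serialization.
        MatrixFactorizationModel loaded = MatrixFactorizationModel.load(jsc.sc(), outPath);
        System.out.println("rank of reloaded model = " + loaded.rank());

        jsc.stop();
    }
}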


 Can you print outPath and see whether it's a valid file path?



 [1] -
 

Re: [Dev] [ML] Error when deserializing model summary

2015-08-25 Thread Nirmal Fernando
Shall we try adding an all-in-one parquet jar? Can you check whether
there's one?

http://mvnrepository.com/artifact/com.twitter/parquet/1.6.0

On Wed, Aug 26, 2015 at 8:55 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
wrote:

 Both bundles are required. I tried removing one at a time and checked.

 On 26 August 2015 at 08:43, Nirmal Fernando nir...@wso2.com wrote:

 So it's coming from 2 bundles? Can we get rid of 1 and see?

 On Wed, Aug 26, 2015 at 8:40 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Gives following output.

 parquet.hadoop.metadata; version=0.0.0parquet_common_1.6.0_1.0.0
 [321]
 parquet.hadoop.metadata; version=0.0.0parquet_hadoop_1.6.0_1.0.0
 [323]

 On 26 August 2015 at 08:35, Nirmal Fernando nir...@wso2.com wrote:

 You added them to repository/components/lib right? Can you start the
 server in OSGi mode (./wso2server.sh -DosgiConsole ) and issue following
 command in OSGi console;

 osgi p parquet.hadoop.metadata

 On Wed, Aug 26, 2015 at 8:32 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi Nirmal,

 Not actually. I have added parquet-common, parquet-encoding,
 parquet-column, parquet-hadoop as mentioned at
 https://github.com/Parquet/parquet-mr
 . But still get the following exception even though the
 class parquet.hadoop.metadata.CompressionCodecName is there in the
 parquet-hadoop.jar

 java.lang.ClassNotFoundException:
 parquet.hadoop.metadata.CompressionCodecName cannot be found by
 spark-sql_2.10_1.4.1.wso2v1

 I kept trying with different versions, still the issue is there.

 On 26 August 2015 at 08:27, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, hope the issue is resolved now after the instructions given
 offline.

 On Mon, Aug 24, 2015 at 8:34 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 I have tested the model.save() with a simple Java program. It works
 fine.

 I have noticed that scala-library:2.11.6 is a dependency of
 spark:spark-core_2.11:1.4.1 [1]
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar
 In ML, the Scala version is explicitly specified as 2.10.4. Is there a specific
 reason to use Scala 2.10.4? I guess this version incompatibility could be
 the reason for this issue.

 [1] -
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar

 On 24 August 2015 at 10:24, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, seems this is a Spark issue :-( can you try a simple Java
 program and see whether model.save() works?

 On Sat, Aug 22, 2015 at 8:19 AM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi Nirmal,

 outPath is correct and the directory gets created, but the process
 becomes idle after that. Attached the only part that was written to a 
 file.

 Also the method doesn't throw an exception as well.

 On 21 August 2015 at 21:31, Nirmal Fernando nir...@wso2.com
 wrote:

 Hi Madawa,

 According to the Spark API [1], outPath shouldn't already exist.

 [1]
 https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala#L200

 On Fri, Aug 21, 2015 at 1:59 PM, Niranda Perera nira...@wso2.com
  wrote:

 I don't think it's correct. Scala version is 2.10.4 even in the
 mvn repo

 On Fri, Aug 21, 2015, 13:46 Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between 
 Scala and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Yes path is valid, I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com
 wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi All,

 There is an issue with serializing Spark's
 MatrixFactorizationModel object. The object contains a huge RDD and, as I
 have read in many blogs, this model cannot be serialized as a Java object.
 Therefore, when retrieving the model I get the same exception as above;

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*

 I have asked this question on the Spark mailing lists and they
 recommended using the built-in save and load functions rather than
 Java serialization. So I have used the following method to persist the model,

 model.save(MLCoreServiceValueHolder.*getInstance()*.getSparkContext().sc(),
 outPath);[1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 Then nothing 

Re: [Dev] Install ES features into IoTServer

2015-08-25 Thread Sumedha Rubasinghe
Ayyoob,
It should be bringing ES capabilities into the CDMF layer, not into IoT Server.

Are you sure it's AM 1.4.0 features? That's very old. At some point it
needs to be updated to the latest version. Since there is an EMM release going on, I
think the only other option is to work on a fork for IoT Server and then
merge back when the release is over.



On Wed, Aug 26, 2015 at 10:57 AM, Ayyoob Hamza ayy...@wso2.com wrote:

 Hi,
 As per the discussion on moving the web app layer to CDMF, we tried to install
 the features, but ES requires carbon-commons version 4.4.4. However, CDMF is
 on carbon-commons version 4.4.0, which is needed for api.mgt 1.4.0. So is
 it okay if we fork and continue the development with the latest versions,
 so that it won't be a bottleneck for the EMM release?

 Thanks
 *Ayyoob Hamza*
 *Software Engineer*
 WSO2 Inc.; http://wso2.com
 email: ayy...@wso2.com cell: +94 77 1681010




-- 
/sumedha
m: +94 773017743
b :  bit.ly/sumedha
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Error when deserializing model summary

2015-08-25 Thread Nirmal Fernando
You added them to repository/components/lib, right? Can you start the server
in OSGi mode (./wso2server.sh -DosgiConsole) and issue the following command
in the OSGi console;

osgi> p parquet.hadoop.metadata

On Wed, Aug 26, 2015 at 8:32 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
wrote:

 Hi Nirmal,

 Not actually. I have added parquet-common, parquet-encoding,
 parquet-column, parquet-hadoop as mentioned at
 https://github.com/Parquet/parquet-mr
 . But still get the following exception even though the
 class parquet.hadoop.metadata.CompressionCodecName is there in the
 parquet-hadoop.jar

 java.lang.ClassNotFoundException:
 parquet.hadoop.metadata.CompressionCodecName cannot be found by
 spark-sql_2.10_1.4.1.wso2v1

 I kept trying with different versions, still the issue is there.

 On 26 August 2015 at 08:27, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, hope the issue is resolved now after the instructions given
 offline.

 On Mon, Aug 24, 2015 at 8:34 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 I have tested the model.save() with a simple Java program. It works
 fine.

 I have noticed that scala-library:2.11.6 is a dependency of
 spark:spark-core_2.11:1.4.1 [1]
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar
 In ML, the Scala version is explicitly specified as 2.10.4. Is there a specific
 reason to use Scala 2.10.4? I guess this version incompatibility could be
 the reason for this issue.

 [1] -
 http://search.maven.org/#artifactdetails%7Corg.apache.spark%7Cspark-core_2.11%7C1.4.1%7Cjar

 On 24 August 2015 at 10:24, Nirmal Fernando nir...@wso2.com wrote:

 Madawa, seems this is a Spark issue :-( can you try a simple Java
 program and see whether model.save() works?

 On Sat, Aug 22, 2015 at 8:19 AM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi Nirmal,

 outPath is correct and the directory gets created, but the process
 becomes idle after that. Attached the only part that was written to a 
 file.

 Also the method doesn't throw an exception as well.

 On 21 August 2015 at 21:31, Nirmal Fernando nir...@wso2.com wrote:

 Hi Madawa,

 According to the Spark API [1], outPath shouldn't already exist.

 [1]
 https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/MatrixFactorizationModel.scala#L200

 On Fri, Aug 21, 2015 at 1:59 PM, Niranda Perera nira...@wso2.com
 wrote:

 I don't think it's correct. Scala version is 2.10.4 even in the mvn
 repo

 On Fri, Aug 21, 2015, 13:46 Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between Scala 
 and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Yes path is valid, I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com
 wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi All,

 There is an issue with serializing Spark's MatrixFactorizationModel
 object. The object contains a huge RDD and, as I have read in many blogs,
 this model cannot be serialized as a Java object. Therefore, when retrieving
 the model I get the same exception as above;

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*

 I have asked this question on the Spark mailing lists and they
 recommended using the built-in save and load functions rather than
 Java serialization. So I have used the following method to persist the model,

 model.save(MLCoreServiceValueHolder.*getInstance()*.getSparkContext().sc(),
 outPath);[1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 Then nothing happens when this line executes. No error is thrown
 as well. Any solution for this?


 Can you print outPath and see whether it's a valid file path?



 [1] -
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 On 16 August 2015 at 18:06, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Yes I was able to resolve the issue by removing RDD fields from
 the SummaryModel object as @Mano pointed out. Still I have the same
 exception when retrieving the model. Trying to fix that issue.

 On 14 August 2015 at 10:43, Nirmal Fernando nir...@wso2.com
 wrote:

 Thanks Niranda, this doc is useful.

 On Fri, Aug 14, 2015 at 10:36 AM, Niranda Perera 
 nira...@wso2.com wrote:

 From what I know, OneToOneDependancy come into play when
 spark 

Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Ravindra Ranwala
Hi All,

I have tested scenarios [1]. No issues found.

[X] Stable - go ahead and release


[1]
https://docs.google.com/document/d/1UpXN3SJvXOzp5UH6eF0lkO_vXTnvko7ruY5VMs9bDDI/edit



Thanks & Regards,

On Wed, Aug 26, 2015 at 7:29 AM, Prabath Ariyarathna prabat...@wso2.com
wrote:

 HI All.

 I have tested basic security scenarios on Java 7 and Java 8. No issues
 found.
 [X] Stable - go ahead and release.

 Thanks.

 On Tue, Aug 25, 2015 at 9:20 PM, Viraj Senevirathne vir...@wso2.com
 wrote:

 Hi all,

 I have tested VFS inbound and transport use cases for file, ftp and sftp
 protocols. No issues found.
 [X] Stable - go ahead and release

 Thanks.

 On Tue, Aug 25, 2015 at 8:28 PM, Nadeeshaan Gunasinghe 
 nadeesh...@wso2.com wrote:

 Hi,

 I tested JMS use cases and MSMP fail over use cases. No issues found.

 [X] Stable - go ahead and release

 Regards.


 *Nadeeshaan Gunasinghe*
 Software Engineer, WSO2 Inc. http://wso2.com
 +94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe
 http://www.facebook.com/nadeeshaan.gunasinghe
 http://lk.linkedin.com/in/nadeeshaan  http://twitter.com/Nadeeshaan
 http://nadeeshaan.blogspot.com/

 On Tue, Aug 25, 2015 at 6:01 PM, Jagath Sisirakumara Ariyarathne 
 jaga...@wso2.com wrote:

 Hi,

 I executed performance tests for basic scenarios with this pack. No
 issues observed.

 [X] Stable - go ahead and release

 Thanks.

 On Mon, Aug 24, 2015 at 10:27 PM, Chanaka Fernando chana...@wso2.com
 wrote:

 Hi Devs,

 WSO2 ESB 4.9.0 RC1 Release Vote

 This release fixes the following issues:
 https://wso2.org/jira/browse/ESBJAVA-4093?filter=12363

 Please download ESB 4.9.0 RC1 and test the functionality and vote. The vote
 will be open for 72 hours or as needed.

 *Source & binary distribution files:*

 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/org/wso2/esb/wso2esb/4.9.0-RC1/

 *Maven staging repository:*
 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/

 *The tag to be voted upon :*
 https://github.com/wso2/product-esb/tree/esb-parent-4.9.0-RC1


 [ ] Broken - do not release (explain why)
 [ ] Stable - go ahead and release


 Thanks and Regards,
 ~ WSO2 ESB Team ~

 --
 --
 Chanaka Fernando
 Senior Technical Lead
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 mobile: +94 773337238
 Blog : http://soatutorials.blogspot.com
 LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
 Twitter:https://twitter.com/chanakaudaya
 Wordpress:http://chanakaudaya.wordpress.com




 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Jagath Ariyarathne
 Technical Lead
 WSO2 Inc.  http://wso2.com/
 Email: jaga...@wso2.com
 Mob  : +94 77 386 7048


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev



 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Viraj Senevirathne
 Software Engineer; WSO2, Inc.

 Mobile : +94 71 818 4742
 Email : vir...@wso2.com

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --

 *Prabath Ariyarathna*

 *Associate Technical Lead*

 *WSO2, Inc. *

 *lean . enterprise . middleware *


 *Email: prabat...@wso2.com prabat...@wso2.com*

 *Blog: http://prabu-lk.blogspot.com http://prabu-lk.blogspot.com*

 *Flicker : https://www.flickr.com/photos/47759189@N08
 https://www.flickr.com/photos/47759189@N08*

 *Mobile: +94 77 699 4730 *






 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
Ravindra Ranwala
Software Engineer
WSO2, Inc: http://wso2.com
Mobile: +94714198770
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Nirmal Fernando
[Looping Sinthuja and Lasantha]

On Wed, Aug 26, 2015 at 8:49 AM, Nirmal Fernando nir...@wso2.com wrote:

 We have a small issue when installing ML features into ESB. The issue log is
 at [1]. The reason is the following;

 215 INSTALLED   org.wso2.carbon.databridge.commons.thrift_4.4.4


 osgi> diag 215

 reference:file:../plugins/org.wso2.carbon.databridge.commons.thrift_4.4.4.jar
 [215]

   Direct constraints which are unresolved:

 Missing imported package org.slf4j_[1.6.1,1.7.0)


 This is because org.slf4j version 1.7.x gets exported after installing the Spark features.


 @Suho/Anjana; do we really need to import org.slf4j from this bundle?

 [1]

 [2015-08-26 03:08:00,002] ERROR - Axis2ServiceRegistry Error while adding
 services from bundle : org.wso2.carbon.event.sink.config-4.4.5.SNAPSHOT

 java.lang.NoClassDefFoundError:
 org/wso2/carbon/databridge/agent/thrift/lb/LoadBalancingDataPublisher

 at java.lang.Class.getDeclaredMethods0(Native Method)

 at java.lang.Class.privateGetDeclaredMethods(Class.java:2625)

 at java.lang.Class.privateGetPublicMethods(Class.java:2743)

 at java.lang.Class.getMethods(Class.java:1480)

 at java.beans.Introspector.getPublicDeclaredMethods(Introspector.java:1280)

 at java.beans.Introspector.getTargetMethodInfo(Introspector.java:1141)

 at java.beans.Introspector.getBeanInfo(Introspector.java:416)

 at java.beans.Introspector.getBeanInfo(Introspector.java:252)

 at java.beans.Introspector.getBeanInfo(Introspector.java:214)

 at
 org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchema(DefaultSchemaGenerator.java:634)

 at
 org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchemaTypeforNameCommon(DefaultSchemaGenerator.java:1142)

 at
 org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchemaForType(DefaultSchemaGenerator.java:1132)

 at
 org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.processMethods(DefaultSchemaGenerator.java:431)

 at
 org.apache.axis2.description.java2wsdl.DefaultSchemaGenerator.generateSchema(DefaultSchemaGenerator.java:274)

 at org.apache.axis2.deployment.util.Utils.fillAxisService(Utils.java:468)

 at
 org.apache.axis2.deployment.ServiceBuilder.populateService(ServiceBuilder.java:397)

 at
 org.apache.axis2.deployment.ServiceGroupBuilder.populateServiceGroup(ServiceGroupBuilder.java:101)

 at
 org.wso2.carbon.utils.deployment.Axis2ServiceRegistry.addServices(Axis2ServiceRegistry.java:221)

 at
 org.wso2.carbon.utils.deployment.Axis2ServiceRegistry.register(Axis2ServiceRegistry.java:102)

 at
 org.wso2.carbon.utils.deployment.Axis2ServiceRegistry.register(Axis2ServiceRegistry.java:89)

 at
 org.wso2.carbon.core.init.CarbonServerManager.initializeCarbon(CarbonServerManager.java:473)

 at
 org.wso2.carbon.core.init.CarbonServerManager.start(CarbonServerManager.java:219)

 at
 org.wso2.carbon.core.internal.CarbonCoreServiceComponent.activate(CarbonCoreServiceComponent.java:91)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

 at java.lang.reflect.Method.invoke(Method.java:606)

 at
 org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)

 at
 org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)

 at
 org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)

 at
 org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)

 at
 org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)

 at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)

 at
 org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)

 at
 org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)

 at
 org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)

 at
 org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)

 at
 org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)

 at
 org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)

 at
 org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)

 at
 org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)

 at
 org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)

 at
 org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)

 at
 org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)

 at
 

Re: [Dev] Can't upload carbon app in cluster mode

2015-08-25 Thread Bhathiya Jayasekara
Hi John,

Some time back I setup a similar cluster with ESB 4.8.1. I just tried to
install a car file, and it worked fine. I'm attaching my nginx config file
here.

Thanks,
Bhathiya

On Tue, Aug 25, 2015 at 12:23 PM, John Hawkins jo...@wso2.com wrote:

 Hi Folks,
 having problems with using an LB infront of two ESB's.

 I'm using nginx to LB two ESBs

 One of the ESBs is both a worker and a manager. Both nodes are on my
 machine - as is everything else (SVN, nginx etc )

 I have set up mgt.esb.wso2.com in nginx and I can see and log in to the
 carbon console OK (admin/admin)
 I have also set up esb.wso2.com in nginx and if I try to see that carbon
 console (https://esb.wso2.com) I can see it but it fails to log me in as
 admin/admin (I'm assuming that is OK too)


 My problem comes when I try to upload a new carbon application  using
 https://mgt.esb.wso2.com/carbon/carbonapps/app_upload.jsp?region=region1item=apps_add_menu

 When I try to upload a car file it fails with 404 not found -
 https://mgt.esb.wso2.com/fileupload/carbonapp

 I can upload the .car file if I go through https://127.0.0.1:9450 - which
 is the port my management console is running on, i.e. avoiding the LB.

 Any ideas what's going on here ?

 many thanks,
 John.

 John Hawkins
 Director: Solutions Architecture


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
*Bhathiya Jayasekara*
*Senior Software Engineer,*
*WSO2 inc., http://wso2.com http://wso2.com*

*Phone: +94715478185*
*LinkedIn: http://www.linkedin.com/in/bhathiyaj
http://www.linkedin.com/in/bhathiyaj*
*Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
*Blog: http://movingaheadblog.blogspot.com
http://movingaheadblog.blogspot.com/*


(Attachment: nginx configuration file "default", binary data)
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] XMLStreamException cannot be resolved. 'event' component UI.

2015-08-25 Thread Hemika Kodikara
Note: After adding the package to Import-Package as in the earlier mail, I
get the following output:

osgi p javax.xml.stream
javax.xml.stream; version=1.0.1<org.eclipse.osgi_3.9.1.v20130814-1242 [0]>
  abdera_1.0.0.wso2v3 [4] imports
  andes_3.0.0.SNAPSHOT [6] imports
  annogen_0.1.0.wso2v1 [7] imports
  axiom_1.2.11.wso2v6 [12] imports
  axis2_1.6.1.wso2v14 [13] imports
  axis2-json_1.6.1.wso2v14 [14] imports
  compass_2.0.1.wso2v2 [38] imports
  hector-core_1.1.4.wso2v1 [44] imports
  jaxb_2.2.5.wso2v1 [52] imports
  neethi_2.0.4.wso2v4 [61] imports
  org.eclipse.core.expressions_3.4.500.v20130515-1343 [70] imports
  org.eclipse.core.runtime_3.9.0.v20130326-1255 [72] imports
  org.hectorclient.hector_2.0.0.0 [118] imports
  org.wso2.carbon.andes.core_3.0.0.SNAPSHOT [130] imports
  org.wso2.carbon.andes.mgt.stub_3.0.0.SNAPSHOT [131] imports
  org.wso2.carbon.andes.stub_3.0.0.SNAPSHOT [132] imports
  org.wso2.carbon.andes.ui_3.0.0.SNAPSHOT [133] imports
  org.wso2.carbon.application.deployer_4.4.1 [134] imports
  org.wso2.carbon.authenticator.stub_4.4.1 [137] imports
  org.wso2.carbon.base_4.4.1 [138] imports
  org.wso2.carbon.core_4.4.1 [142] imports
  org.wso2.carbon.core.commons.stub_4.4.1 [145] imports
  org.wso2.carbon.event.client_4.5.0.SNAPSHOT [151] imports
  org.wso2.carbon.event.client.stub_4.5.0.SNAPSHOT [152] imports
  org.wso2.carbon.event.core_4.5.0.SNAPSHOT [154] imports
  org.wso2.carbon.event.stub_4.5.0.SNAPSHOT [155] imports
*  org.wso2.carbon.event.ui_4.5.0.SNAPSHOT [156] imports*
  org.wso2.carbon.feature.mgt.stub_4.4.1 [160] imports
  org.wso2.carbon.identity.authenticator.saml2.sso_4.4.0 [163] imports
  org.wso2.carbon.identity.authenticator.saml2.sso.common_4.4.0 [164]
imports
  org.wso2.carbon.identity.authenticator.saml2.sso.stub_4.4.0 [165] imports
  org.wso2.carbon.identity.sso.saml.stub_4.4.0 [167] imports
  org.wso2.carbon.identity.sts.store_4.4.0 [168] imports
  org.wso2.carbon.identity.user.store.configuration.deployer_4.5.0.SNAPSHOT
[170] imports
  org.wso2.carbon.identity.user.store.configuration.stub_4.5.0.SNAPSHOT
[171] imports
  org.wso2.carbon.java2wsdl.ui_4.5.0.SNAPSHOT [174] imports
  org.wso2.carbon.logging.admin.stub_4.5.0.SNAPSHOT [176] imports
  org.wso2.carbon.logging.service_4.5.0.SNAPSHOT [179] imports
  org.wso2.carbon.logging.view.stub_4.5.0.SNAPSHOT [180] imports
  org.wso2.carbon.messageflows.stub_4.5.0.SNAPSHOT [183] imports
  org.wso2.carbon.metrics.data.service.stub_1.1.0.SNAPSHOT [189] imports
  org.wso2.carbon.p2.touchpoint_4.4.1 [198] imports
  org.wso2.carbon.qpid.stub_4.5.0.SNAPSHOT [199] imports
  org.wso2.carbon.registry.common_4.4.1 [203] imports
  org.wso2.carbon.registry.common.ui_4.4.1 [204] imports
  org.wso2.carbon.registry.core_4.4.1 [205] imports
  org.wso2.carbon.registry.properties.stub_4.4.1 [207] imports
  org.wso2.carbon.registry.resource.stub_4.4.1 [210] imports
  org.wso2.carbon.registry.search.stub_4.4.1 [213] imports
  org.wso2.carbon.security.mgt_4.4.0 [221] imports
  org.wso2.carbon.server.admin_4.4.1 [224] imports
  org.wso2.carbon.server.admin.stub_4.4.1 [226] imports
  org.wso2.carbon.statistics.stub_4.5.0.SNAPSHOT [229] imports
  org.wso2.carbon.tenant.common_4.4.0 [231] imports
  org.wso2.carbon.tenant.common.stub_4.4.0 [232] imports
  org.wso2.carbon.tenant.mgt.stub_4.4.0 [238] imports
  org.wso2.carbon.tenant.redirector.servlet.stub_4.4.0 [241] imports
  org.wso2.carbon.tenant.throttling.agent_4.4.0 [245] imports
  org.wso2.carbon.tenant.usage.agent_4.4.0 [246] imports
  org.wso2.carbon.throttling.agent.stub_4.4.0 [247] imports
  org.wso2.carbon.tracer.stub_4.5.0.SNAPSHOT [253] imports
  org.wso2.carbon.tryit_4.5.0.SNAPSHOT [255] imports
  org.wso2.carbon.ui_4.4.1 [257] imports
  org.wso2.carbon.user.core_4.4.1 [264] imports
  org.wso2.carbon.user.mgt.stub_4.5.0.SNAPSHOT [267] imports
  org.wso2.carbon.utils_4.4.1 [269] imports
  org.wso2.carbon.wsdl2code.stub_4.5.0.SNAPSHOT [271] imports
  org.wso2.carbon.wsdl2code.ui_4.5.0.SNAPSHOT [272] imports
  rampart-core_1.6.1.wso2v14 [281] imports
  rampart-policy_1.6.1.wso2v14 [282] imports
  rampart-trust_1.6.1.wso2v14 [283] imports
  spring.framework_3.2.9.wso2v1 [289] imports
  tomcat_7.0.59.wso2v3 [291] imports
  xmlbeans_2.3.0.wso2v1 [300] imports

Still I am getting the error.

Regards,
Hemika

Hemika Kodikara
Software Engineer
WSO2 Inc.
lean . enterprise . middleware
http://wso2.com

Mobile : +9477762

On Tue, Aug 25, 2015 at 1:43 PM, Hemika Kodikara hem...@wso2.com wrote:

 Hi All,

 I am facing the following error when trying to publish a message to a
 topic in MB using the UI[1]. The topic implementation is taken from event
 component in carbon-commons.

 Following is the error that is coming :

 SEVERE: Servlet.service() for servlet [bridgeservlet] in context with path
 [/] threw exception [Unable to compile class for JSP:

 An error occurred at line: 18 in the jsp file:
 /topics/try_it_out_invoke_ajaxprocessor.jsp
 The type javax.xml.stream.XMLStreamException 

[Dev] XMLStreamException cannot be resolved. 'event' component UI.

2015-08-25 Thread Hemika Kodikara
Hi All,

I am facing the following error when trying to publish a message to a topic
in MB using the UI[1]. The topic implementation is taken from event
component in carbon-commons.

Following is the error that is coming :

SEVERE: Servlet.service() for servlet [bridgeservlet] in context with path
[/] threw exception [Unable to compile class for JSP:

An error occurred at line: 18 in the jsp file:
/topics/try_it_out_invoke_ajaxprocessor.jsp
The type javax.xml.stream.XMLStreamException cannot be resolved. It is
indirectly referenced from required .class files
15: String messageToBePrinted = null;
16: StAXOMBuilder builder = null;
17: try {
18: builder = new StAXOMBuilder(new
ByteArrayInputStream(textMsg.getBytes()));
19: message = builder.getDocumentElement();
20: if (message != null) {
21: brokerClient.publish(topic, message);

According to the code, new StAXOMBuilder() can throw a
javax.xml.stream.XMLStreamException.
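
For reference, a self-contained sketch of that call path (assuming Axiom 1.2.x; this is
not the component's actual code) showing why javax.xml.stream has to be resolvable for
the class to compile:

import java.io.ByteArrayInputStream;

import javax.xml.stream.XMLStreamException;

import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;

public class StaxBuilderSketch {
    public static void main(String[] args) {
        String textMsg = "<message>hello</message>";
        try {
            // The StAXOMBuilder constructor is declared to throw XMLStreamException,
            // so the javax.xml.stream package must be importable by the bundle.
            StAXOMBuilder builder =
                    new StAXOMBuilder(new ByteArrayInputStream(textMsg.getBytes()));
            OMElement message = builder.getDocumentElement();
            System.out.println(message);
        } catch (XMLStreamException e) {
            e.printStackTrace();
        }
    }
}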

I ran "p javax.xml.stream" on the OSGi console and noticed that the
relevant UI module [2] did not import the javax.xml.stream package.
Therefore I added the package to the Import-Package section of the pom as below:

<instructions>
    <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
    <Bundle-Name>${project.artifactId}</Bundle-Name>
    <Carbon-Component>UIBundle</Carbon-Component>
    <Import-Package>
        org.wso2.carbon.event.stub.*;version=${carbon.commons.imp.package.version},
        org.wso2.carbon.event.api.*,
        org.wso2.carbon.event.client.*,
        org.wso2.carbon.utils,
        javax.xml.stream,
        org.apache.axis2.*; version=${axis2.osgi.version.range},
        org.apache.axiom.*; version=${axiom.osgi.version.range},
        *;resolution:=optional
    </Import-Package>
    <Export-Package>
        org.wso2.carbon.event.ui.*
    </Export-Package>
    <DynamicImport-Package>*</DynamicImport-Package>
</instructions>

But still getting the same error. Is there anything else missing ?

[1] -
https://github.com/wso2/carbon-commons/blob/master/components/event/org.wso2.carbon.event.ui/src/main/resources/web/topics/try_it_out_invoke_ajaxprocessor.jsp
[2] -
https://github.com/wso2/carbon-commons/blob/master/components/event/org.wso2.carbon.event.ui/pom.xml

Regards,
Hemika

Hemika Kodikara
Software Engineer
WSO2 Inc.
lean . enterprise . middleware
http://wso2.com

Mobile : +9477762
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] error creating ESB Proxy Service based on G-REG wsdl which is created from data service (DSS)

2015-08-25 Thread Akila Nimantha [IT/EKO/LOITS]
Hi All,

I am working on integrating the WSO2 Governance Registry with WSO2 DSS and WSO2
ESB. What I did was configure a central governance registry for both DSS and ESB,
with the ws-discovery option switched on. Then I created a data service in DSS. When
I try to create an ESB proxy service based on the previously created data service
wsdl in the governance registry, it gives me the following error (please check the
attached file for the log),

(inline screenshot of the error; see the attached log below)

But when I download the same service wsdl file from DSS and then save it in the
governance registry, I can create the proxy service without any problem.

Is there an issue integrating DSS with G-Reg? Do you have any solution?


Regards,
Akila Rathnayake

This message (including any attachments) is intended only for
the use of the individual or entity to which it is addressed and
may contain information that is non-public, proprietary,
privileged, confidential, and exempt from disclosure under
applicable law or may constitute as attorney work product.
If you are not the intended recipient, you are hereby notified
that any use, dissemination, distribution, or copying of this
communication is strictly prohibited. If you have received this
communication in error, notify us immediately by telephone and
(i) destroy this message if a facsimile or (ii) delete this message
immediately if this is an electronic communication.

Thank you.
[2015-08-25 13:55:06,968] ERROR -  Error trying to add the proxy service to the 
ESB configuration : GovTest13 :: null 
{org.wso2.carbon.proxyadmin.ui.client.ProxyServiceAdminClient}
org.wso2.carbon.proxyadmin.stub.ProxyServiceAdminProxyAdminException: Error 
trying to add the proxy service to the ESB configuration : GovTest13 :: null
at 
org.wso2.carbon.proxyadmin.ui.client.ProxyServiceAdminClient.addProxy(ProxyServiceAdminClient.java:105)
at 
org.apache.jsp.proxyservices.template_005fpass_002dthrough_jsp._jspService(org.apache.jsp.proxyservices.template_005fpass_002dthrough_jsp:226)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at 
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:403)
at 
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:492)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:378)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:155)
at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at 
org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
at 
org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
at 
org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
at 
org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at 
org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:748)
at 
org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:604)
at 
org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:543)
at 
org.eclipse.equinox.http.servlet.internal.RequestDispatcherAdaptor.include(RequestDispatcherAdaptor.java:37)
at 
org.eclipse.equinox.http.helper.ContextPathServletAdaptor$RequestDispatcherAdaptor.include(ContextPathServletAdaptor.java:369)
at 
org.apache.jasper.runtime.JspRuntimeLibrary.include(JspRuntimeLibrary.java:1015)
at 
org.apache.jasper.runtime.PageContextImpl.include(PageContextImpl.java:700)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.tiles.jsp.context.JspUtil.doInclude(JspUtil.java:87)
at 
org.apache.tiles.jsp.context.JspTilesRequestContext.include(JspTilesRequestContext.java:88)
at 
org.apache.tiles.jsp.context.JspTilesRequestContext.dispatch(JspTilesRequestContext.java:82)
at 
org.apache.tiles.impl.BasicTilesContainer.render(BasicTilesContainer.java:465)
at 
org.apache.tiles.jsp.taglib.InsertAttributeTag.render(InsertAttributeTag.java:140)
at 

Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Ashen Weerathunga
Thanks all for the suggestions,

There are a few assumptions I have made,

   - Clusters are uniform
   - Fraud data will always be outliers to the normal clusters
   - Clusters do not intersect with each other
   - I have given the number of iterations as 100, so I assume that 100
   iterations will be enough to make almost stable clusters

@Maheshakya,

This dataset consists of a higher amount of anomaly data than normal
data, but the practical scenario will be the other way around. Because of that, it would
be more unrealistic if I used that 65% of anomaly data to evaluate the
model. The amount of normal data I used to build the model is also less
than that 65% of anomaly data. Yes, since our purpose is to detect
anomalies, it would be good to try with more anomaly data to evaluate the
model. Thanks, I'll try to use them as well.

Best Regards,

Ashen

On Tue, Aug 25, 2015 at 12:35 PM, Maheshakya Wijewardena 
mahesha...@wso2.com wrote:

 Is there any particular reason why you are putting aside 65% of anomalous
 data at the evaluation? Since there is an obvious imbalance when the
 numbers of normal and abnormal cases are taken into account, you will get
 greater accuracy at the evaluation because a model tends to produce more
 accurate results for the class with the greater size. But it's not the case
 for the class of smaller size. With less number of records, it wont make
 much impact on the accuracy. Hence IMO, it would be better if you could
 evaluate with more anomalous data.
 i.e. number of records of each class needs to be roughly equal.

 Best regards

 On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this
 process (uniform clusters etc). It will make the process more clear IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 I am currently working on fraud detection project. I was able to
 cluster the KDD cup 99 network anomaly detection dataset using apache spark
 k means algorithm. So far I was able to achieve 99% accuracy rate from this
 dataset.The steps I have followed during the process are mentioned below.

- Separate the dataset into two parts (normal data and anomaly
data) by filtering on the label
- Split each of the two parts of data as follows
   - normal data
  - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - not used
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
- Preprocess the dataset
   - Drop non-numerical features since k-means can only handle
   numerical values
   - Normalize all the values to the 0-1 range
- Cluster the 65% of normal data using Apache Spark k-means and
build the model (15% of both normal and anomaly data were used to tune the
hyper parameters such as k, percentile etc. to get an optimized model)
- Finally evaluate the model using 20% of both normal and anomaly
data.

The method of identifying a fraud is as follows,

- When a new data point comes in, get the closest cluster center by
using the k-means predict function.
- I have calculated the 98th percentile distance for each cluster. (98
was the best value I got by tuning the model with different values)
- Then I checked whether the distance of the new data point from the
given cluster center is less than or greater than the 98th percentile of
that cluster. If it is less than the percentile, it is considered
normal data. If it is greater than the percentile, it is considered a
fraud, since it is outside the cluster.

 Our next step is to integrate this feature into the ML product and try it out
 with a more realistic dataset. A summary of results I have obtained using the
 98th percentile during the process is attached with this.


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 Email: as...@wso2.com
 Mobile: +94 716042995 94716042995
 LinkedIn:
 *http://lk.linkedin.com/in/ashenweerathunga
 http://lk.linkedin.com/in/ashenweerathunga*




 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 *CD Athuraliya*
 Software Engineer
 WSO2, Inc.
 lean . enterprise . middleware
 Mobile: +94 716288847 94716288847
 LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
 https://twitter.com/cdathuraliya | Blog
 http://cdathuraliya.tumblr.com/




 --
 Pruthuvi Maheshakya 

[Dev] Append escape character in json payload -ESB RC1 pack

2015-08-25 Thread Kesavan Yogarajah
Hi All,
In the JIRA connector I need to send a JSON object instead of atomic
entities. When I send it, the ESB RC1 pack behaves differently (appends escape
characters). Is this the expected behavior, or am I missing anything? Please find
the wire log below.

[2015-08-25 09:44:49,017] DEBUG - wire  POST /services/updateIssue
HTTP/1.1[\r][\n]
[2015-08-25 09:44:49,018] DEBUG - wire  Host:
kesavan-thinkpad-t530:8280[\r][\n]
[2015-08-25 09:44:49,018] DEBUG - wire  Connection: keep-alive[\r][\n]
[2015-08-25 09:44:49,018] DEBUG - wire  Content-Length: 452[\r][\n]
[2015-08-25 09:44:49,018] DEBUG - wire  Cache-Control: no-cache[\r][\n]
[2015-08-25 09:44:49,018] DEBUG - wire  Origin:
chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm[\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  User-Agent: Mozilla/5.0 (X11;
Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135
Safari/537.36[\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  Content-Type:
application/json[\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  Accept: */*[\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  Accept-Encoding: gzip,
deflate[\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  Accept-Language:
en-US,en;q=0.8[\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  Cookie: CFID=55267374;
CFTOKEN=6f91a96bf8f74eda-C7032C97-0E19-597A-2C58EE90A9C69BA4;
c3a52dd11e=4b1a38dcaa2e362ad44da59630c7272d;
2803525351=8b7e7b116412f31cfed6561ef0db5477[\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  [\r][\n]
[2015-08-25 09:44:49,019] DEBUG - wire  {[\n]
[2015-08-25 09:44:49,020] DEBUG - wire  [0x9]username:admin,[\n]
[2015-08-25 09:44:49,020] DEBUG - wire  password:jira@jaffna,
[\n]
[2015-08-25 09:44:49,020] DEBUG - wire  uri:
https://jafnnacon.atlassian.net;,  [\n]
[2015-08-25 09:44:49,020] DEBUG - wire  issueIdOrKey:TEST-6,[\n]
[2015-08-25 09:44:49,020] DEBUG - wire  issueFields:{[\n]
[2015-08-25 09:44:49,020] DEBUG - wire  update: {[\n]
[2015-08-25 09:44:49,020] DEBUG - wire  summary: [[\n]
[2015-08-25 09:44:49,020] DEBUG - wire  {[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  set: Bug in
business logic[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  }[\n]
[2015-08-25 09:44:49,021] DEBUG - wire],[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  labels: [[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  {[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  add:
triaged[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  },[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  {[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  remove:
blocker[\n]
[2015-08-25 09:44:49,021] DEBUG - wire  }[\n]
[2015-08-25 09:44:49,022] DEBUG - wire  ][\n]
[2015-08-25 09:44:49,022] DEBUG - wire  }[\n]
[2015-08-25 09:44:49,022] DEBUG - wire  }[\n]
[2015-08-25 09:44:49,022] DEBUG - wire  }[\n]
[2015-08-25 09:44:49,022] DEBUG - wire  [\n]
[2015-08-25 09:44:49,283]  INFO - TimeoutHandler This engine will expire
all callbacks after : 120 seconds, irrespective of the timeout action,
after the specified or optional timeout
[2015-08-25 09:44:50,363] DEBUG - wire  PUT /rest/api/2/issue/TEST-6
HTTP/1.1[\r][\n]
[2015-08-25 09:44:50,363] DEBUG - wire  Accept-Language:
en-US,en;q=0.8[\r][\n]
[2015-08-25 09:44:50,363] DEBUG - wire  Cookie: CFID=55267374;
CFTOKEN=6f91a96bf8f74eda-C7032C97-0E19-597A-2C58EE90A9C69BA4;
c3a52dd11e=4b1a38dcaa2e362ad44da59630c7272d;
2803525351=8b7e7b116412f31cfed6561ef0db5477[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Authorization: Basic
YWRtaW46amlyYUBqYWZmbmE=[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Origin:
chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Content-Type: application/json;
charset=UTF-8[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Accept: */*[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Cache-Control: no-cache[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Transfer-Encoding:
chunked[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Host: jafnnacon.atlassian.net:443
[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  Connection: Keep-Alive[\r][\n]
[2015-08-25 09:44:50,364] DEBUG - wire  User-Agent:
Synapse-PT-HttpComponents-NIO[\r][\n]
[2015-08-25 09:44:50,365] DEBUG - wire  [\r][\n]
[2015-08-25 09:44:50,365] DEBUG - wire  7c[\r][\n]
[2015-08-25 09:44:50,365] DEBUG - wire 
*{\update\:{\summary\:[{\set\:\Bug
in business
logic\}],\labels\:[{\add\:\triaged\},{\remove\:\blocker\}]}}*
[\r][\n]
[2015-08-25 09:44:50,365] DEBUG - wire  0[\r][\n]
[2015-08-25 09:44:50,365] DEBUG - wire  [\r][\n]
[2015-08-25 09:44:51,880] DEBUG - wire  HTTP/1.1 400 Bad Request[\r][\n]
[2015-08-25 09:44:51,880] DEBUG - wire  Server: nginx[\r][\n]
[2015-08-25 09:44:51,881] DEBUG - wire  Date: Tue, 25 Aug 2015 04:14:50
GMT[\r][\n]
[2015-08-25 09:44:51,881] DEBUG - wire  Content-Type:
application/json;charset=UTF-8[\r][\n]
[2015-08-25 09:44:51,881] DEBUG - wire  Transfer-Encoding:
chunked[\r][\n]

Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Srinath Perera
Nirmal, Seshika, shall we do a code review? This code should go into ML
after the UI part is done.

Thanks
Srinath

On Tue, Aug 25, 2015 at 2:20 PM, Ashen Weerathunga as...@wso2.com wrote:

 Hi all,

 This is the source code of the project.
 https://github.com/ashensw/Spark-KMeans-fraud-detection

 Best Regards,
 Ashen

 On Tue, Aug 25, 2015 at 2:00 PM, Ashen Weerathunga as...@wso2.com wrote:

 Thanks all for the suggestions,

 There are a few assumptions I have made,

- Clusters are uniform
- Fraud data will always be outliers to the normal clusters
- Clusters do not intersect with each other
- I have given the number of iterations as 100, so I assume that 100
iterations will be enough to make almost stable clusters

 @Maheshakya,

 This dataset consists of a higher amount of anomaly data than normal
 data, but the practical scenario will be the other way around. Because of that, it would
 be more unrealistic if I used that 65% of anomaly data to evaluate the
 model. The amount of normal data I used to build the model is also less
 than that 65% of anomaly data. Yes, since our purpose is to detect
 anomalies, it would be good to try with more anomaly data to evaluate the
 model. Thanks, I'll try to use them as well.

 Best Regards,

 Ashen

 On Tue, Aug 25, 2015 at 12:35 PM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Is there any particular reason why you are putting aside 65% of
 anomalous data at the evaluation? Since there is an obvious imbalance when
 the numbers of normal and abnormal cases are taken into account, you will
 get greater accuracy at the evaluation because a model tends to produce
 more accurate results for the class with the greater size. But it's not the
 case for the class of smaller size. With less number of records, it wont
 make much impact on the accuracy. Hence IMO, it would be better if you
 could evaluate with more anomalous data.
 i.e. number of records of each class needs to be roughly equal.

 Best regards

 On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this
 process (uniform clusters etc). It will make the process more clear IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 I am currently working on fraud detection project. I was able to
 cluster the KDD cup 99 network anomaly detection dataset using apache 
 spark
 k means algorithm. So far I was able to achieve 99% accuracy rate from 
 this
 dataset.The steps I have followed during the process are mentioned below.

- Separate the dataset into two parts (normal data and anomaly
data) by filtering the label
- Splits each two parts of data as follows
   - normal data
   - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - no use
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - Prepossess the dataset
   - Drop out non numerical features since k means can only
   handle numerical values
   - Normalize all the values to 1-0 range
   - Cluster the 65% of normal data using Apache spark K means
and build the model (15% of both normal and anomaly data were used to 
 tune
the hyper parameters such as k, percentile etc. to get an optimized 
 model)
- Finally evaluate the model using 20% of both normal and anomaly
data.

 Method of identifying a fraud as follows,

- When a new data point comes, get the closest cluster center by
using k means predict function.
- I have calculate 98th percentile distance for each cluster. (98
was the best value I got by tuning the model with different values)
- Then I checked whether the distance of new data point with the
given cluster center is less than or grater than the 98th percentile 
 of
that cluster. If it is less than the percentile it is considered as a
normal data. If it is grater than the percentile it is considered as a
fraud since it is in outside the cluster.

 Our next step is to integrate this feature to ML product and try out
 it with more realistic dataset. A summery of results I have obtained 
 using
 98th percentile during the process is attached with this.


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 Email: as...@wso2.com
 Mobile: +94 716042995 94716042995
 LinkedIn:
 *http://lk.linkedin.com/in/ashenweerathunga
 http://lk.linkedin.com/in/ashenweerathunga*




 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2 Machine 

[Dev] Setting the carbon-analytics-common repo version to 5.0.0-SNAPSHOT

2015-08-25 Thread Lasantha Fernando
Hi all,

Since carbon-analytics-common has components that have already been
released with versions 4.2.x, 4.3.x, 4.4.x etc. (e.g.
databridge-components), we have decided to release the next version of
the carbon-analytics-common repo as 5.0.0. Otherwise, OSGi package
conflicts might occur.

So for now, we've updated carbon-analytics-common repo version to
5.0.0-SNAPSHOT. Please update the dependent carbon-analytics-common repo
version in your respective component/product repos as well if you are using
any components from this repo.

Thanks,
Lasantha

-- 
*Lasantha Fernando*
Senior Software Engineer - Data Technologies Team
WSO2 Inc. http://wso2.com

email: lasan...@wso2.com
mobile: (+94) 71 5247551
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Setting the carbon-analytics-common repo version to 5.0.0-SNAPSHOT

2015-08-25 Thread Gokul Balakrishnan
Thanks Lasantha. Noted.

On 25 August 2015 at 14:23, Lasantha Fernando lasan...@wso2.com wrote:

 Hi all,

 Since carbon-analytics-common has components that have already been
 released with versions 4.2.x, 4.3.x, 4.4.x etc. (e.g.
 databridge-components), we have decided to release the next version of
 the carbon-analytics-common repo as 5.0.0. Otherwise, OSGi package
 conflicts might occur.

 So for now, we've updated carbon-analytics-common repo version to
 5.0.0-SNAPSHOT. Please update the dependent carbon-analytics-common repo
 version in your respective component/product repos as well if you are using
 any components from this repo.

 Thanks,
 Lasantha

 --
 *Lasantha Fernando*
 Senior Software Engineer - Data Technologies Team
 WSO2 Inc. http://wso2.com

 email: lasan...@wso2.com
 mobile: (+94) 71 5247551




-- 
Gokul Balakrishnan
Senior Software Engineer,
WSO2, Inc. http://wso2.com
Mob: +94 77 593 5789 | +1 650 272 9927
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Moving app-manager branch from product-es repo to carbon-store repo

2015-08-25 Thread Udara Rathnayake
Hi Rushmin,

I have merged the PR. While browsing the app-manager branch I noticed the following:
"This branch is 1270 commits ahead, 3 commits behind master."
Is the "3 commits behind master" part correct? It might be due to the fact
that we have preserved the git history from both the master of carbon-store and
the app-manager branch of product-es.

Anyway, if all features in this branch work for App Manager as expected, we
should not worry about that. Let's remove the app-manager branch from
product-es once everything looks stable.

Regards,
UdaraR

On Tue, Aug 25, 2015 at 3:30 PM, Rushmin Fernando rush...@wso2.com wrote:


 Hi Udara and Manu,

 As per our discussion on $subject, I moved the code and fixed the Maven
 group IDs.

 Please review and merge the pull request [1]

 Thanks
 Rushmin


 [1] - https://github.com/wso2/carbon-store/pull/168


 --
 *Rushmin Fernando*
 *Technical Lead*

 WSO2 Inc. http://wso2.com/ - Lean . Enterprise . Middleware

 email : rush...@wso2.com
 mobile : +94772310855



___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Nirmal Fernando
Sure. @Ashen, can you please arrange one?

On Tue, Aug 25, 2015 at 2:35 PM, Srinath Perera srin...@wso2.com wrote:

 Nirmal, Seshika, shall we do a code review? This code should go into ML
 after UI part is done.

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 2:20 PM, Ashen Weerathunga as...@wso2.com wrote:

 Hi all,

 This is the source code of the project.
 https://github.com/ashensw/Spark-KMeans-fraud-detection

 Best Regards,
 Ashen

 On Tue, Aug 25, 2015 at 2:00 PM, Ashen Weerathunga as...@wso2.com
 wrote:

 Thanks all for the suggestions,

 There are few assumptions I have made,

- Clusters are uniform
- Fraud data always will be outliers to the normal clusters
- Clusters are not intersect with each other
- I have given the number of Iterations as 100. So I assume that 100
iterations will be enough to make almost stable clusters

 @Maheshakya,

 This dataset consists of a higher amount of anomaly data than normal
 data, but the practical scenario will be the other way around. Because of that, it would
 be more unrealistic if I used that 65% of anomaly data to evaluate the
 model. The amount of normal data I used to build the model is also less
 than that 65% of anomaly data. Yes, since our purpose is to detect
 anomalies, it would be good to try with more anomaly data to evaluate the
 model. Thanks, I'll try to use them as well.

 Best Regards,

 Ashen

 On Tue, Aug 25, 2015 at 12:35 PM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Is there any particular reason why you are putting aside 65% of
 anomalous data at the evaluation? Since there is an obvious imbalance when
 the numbers of normal and abnormal cases are taken into account, you will
 get greater accuracy at the evaluation because a model tends to produce
 more accurate results for the class with the greater size. But it's not the
 case for the class of smaller size. With fewer records, it won't
 make much impact on the accuracy. Hence IMO, it would be better if you
 could evaluate with more anomalous data,
 i.e. the number of records in each class needs to be roughly equal.

 Best regards

 On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this
 process (uniform clusters etc). It will make the process more clear IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 I am currently working on a fraud detection project. I was able to
 cluster the KDD cup 99 network anomaly detection dataset using the Apache
 Spark k-means algorithm. So far I was able to achieve a 99% accuracy rate
 from this dataset. The steps I have followed during the process are
 mentioned below.

 - Separate the dataset into two parts (normal data and anomaly data)
 by filtering the label
 - Split each of the two parts as follows
    - normal data
       - 65% - to train the model
       - 15% - to optimize the model by adjusting hyper parameters
       - 20% - to evaluate the model
    - anomaly data
       - 65% - no use
       - 15% - to optimize the model by adjusting hyper parameters
       - 20% - to evaluate the model
 - Preprocess the dataset
    - Drop out non-numerical features since k-means can only handle
    numerical values
    - Normalize all the values to the 0-1 range
 - Cluster the 65% of normal data using Apache Spark k-means and build
 the model (15% of both normal and anomaly data were used to tune the
 hyper parameters such as k, percentile etc. to get an optimized model)
 - Finally evaluate the model using 20% of both normal and anomaly data.

 Method of identifying a fraud is as follows,

 - When a new data point comes, get the closest cluster center by using
 the k-means predict function.
 - I have calculated the 98th percentile distance for each cluster. (98
 was the best value I got by tuning the model with different values)
 - Then I checked whether the distance of the new data point from the
 given cluster center is less than or greater than the 98th percentile of
 that cluster. If it is less than the percentile it is considered normal
 data. If it is greater than the percentile it is considered a fraud
 since it is outside the cluster.

 Our next step is to integrate this feature into the ML product and try
 it out with a more realistic dataset. A summary of the results I have
 obtained using the 98th percentile during the process is attached below.


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 Email: as...@wso2.com
 Mobile: +94 716042995 94716042995
 LinkedIn:
 

Re: [Dev] Java Security Manager needs read permission to h2 db in AS 5.3.0 SNAPSHOT

2015-08-25 Thread Supun Malinga
thanks Isuru. Let me see what I can find.

thanks

On Tue, Aug 25, 2015 at 12:12 PM, Isuru Perera isu...@wso2.com wrote:

 Hi Supun,

 I'm sorry I missed this mail. We need to identify which method is
 accessing the local database. We should never give explicit read
 permissions for the H2 database.

 We need to use Java Privileged Block API in Carbon Context APIs. If you
 cannot figure out the protection domain for the access failure, please
 check Java Security Debug logs. See Troubleshooting section in my Java
 Security Manager related blog post [1].

 With the Privileged Block API, we can let the Carbon Context APIs use the same
 permissions we give to Carbon code.

 Thanks!

 Best Regards,

 [1]
 http://isuru-perera.blogspot.com/2014/12/enabling-java-security-manager-for-wso2.html


 On Thu, Aug 13, 2015 at 3:37 PM, Supun Malinga sup...@wso2.com wrote:

 Hi,

 For accessing usermgt via CarbonContext had to provide following
 permission for webapp.

 permission java.io.FilePermission
 /home/supun/smoke/java_sec/530_custom/wso2as-5.3.0-SNAPSHOT/repository/database/WSO2CARBON_DB.data.db,
 read;

 I tested with AS 5.2.1 and we don't need this in 5.2.1.

 Can anyone tell why this is needed and if it's an issue?

 thanks,
 --
 Supun Malinga,

 Senior Software Engineer,
 WSO2 Inc.
 http://wso2.com
 email: sup...@wso2.com sup...@wso2.com
 mobile: +94 (0)71 56 91 321




 --
 Isuru Perera
 Associate Technical Lead | WSO2, Inc. | http://wso2.com/
 Lean . Enterprise . Middleware

 about.me/chrishantha
 Contact: +IsuruPereraWSO2 https://www.google.com/+IsuruPereraWSO2/about




-- 
Supun Malinga,

Senior Software Engineer,
WSO2 Inc.
http://wso2.com
email: sup...@wso2.com sup...@wso2.com
mobile: +94 (0)71 56 91 321
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Atom/RSS Inbound endpoint] Test Server for Automation test

2015-08-25 Thread Krishantha Samaraweera
Hi Rajaaz,

As we discussed offline, let's try to use registry atom feeds first rather
than implementing our own server or consuming external atom/rss endpoints
(highly discouraged, as those endpoints tend to change in the future - we have
had many bad experiences with consuming external endpoints).

Thanks,
Krishantha.

On Mon, Aug 24, 2015 at 11:49 PM, Kathees Rajendram kath...@wso2.com
wrote:

 Hi Rajjaz,

 We need to automate the server startup as well, if a server startup is
 available. We don't need to write a separate connector method for the
 server startup; we can include it in the integration test.

 We can use Google Blogspot for the feed.
 https://developers.google.com/gdata/docs/auth/oauth

 Thanks,
 Kathees

 On Mon, Aug 24, 2015 at 7:11 PM, Rajjaz Mohammed raj...@wso2.com wrote:

 Hi All,
 I'm developing the Atom connector, and its feed CRUD operations are working
 fine against the test server I implemented. I need to know how to add the
 test server to the connector and automation tests. I have two options:

 1. Write one method as startServer and run the operations after
 startServer.
 2. Write the connector and run the server separately before testing the
 connector.

 Please advise which is best. Also, we currently can't test the Atom/RSS
 connector with a third-party backend because the authentication methods
 differ for each backend, and the inbound endpoint works with Atom and RSS
 and injects them as Atom into the ESB.

 Please comment on this.

 On Tue, Aug 11, 2015 at 10:23 AM, Krishantha Samaraweera 
 krishan...@wso2.com wrote:

 Hi all,

 On Tue, Aug 11, 2015 at 8:59 AM, Malaka Silva mal...@wso2.com wrote:

 Hi Rajjaz,

 If there is a server already provided by automation framework then you
 don't have a blocker for writing the test cases.

 However, if that is not the case, write your test cases assuming that
 there is a running server and comment them out. Once it's available we can
 include them in the ESB test cases.

 @Automation Team - Do we already have support for this?


 No, the automation framework doesn't support this. Why don't you write your
 own way to obtain an atom URL and proceed? It's not practical for us to
 provide all test utilities upfront.

 BTW, you guys can use registry atom URLs -
 https://10.100.1.26:9443/registry/atom/_system/config/

 Thanks,
 Krishantha.





 On Tue, Aug 11, 2015 at 1:19 AM, Rajjaz Mohammed raj...@wso2.com
 wrote:

 Hi Vanji,
 This is the way I'm developing it. If I'm wrong, please correct me.

 On Tue, Aug 11, 2015 at 12:58 AM, Rajjaz Mohammed raj...@wso2.com
 wrote:

 Hi Vanji,
 We are creating our own RSS feed producer, but we need to update it
 in a third-party backend owned by the client. That's the requirement given
 to me, and accordingly I'm developing the connector for third-party backend
 RSS feeds.

 On Mon, Aug 10, 2015 at 11:37 PM, Vanjikumaran Sivajothy 
 va...@wso2.com wrote:

 Rather than depending on 3rd party backend (like wordpress/
 blogspot.. etc), it is better to create own rss feed producer.

 On Mon, Aug 10, 2015 at 9:55 AM, Kathees Rajendram kath...@wso2.com
  wrote:

 Hi Rajjaz,

 To write the automation test for the inbound listener, you can create
 a Java client to produce the feed at the URL. The inbound listener will
 consume the feeds from the URL, and then you can write a couple of test
 cases to test the inbound listener (see the sketch below). Other inbound
 automation tests can be found below.


 https://github.com/wso2/product-esb/tree/master/modules/integration/tests-integration/tests-transport/src/test/java/org/wso2/carbon/esb
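
 A minimal sketch of such a Java feed producer, assuming Apache Abdera is
 available on the classpath and that the test simply writes the feed to a
 location the inbound endpoint polls (the ids and file name below are made
 up for illustration):

     import org.apache.abdera.Abdera;
     import org.apache.abdera.model.Entry;
     import org.apache.abdera.model.Feed;

     import java.io.FileOutputStream;
     import java.util.Date;

     public class TestFeedProducer {
         public static void main(String[] args) throws Exception {
             Abdera abdera = new Abdera();
             Feed feed = abdera.newFeed();
             feed.setId("tag:example.org,2015:test-feed");   // hypothetical id
             feed.setTitle("Inbound endpoint test feed");
             feed.setUpdated(new Date());
             feed.addAuthor("test-client");

             Entry entry = feed.addEntry();
             entry.setId("tag:example.org,2015:entry-1");    // hypothetical id
             entry.setTitle("First test entry");
             entry.setContent("payload for the inbound listener");
             entry.setUpdated(new Date());

             // Write the feed where the polling inbound endpoint reads it
             // (a file here; it could equally be served over HTTP by the test).
             FileOutputStream out = new FileOutputStream("test-feed.xml");
             feed.writeTo(out);
             out.close();
         }
     }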

 Thanks,
 Kathees


 On Mon, Aug 10, 2015 at 6:11 PM, Rajjaz Mohammed raj...@wso2.com
 wrote:

 For the connector part: the connector will create, update and delete
 the feeds at a given URL. The URL may be a Blogger or Wordpress blog or
 something else, so the test needs to be able to do the same.

 On Mon, Aug 10, 2015 at 5:16 PM, Rajjaz Mohammed raj...@wso2.com
 wrote:

 Hi Irham,
 it's a polling inbound endpoint that listens for Atom/RSS feed updates
 at a given URL.

 On Mon, Aug 10, 2015 at 5:01 PM, Irham Iqbal iq...@wso2.com
 wrote:

 Hi Rajjaz,

 Can you elaborate on the scenario for which you're going to write
 integration tests?

 I mean what are the steps you're following to test this manually?

 Thanks,
 Iqbal

 On Mon, Aug 10, 2015 at 3:45 PM, Rajjaz Mohammed 
 raj...@wso2.com wrote:

 *Are there any methods available to do the automation test for my
 Atom/RSS inbound endpoint?

 On Mon, Aug 10, 2015 at 3:03 PM, Rajjaz Mohammed 
 raj...@wso2.com wrote:

 Hi All,
 I'm developing an Atom/RSS inbound endpoint for the ESB, and for the
 automation test I need to test against multiple servers like Blogger and
 Wordpress. Is there any method to do the automation test?

 --
 Thank you
 Best Regards

 *Rajjaz HM*
 Associate Software Engineer
 WSO2 Inc. http://wso2.com/
 lean | enterprise | middleware
 Mobile : +94752833834
 Email  :raj...@wso2.com
 LinkedIn | Blogger | WSO2 Profile
 http://wso2.com/about/team/mohammer_rajjaz/




 --
 Thank you
 Best Regards

 *Rajjaz HM*
 Associate Software Engineer
 WSO2 Inc. 

Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread CD Athuraliya
Hi Ashen,

It would be better if you can add the assumptions you make in this process
(uniform clusters etc). It will make the process more clear IMO.

Regards,
CD

On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 I am currently working on a fraud detection project. I was able to cluster
 the KDD cup 99 network anomaly detection dataset using the Apache Spark k-means
 algorithm. So far I was able to achieve a 99% accuracy rate from this
 dataset. The steps I have followed during the process are mentioned below.

- Separate the dataset into two parts (normal data and anomaly data)
by filtering the label
- Split each of the two parts as follows
   - normal data
   - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - no use
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - Preprocess the dataset
   - Drop out non-numerical features since k-means can only handle
   numerical values
   - Normalize all the values to the 0-1 range
   - Cluster the 65% of normal data using Apache spark K means and
build the model (15% of both normal and anomaly data were used to tune the
hyper parameters such as k, percentile etc. to get an optimized model)
- Finally evaluate the model using 20% of both normal and anomaly
data.

 Method of identifying a fraud as follows,

 - When a new data point comes, get the closest cluster center by
 using the k-means predict function.
 - I have calculated the 98th percentile distance for each cluster. (98 was
 the best value I got by tuning the model with different values)
 - Then I checked whether the distance of the new data point from the
 given cluster center is less than or greater than the 98th percentile of
 that cluster. If it is less than the percentile it is considered as
 normal data. If it is greater than the percentile it is considered as a
 fraud since it is outside the cluster.

 Our next step is to integrate this feature into the ML product and try it out
 with a more realistic dataset. A summary of the results I have obtained using
 the 98th percentile during the process is attached below.


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 Email: as...@wso2.com
 Mobile: +94 716042995 94716042995
 LinkedIn:
 *http://lk.linkedin.com/in/ashenweerathunga
 http://lk.linkedin.com/in/ashenweerathunga*




 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





-- 
*CD Athuraliya*
Software Engineer
WSO2, Inc.
lean . enterprise . middleware
Mobile: +94 716288847 94716288847
LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
https://twitter.com/cdathuraliya | Blog http://cdathuraliya.tumblr.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Intermittent Classloading issue in WebApp

2015-08-25 Thread Supun Malinga
Yeah, good work Madhawa. What was the fix?

On Tue, Aug 25, 2015 at 10:08 AM, Anjana Fernando anj...@wso2.com wrote:

 Hi Madhawa,

 Good work in finding the issue and fixing it! ..

 Cheers,
 Anjana.

 On Tue, Aug 25, 2015 at 8:57 AM, Madhawa Gunasekara madha...@wso2.com
 wrote:

 Hi All,

 I was able to fix this classloading issue by writing another carbon component.
 It seems carbon-data-core depends on some other components.

 Thanks,
 Madhawa

 On Mon, Aug 24, 2015 at 7:58 PM, Madhawa Gunasekara madha...@wso2.com
 wrote:

 Hi All,

 I found some duplicate exports; is there any reason to export the same
 versions from different jars?

 Packages
 org.wso2.carbon.server.admin.common
   Exported By
 org.wso2.carbon.server.admin_4.4.0.jar at the location
 repository/components/plugins/org.wso2.carbon.server.admin_4.4.0.jar
 org.wso2.carbon.server.admin.common_4.4.0.jar at the location
 repository/components/plugins/org.wso2.carbon.server.admin.common_4.4.0.jar

 Packages
 org.wso2.carbon.event.client.stub.generated.addressing
 org.wso2.carbon.event.client.stub.generated.security
 org.wso2.carbon.event.client.stub.generated
 org.wso2.carbon.event.client.stub.generated.authentication
   Exported By
 org.wso2.carbon.event.client_4.4.3.jar at the location
 repository/components/plugins/org.wso2.carbon.event.client_4.4.3.jar
 org.wso2.carbon.event.client.stub_4.4.3.jar at the location
 repository/components/plugins/org.wso2.carbon.event.client.stub_4.4.3.jar

 Packages
 org.wso2.carbon.user.mgt.common
   Exported By
 org.wso2.carbon.user.mgt.common_4.4.3.jar at the location
 repository/components/plugins/org.wso2.carbon.user.mgt.common_4.4.3.jar
 org.wso2.carbon.user.mgt_4.4.3.jar at the location
 repository/components/plugins/org.wso2.carbon.user.mgt_4.4.3.jar


 Thanks,
 Madhawa

 On Fri, Aug 21, 2015 at 5:11 PM, Madhawa Gunasekara madha...@wso2.com
 wrote:

 Hi all,

 I'm seeing an intermittent class-loading issue when accessing a class
 that is bundled in DSS from a web app. I'm using the Carbon environment in my
 web app's class-loading.xml.
 Please find the ClassNotFoundException below.

 [2015-08-21 16:35:30,763] ERROR
 {org.apache.catalina.core.StandardWrapperValve} -  Servlet.service() for
 servlet [ODataServlet] in context with path [/odataservices] threw
 exception [Servlet execution threw an exception] with root cause
 java.lang.ClassNotFoundException:
 org.wso2.carbon.dataservices.core.odata.ODataServiceRegistryImpl
 at
 org.wso2.carbon.webapp.mgt.loader.CarbonWebappClassLoader.loadClass(CarbonWebappClassLoader.java:154)
 at
 org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1571)
 at
 org.wso2.carbon.dataservices.odata.ODataServlet.service(ODataServlet.java:60)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
 at
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
 at
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
 at
 org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
 at
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
 at
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
 at
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
 at
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
 at
 org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:504)
 at
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
 at
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
 at
 org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
 at
 org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
 at
 org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
 at
 org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
 at
 org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
 at
 org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
 at
 org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
 at
 org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
 at
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
 at
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
 at
 org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
 at
 

Re: [Dev] Java Security Manager needs read permission to h2 db in AS 5.3.0 SNAPSHOT

2015-08-25 Thread Isuru Perera
Hi Supun,

I'm sorry I missed this mail. We need to identify which method is accessing
the local database. We should never give explicit read permissions for the
H2 database.

We need to use Java Privileged Block API in Carbon Context APIs. If you
cannot figure out the protection domain for the access failure, please
check Java Security Debug logs. See Troubleshooting section in my Java
Security Manager related blog post [1].

With the Privileged Block API, we can let the Carbon Context APIs use the same
permissions we give to Carbon code.
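
For reference, the privileged-block pattern being suggested looks roughly
like this (a sketch only; the actual Carbon Context method and the operation
performed inside run() are assumptions for illustration):

    import java.security.AccessController;
    import java.security.PrivilegedAction;

    public class PrivilegedLookupExample {

        // Pattern only: in Carbon code the body of run() would be the internal
        // call that currently triggers the FilePermission check.
        static String readCarbonHome() {
            return AccessController.doPrivileged(new PrivilegedAction<String>() {
                public String run() {
                    // Runs with the permissions granted to this class's own
                    // protection domain (i.e. Carbon), not the calling webapp.
                    return System.getProperty("carbon.home");
                }
            });
        }

        public static void main(String[] args) {
            System.out.println(readCarbonHome());
        }
    }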

Thanks!

Best Regards,

[1]
http://isuru-perera.blogspot.com/2014/12/enabling-java-security-manager-for-wso2.html


On Thu, Aug 13, 2015 at 3:37 PM, Supun Malinga sup...@wso2.com wrote:

 Hi,

 For accessing usermgt via CarbonContext had to provide following
 permission for webapp.

 permission java.io.FilePermission
 /home/supun/smoke/java_sec/530_custom/wso2as-5.3.0-SNAPSHOT/repository/database/WSO2CARBON_DB.data.db,
 read;

 I tested with AS 5.2.1 and we don't need this in 5.2.1.

 Can anyone tell why this is needed and if it's an issue?

 thanks,
 --
 Supun Malinga,

 Senior Software Engineer,
 WSO2 Inc.
 http://wso2.com
 email: sup...@wso2.com sup...@wso2.com
 mobile: +94 (0)71 56 91 321




-- 
Isuru Perera
Associate Technical Lead | WSO2, Inc. | http://wso2.com/
Lean . Enterprise . Middleware

about.me/chrishantha
Contact: +IsuruPereraWSO2 https://www.google.com/+IsuruPereraWSO2/about
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Moving app-manager branch from product-es repo to carbon-store repo

2015-08-25 Thread Rushmin Fernando
Hi Udara and Manu,

As per our discussion on $subject, I moved the code and fixed the Maven
group IDs.

Please review and merge the pull request [1]

Thanks
Rushmin


[1] - https://github.com/wso2/carbon-store/pull/168


-- 
*Rushmin Fernando*
*Technical Lead*

WSO2 Inc. http://wso2.com/ - Lean . Enterprise . Middleware

email : rush...@wso2.com
mobile : +94772310855
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] About Edge Analytics Service-Mobile IOT

2015-08-25 Thread Srinath Perera
Suho, is the Siddhi engine thread safe
1. to send events
2. to change queries?

Lakini, the answer depends on the above.

Could you explain how two clients override the same query? Are they running
the same query?

Thanks
Srinath

On Tue, Aug 25, 2015 at 3:24 PM, Lakini Senanayaka lak...@wso2.com wrote:

 Hi Srinath,

 As a small introduction to my project, it is about the Edge Analytics Service,
 which is an Android service that does edge analytics using Siddhi queries.
 There are 2 types of clients for the service. I have developed a Type 1 client
 which sends its own data to analyse, with user-defined streams, a rule and a
 callback. I'm using the AIDL interface method for the client-service connection.

 I have a copy of a working project now. Here I used a single thread to
 develop the service, as you told me in the meeting where we discussed the
 architecture of the Edge Analytics Service. But in my application, when we are
 running 2 or more client apps at the same time, the SiddhiManager in the
 service overrides the rule of the first running client with the rule of the
 second client (here "rule" means the logic), so for the first client it
 gives wrong results. To overcome that we have to use multithreading; I
 discussed this issue with Shan and he told me that this problem can be
 solved by using multithreading. Could you please advise me how to overcome
 this issue, although you told me not to use multithreading? Suggestions
 would be appreciated.

 I have attached documentation about the Edge Analytics Service.


 Thank you.
 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] About Edge Analytics Service-Mobile IOT

2015-08-25 Thread Lakini Senanayaka
 Srinath,thank you for your prompt reply.

My client and service run separately in 2 different processes. I have just
developed 2 simple apps which send random data, and I controlled it using a
timer. I have set 2 different rules for the 2 apps: one client has a rule
like "notify when the value passes 50" and the second client has a rule like
"notify when the value passes 20". When running the two clients at the same
time, the first client gets notifications based on the 20 threshold, although
its rule uses the 50 threshold.

So the SiddhiManager has overridden the 1st client's query with the second
client's query.

On Tue, Aug 25, 2015 at 3:36 PM, Srinath Perera srin...@wso2.com wrote:

 Suho, is the Siddhi engine thread safe
 1. to send events
 2. to change queries?

 Lakini, the answer depends on the above.

 Could you explain how two clients override the same query? Are they running
 the same query?

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 3:24 PM, Lakini Senanayaka lak...@wso2.com
 wrote:

 Hi Srinath,

 As an small introduction to my project,it is about EdgeAnalytics Service
 which is an android service and it does edge analytic using Siddhi queries.
 There are 2 types of clients for the service.I have developed Type 1 client
 which sends it's own data to analyse with user define streams,rule and a
 callback.I'm using AIDL interface method for the Client-Service connection.

 I have a copy of working project now.In here I used a single thread to
 develop service as you told me in the meeting ,where we discussed the
 Architecture of Edge Analytic Service.But in my application when we are
 running 2 or more  client apps in the same time the siddhimanager in the
 service overrides the the 1st rule of the first running client with the
 rule of second client.(in here rule mean the Logic)So for first client it
 gives wrong results.So to overcome from that we have to use multi threading
 and I discussed this issue with Shan and he told me that this problem can
 be solve by using multi threading.So Could you please advice me how to
 overcome this issue,although you told me not to use multithreading.
 Suggestions would be appreciate.

 I have attached a documentation about Edge Analytic s Service.


 Thank you.
 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




 --
 
 Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
 Site: http://people.apache.org/~hemapani/
 Photos: http://www.flickr.com/photos/hemapani/
 Phone: 0772360902




-- 
*Intern-Engineering*
Lakini S.Senanayaka
Mobile: +94 712295444
Email: lak...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Maheshakya Wijewardena
Is there any particular reason why you are putting aside 65% of anomalous
data at the evaluation? Since there is an obvious imbalance when the
numbers of normal and abnormal cases are taken into account, you will get
greater accuracy at the evaluation because a model tends to produce more
accurate results for the class with the greater size. But it's not the case
for the class of smaller size. With less number of records, it wont make
much impact on the accuracy. Hence IMO, it would be better if you could
evaluate with more anomalous data.
i.e. number of records of each class needs to be roughly equal.

Best regards

On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this process
 (uniform clusters etc). It will make the process more clear IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 I am currently working on a fraud detection project. I was able to cluster
 the KDD cup 99 network anomaly detection dataset using the Apache Spark k-means
 algorithm. So far I was able to achieve a 99% accuracy rate from this
 dataset. The steps I have followed during the process are mentioned below.

- Separate the dataset into two parts (normal data and anomaly data)
by filtering the label
- Split each of the two parts as follows
   - normal data
   - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - no use
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - Preprocess the dataset
   - Drop out non-numerical features since k-means can only handle
   numerical values
   - Normalize all the values to the 0-1 range
   - Cluster the 65% of normal data using Apache spark K means and
build the model (15% of both normal and anomaly data were used to tune 
 the
hyper parameters such as k, percentile etc. to get an optimized model)
- Finally evaluate the model using 20% of both normal and anomaly
data.

 Method of identifying a fraud as follows,

 - When a new data point comes, get the closest cluster center by
 using the k-means predict function.
 - I have calculated the 98th percentile distance for each cluster. (98
 was the best value I got by tuning the model with different values)
 - Then I checked whether the distance of the new data point from the
 given cluster center is less than or greater than the 98th percentile of
 that cluster. If it is less than the percentile it is considered as
 normal data. If it is greater than the percentile it is considered as a
 fraud since it is outside the cluster.

 Our next step is to integrate this feature into the ML product and try it out
 with a more realistic dataset. A summary of the results I have obtained using
 the 98th percentile during the process is attached below.


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 Email: as...@wso2.com
 Mobile: +94 716042995 94716042995
 LinkedIn:
 *http://lk.linkedin.com/in/ashenweerathunga
 http://lk.linkedin.com/in/ashenweerathunga*




 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 *CD Athuraliya*
 Software Engineer
 WSO2, Inc.
 lean . enterprise . middleware
 Mobile: +94 716288847 94716288847
 LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
 https://twitter.com/cdathuraliya | Blog
 http://cdathuraliya.tumblr.com/




-- 
Pruthuvi Maheshakya Wijewardena
Software Engineer
WSO2 : http://wso2.com/
Email: mahesha...@wso2.com
Mobile: +94711228855
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Restructure tuned samples

2015-08-25 Thread Manorama Perera
Updated the samples.

Thanks.

On Tue, Aug 25, 2015 at 11:46 AM, Nirmal Fernando nir...@wso2.com wrote:

 Thanks Mano! Also, we need to make all scripts (*.sh files) executable (chmod
 +x model-generation.sh and commit). Also, in the README, we need to mention
 that the samples only work in a Unix environment.

 On Tue, Aug 25, 2015 at 11:27 AM, Manorama Perera manor...@wso2.com
 wrote:

 Sure. I'll rename the folders.

 On Tue, Aug 25, 2015 at 11:19 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Mano,

 I feel 'tuned' part is redundant in the folders inside

 https://github.com/wso2/product-ml/tree/master/modules/samples/tuned

 Shall we refactor?

 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 Manorama Perera
 Software Engineer
 WSO2, Inc.;  http://wso2.com/
 Mobile : +94716436216




 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





-- 
Manorama Perera
Software Engineer
WSO2, Inc.;  http://wso2.com/
Mobile : +94716436216
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Restructure tuned samples

2015-08-25 Thread Nirmal Fernando
Thanks.

On Tue, Aug 25, 2015 at 12:26 PM, Manorama Perera manor...@wso2.com wrote:

 Updated the samples.

 Thanks.

 On Tue, Aug 25, 2015 at 11:46 AM, Nirmal Fernando nir...@wso2.com wrote:

 Thanks Mano! Also, we need to make all scripts (*.sh files) executable (chmod
 +x model-generation.sh and commit). Also, in the README, we need to mention
 that the samples only work in a Unix environment.

 On Tue, Aug 25, 2015 at 11:27 AM, Manorama Perera manor...@wso2.com
 wrote:

 Sure. I'll rename the folders.

 On Tue, Aug 25, 2015 at 11:19 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Mano,

 I feel 'tuned' part is redundant in the folders inside

 https://github.com/wso2/product-ml/tree/master/modules/samples/tuned

 Shall we refactor?

 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 Manorama Perera
 Software Engineer
 WSO2, Inc.;  http://wso2.com/
 Mobile : +94716436216




 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 Manorama Perera
 Software Engineer
 WSO2, Inc.;  http://wso2.com/
 Mobile : +94716436216




-- 

Thanks  regards,
Nirmal

Team Lead - WSO2 Machine Learner
Associate Technical Lead - Data Technologies Team, WSO2 Inc.
Mobile: +94715779733
Blog: http://nirmalfdo.blogspot.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] About Edge Analytics Service-Mobile IOT

2015-08-25 Thread Srinath Perera
Please point to your code.

CEP team, please help.

On Tue, Aug 25, 2015 at 4:00 PM, Lakini Senanayaka lak...@wso2.com wrote:

  Srinath,thank you for your prompt reply.

 My client and service run separately in 2 different processes. I have just
 developed 2 simple apps which send random data, and I controlled it using a
 timer. I have set 2 different rules for the 2 apps: one client has a rule
 like "notify when the value passes 50" and the second client has a rule like
 "notify when the value passes 20". When running the two clients at the same
 time, the first client gets notifications based on the 20 threshold, although
 its rule uses the 50 threshold.

 So the SiddhiManager has overridden the 1st client's query with the second
 client's query.

 On Tue, Aug 25, 2015 at 3:36 PM, Srinath Perera srin...@wso2.com wrote:

 Suho, is the Siddhi engine thread safe
 1. to send events
 2. to change queries?

 Lakini, the answer depends on the above.

 Could you explain how two clients override the same query? Are they running
 the same query?

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 3:24 PM, Lakini Senanayaka lak...@wso2.com
 wrote:

 Hi Srinath,

 As a small introduction to my project, it is about the Edge Analytics Service,
 which is an Android service that does edge analytics using Siddhi queries.
 There are 2 types of clients for the service. I have developed a Type 1 client
 which sends its own data to analyse, with user-defined streams, a rule and a
 callback. I'm using the AIDL interface method for the client-service connection.

 I have a copy of a working project now. Here I used a single thread to
 develop the service, as you told me in the meeting where we discussed the
 architecture of the Edge Analytics Service. But in my application, when we are
 running 2 or more client apps at the same time, the SiddhiManager in the
 service overrides the rule of the first running client with the rule of the
 second client (here "rule" means the logic), so for the first client it
 gives wrong results. To overcome that we have to use multithreading; I
 discussed this issue with Shan and he told me that this problem can be
 solved by using multithreading. Could you please advise me how to overcome
 this issue, although you told me not to use multithreading? Suggestions
 would be appreciated.

 I have attached documentation about the Edge Analytics Service.


 Thank you.
 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




 --
 
 Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
 Site: http://people.apache.org/~hemapani/
 Photos: http://www.flickr.com/photos/hemapani/
 Phone: 0772360902




 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Nirmal Fernando
Can we see the code too?

On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com wrote:

 Hi all,

 I am currently working on a fraud detection project. I was able to cluster
 the KDD cup 99 network anomaly detection dataset using the Apache Spark k-means
 algorithm. So far I was able to achieve a 99% accuracy rate from this
 dataset. The steps I have followed during the process are mentioned below.

- Separate the dataset into two parts (normal data and anomaly data)
by filtering the label
- Split each of the two parts as follows
   - normal data
   - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - no use
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - Preprocess the dataset
   - Drop out non-numerical features since k-means can only handle
   numerical values
   - Normalize all the values to the 0-1 range
   - Cluster the 65% of normal data using Apache spark K means and
build the model (15% of both normal and anomaly data were used to tune the
hyper parameters such as k, percentile etc. to get an optimized model)
- Finally evaluate the model using 20% of both normal and anomaly data.

 Method of identifying a fraud as follows,

 - When a new data point comes, get the closest cluster center by using
 the k-means predict function.
 - I have calculated the 98th percentile distance for each cluster. (98 was
 the best value I got by tuning the model with different values)
 - Then I checked whether the distance of the new data point from the given
 cluster center is less than or greater than the 98th percentile of that
 cluster. If it is less than the percentile it is considered as normal
 data. If it is greater than the percentile it is considered as a fraud since
 it is outside the cluster.

 Our next step is to integrate this feature into the ML product and try it out
 with a more realistic dataset. A summary of the results I have obtained using
 the 98th percentile during the process is attached below.


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 Email: as...@wso2.com
 Mobile: +94 716042995 94716042995
 LinkedIn:
 *http://lk.linkedin.com/in/ashenweerathunga
 http://lk.linkedin.com/in/ashenweerathunga*




-- 

Thanks  regards,
Nirmal

Team Lead - WSO2 Machine Learner
Associate Technical Lead - Data Technologies Team, WSO2 Inc.
Mobile: +94715779733
Blog: http://nirmalfdo.blogspot.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Restructure tuned samples

2015-08-25 Thread Nirmal Fernando
Thanks Mano! Also, we need to make all scripts (*.sh files) executable (chmod
+x model-generation.sh and commit). Also, in the README, we need to mention
that the samples only work in a Unix environment.

On Tue, Aug 25, 2015 at 11:27 AM, Manorama Perera manor...@wso2.com wrote:

 Sure. I'll rename the folders.

 On Tue, Aug 25, 2015 at 11:19 AM, Nirmal Fernando nir...@wso2.com wrote:

 Mano,

 I feel 'tuned' part is redundant in the folders inside

 https://github.com/wso2/product-ml/tree/master/modules/samples/tuned

 Shall we refactor?

 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 Manorama Perera
 Software Engineer
 WSO2, Inc.;  http://wso2.com/
 Mobile : +94716436216




-- 

Thanks  regards,
Nirmal

Team Lead - WSO2 Machine Learner
Associate Technical Lead - Data Technologies Team, WSO2 Inc.
Mobile: +94715779733
Blog: http://nirmalfdo.blogspot.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Connectors] Storing connector endpoint in registry

2015-08-25 Thread Bhathiya Jayasekara
Hi Dushan/Malaka,

So, is it not possible to achieve the $subject? Any workaround at least?

Thanks,
Bhathiya

On Mon, Aug 24, 2015 at 4:34 PM, Bhathiya Jayasekara bhath...@wso2.com
wrote:

 Hi Dushan,

 The initial service URL[1] can be different from account to account. Hence
 the requirement.

 Thanks,
 Bhathiya

 On Mon, Aug 24, 2015 at 4:26 PM, Dushan Abeyruwan dus...@wso2.com wrote:

 Hi,
 Why do you need it like that?
 The SF connector we have uses the initial service URL at init [1], and upon
 successful authentication all the other operations other than
 salesforce.init will use the dynamic URL returned from the authenticate request
 (extracted from the response payload) [2].

 init

 <property expression="//ns:loginResponse/ns:result/ns:serverUrl/text()"
 name="salesforce.serviceUrl" scope="operation" type="STRING"
 xmlns:ns="urn:partner.soap.sforce.com" />

 <property name="salesforce.login.done" scope="operation"
 type="STRING" value="true" />


 [1] https://login.salesforce.com/services/Soap/u/27.0
 [2]
 https://github.com/wso2-dev/esb-connectors/blob/master/salesforce/1.0.0/src/main/resources/salesforce/init.xml

 Cheers,
 Dushan

 On Mon, Aug 24, 2015 at 4:12 PM, Bhathiya Jayasekara bhath...@wso2.com
 wrote:

 Hi all,

 Is there a way to store Salesforce endpoint in registry, and refer to
 that in init block of connector?

 Thanks,
 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




 --
 Dushan Abeyruwan | Technical Lead
 Technical Support-Bloomington US
 PMC Member Apache Synpase
 WSO2 Inc. http://wso2.com/
 Blog:*http://www.dushantech.com/ http://www.dushantech.com/*
 Mobile:(001)812-391-7441




 --
 *Bhathiya Jayasekara*
 *Senior Software Engineer,*
 *WSO2 inc., http://wso2.com http://wso2.com*

 *Phone: +94715478185*
 *LinkedIn: http://www.linkedin.com/in/bhathiyaj
 http://www.linkedin.com/in/bhathiyaj*
 *Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
 *Blog: http://movingaheadblog.blogspot.com
 http://movingaheadblog.blogspot.com/*




-- 
*Bhathiya Jayasekara*
*Senior Software Engineer,*
*WSO2 inc., http://wso2.com http://wso2.com*

*Phone: +94715478185*
*LinkedIn: http://www.linkedin.com/in/bhathiyaj
http://www.linkedin.com/in/bhathiyaj*
*Twitter: https://twitter.com/bhathiyax https://twitter.com/bhathiyax*
*Blog: http://movingaheadblog.blogspot.com
http://movingaheadblog.blogspot.com/*
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Ashen Weerathunga
Hi all,

I am currently working on a fraud detection project. I was able to cluster
the KDD cup 99 network anomaly detection dataset using the Apache Spark k-means
algorithm. So far I was able to achieve a 99% accuracy rate from this
dataset. The steps I have followed during the process are mentioned below.

   - Separate the dataset into two parts (normal data and anomaly data) by
   filtering the label
   - Split each of the two parts as follows
      - normal data
         - 65% - to train the model
         - 15% - to optimize the model by adjusting hyper parameters
         - 20% - to evaluate the model
      - anomaly data
         - 65% - no use
         - 15% - to optimize the model by adjusting hyper parameters
         - 20% - to evaluate the model
   - Preprocess the dataset
      - Drop out non-numerical features since k-means can only handle
      numerical values
      - Normalize all the values to the 0-1 range
   - Cluster the 65% of normal data using Apache Spark k-means and build
   the model (15% of both normal and anomaly data were used to tune the hyper
   parameters such as k, percentile etc. to get an optimized model)
   - Finally evaluate the model using 20% of both normal and anomaly data.

Method of identifying a fraud is as follows (a short sketch follows this list):

   - When a new data point comes, get the closest cluster center by using
   the k-means predict function.
   - I have calculated the 98th percentile distance for each cluster. (98 was
   the best value I got by tuning the model with different values)
   - Then I checked whether the distance of the new data point from the given
   cluster center is less than or greater than the 98th percentile of that
   cluster. If it is less than the percentile it is considered as normal
   data. If it is greater than the percentile it is considered as a fraud
   since it is outside the cluster.
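
A minimal sketch of that check with the Spark MLlib Java API (the trained
model, the per-cluster 98th-percentile thresholds and the class/method names
below are assumptions for illustration):

   import org.apache.spark.mllib.clustering.KMeansModel;
   import org.apache.spark.mllib.linalg.Vector;
   import org.apache.spark.mllib.linalg.Vectors;

   public class PercentileFraudCheck {
       // 'thresholds[i]' holds the precomputed 98th-percentile distance of
       // training points in cluster i from their cluster centre.
       static boolean isFraud(KMeansModel model, double[] thresholds, Vector point) {
           int cluster = model.predict(point);                         // closest cluster centre
           Vector centre = model.clusterCenters()[cluster];
           double distance = Math.sqrt(Vectors.sqdist(point, centre)); // Euclidean distance
           // Outside the 98th-percentile radius of its cluster => flag as fraud.
           return distance > thresholds[cluster];
       }
   }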

Our next step is to integrate this feature into the ML product and try it out
with a more realistic dataset. A summary of the results I have obtained using
the 98th percentile during the process is attached below.

https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

Thanks and Regards,
Ashen
-- 
*Ashen Weerathunga*
Software Engineer - Intern
WSO2 Inc.: http://wso2.com
lean.enterprise.middleware

Email: as...@wso2.com
Mobile: +94 716042995 94716042995
LinkedIn:
*http://lk.linkedin.com/in/ashenweerathunga
http://lk.linkedin.com/in/ashenweerathunga*
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Ashen Weerathunga
Hi all,

This is the source code of the project.
https://github.com/ashensw/Spark-KMeans-fraud-detection

Best Regards,
Ashen

On Tue, Aug 25, 2015 at 2:00 PM, Ashen Weerathunga as...@wso2.com wrote:

 Thanks all for the suggestions,

 There are few assumptions I have made,

- Clusters are uniform
- Fraud data will always be outliers to the normal clusters
- Clusters do not intersect with each other
- I have given the number of iterations as 100, so I assume that 100
iterations will be enough to produce almost stable clusters

 @Maheshakya,

 This dataset consists of a higher amount of anomaly data than normal
 data, but the practical scenario will be the opposite. Because of that, it
 would be more unrealistic if I used that 65% of anomaly data to evaluate the
 model. The amount of normal data I used to build the model is also less
 than that 65% of anomaly data. Yes, since our purpose is to detect
 anomalies, it would be good to try with more anomaly data to evaluate the
 model. Thanks, and I'll try to use it as well.

 Best Regards,

 Ashen

 On Tue, Aug 25, 2015 at 12:35 PM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Is there any particular reason why you are putting aside 65% of anomalous
 data at the evaluation? Since there is an obvious imbalance when the
 numbers of normal and abnormal cases are taken into account, you will get
 greater accuracy at the evaluation because a model tends to produce more
 accurate results for the class with the greater size. But it's not the case
 for the class of smaller size. With fewer records, it won't make
 much impact on the accuracy. Hence IMO, it would be better if you could
 evaluate with more anomalous data,
 i.e. the number of records in each class needs to be roughly equal.

 Best regards

 On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this
 process (uniform clusters etc). It will make the process more clear IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 I am currently working on a fraud detection project. I was able to
 cluster the KDD cup 99 network anomaly detection dataset using the Apache
 Spark k-means algorithm. So far I was able to achieve a 99% accuracy rate
 from this dataset. The steps I have followed during the process are
 mentioned below.

- Separate the dataset into two parts (normal data and anomaly
data) by filtering the label
- Split each of the two parts as follows
   - normal data
   - 65% - to train the model
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - anomaly data
  - 65% - no use
  - 15% - to optimize the model by adjusting hyper parameters
  - 20% - to evaluate the model
   - Preprocess the dataset
   - Drop out non-numerical features since k-means can only handle
   numerical values
   - Normalize all the values to the 0-1 range
   - Cluster the 65% of normal data using Apache spark K means and
build the model (15% of both normal and anomaly data were used to tune 
 the
hyper parameters such as k, percentile etc. to get an optimized model)
- Finally evaluate the model using 20% of both normal and anomaly
data.

 Method of identifying a fraud as follows,

 - When a new data point comes, get the closest cluster center by
 using the k-means predict function.
 - I have calculated the 98th percentile distance for each cluster. (98
 was the best value I got by tuning the model with different values)
 - Then I checked whether the distance of the new data point from the
 given cluster center is less than or greater than the 98th percentile of
 that cluster. If it is less than the percentile it is considered as
 normal data. If it is greater than the percentile it is considered as a
 fraud since it is outside the cluster.

 Our next step is to integrate this feature into the ML product and try
 it out with a more realistic dataset. A summary of the results I have
 obtained using the 98th percentile during the process is attached below.


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 Email: as...@wso2.com
 Mobile: +94 716042995 94716042995
 LinkedIn:
 *http://lk.linkedin.com/in/ashenweerathunga
 http://lk.linkedin.com/in/ashenweerathunga*




 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 *CD Athuraliya*
 Software Engineer
 WSO2, Inc.
 lean . 

[Dev] Few Comments about DAS

2015-08-25 Thread Srinath Perera
Using yesterday's pack

1. Can we make creating a receiver part of the creating/ editing new
streams flow? e.g. by asking what transports to expose and automatically
creating it.

2. In the gadget generation wizard, the x and y axis drop-downs are now not
being populated.

[image: Inline image 1]

3. When you go into Gadget design view, there is no button to come back.

[image: Inline image 2]

Thanks
Srinath

-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] About Edge Analytics Service-Mobile IOT

2015-08-25 Thread Lasantha Fernando
Hi Srinath, Lakini,

Siddhi is thread safe when sending events. You can send events via multiple
threads without any issue.

When changing queries via multiple threads, the execution plan runtime of
the previous query would have to be shut down manually. Looking at the
current code, it seems that an ExecutionPlanRuntime instance would be
created after the query is parsed and that it would be put into a
ConcurrentHashMap with the execution plan name (specified via annotations)
as the key. So if you do not shut down that old runtime, it will still keep
running. But if you shut down the execution plan from the client code, you
should not encounter any issue. @Suho, please correct me if my understanding
is incorrect on this.
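
To make the lifecycle concrete, here is a minimal Siddhi 3.0.0-style sketch of
creating a named execution plan runtime and shutting it down before deploying
a replacement (the plan name, stream definition and query below are
illustrative assumptions, not the actual rules in question):

    import org.wso2.siddhi.core.ExecutionPlanRuntime;
    import org.wso2.siddhi.core.SiddhiManager;
    import org.wso2.siddhi.core.stream.input.InputHandler;

    public class ExecutionPlanLifecycle {
        public static void main(String[] args) throws InterruptedException {
            SiddhiManager siddhiManager = new SiddhiManager();

            String plan = "@Plan:name('client1Plan') "
                    + "define stream sensorStream (value double); "
                    + "@info(name = 'query1') "
                    + "from sensorStream[value > 50] select value insert into alertStream;";

            ExecutionPlanRuntime runtime = siddhiManager.createExecutionPlanRuntime(plan);
            runtime.start();

            InputHandler handler = runtime.getInputHandler("sensorStream");
            handler.send(new Object[]{75.0});

            // Shut the old runtime down before deploying a modified plan;
            // otherwise it stays registered under 'client1Plan' and keeps running.
            runtime.shutdown();
        }
    }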

@Lakini, if you have two execution plans with different names and it still
seems that one query is overriding the other, something is wrong. Can
you share the execution plan syntax you are using for the two queries and
also point to the code? Based on the scenario described above, it seems
there is a conflict on the execution plan names or the output streams of
the execution plans.

Thanks,
Lasantha

On 25 August 2015 at 16:09, Srinath Perera srin...@wso2.com wrote:

 Please point to your code.

 CEP team, please help.

 On Tue, Aug 25, 2015 at 4:00 PM, Lakini Senanayaka lak...@wso2.com
 wrote:

  Srinath,thank you for your prompt reply.

 My client and service run separately in 2 different processes. I have just
 developed 2 simple apps which send random data, and I controlled it using a
 timer. I have set 2 different rules for the 2 apps: one client has a rule
 like "notify when the value passes 50" and the second client has a rule like
 "notify when the value passes 20". When running the two clients at the same
 time, the first client gets notifications based on the 20 threshold, although
 its rule uses the 50 threshold.

 So the SiddhiManager has overridden the 1st client's query with the second
 client's query.

 On Tue, Aug 25, 2015 at 3:36 PM, Srinath Perera srin...@wso2.com wrote:

 Suho, is the Siddhi engine thread safe
 1. to send events
 2. to change queries?

 Lakini, the answer depends on the above.

 Could you explain how two clients override the same query? Are they
 running the same query?

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 3:24 PM, Lakini Senanayaka lak...@wso2.com
 wrote:

 Hi Srinath,

 As a small introduction to my project, it is about the Edge Analytics
 Service, which is an Android service that does edge analytics using Siddhi
 queries. There are 2 types of clients for the service. I have developed a
 Type 1 client which sends its own data to analyse, with user-defined streams,
 a rule and a callback. I'm using the AIDL interface method for the
 client-service connection.

 I have a copy of a working project now. Here I used a single thread to
 develop the service, as you told me in the meeting where we discussed the
 architecture of the Edge Analytics Service. But in my application, when we are
 running 2 or more client apps at the same time, the SiddhiManager in the
 service overrides the rule of the first running client with the rule of the
 second client (here "rule" means the logic), so for the first client it
 gives wrong results. To overcome that we have to use multithreading; I
 discussed this issue with Shan and he told me that this problem can be
 solved by using multithreading. Could you please advise me how to overcome
 this issue, although you told me not to use multithreading? Suggestions
 would be appreciated.

 I have attached documentation about the Edge Analytics Service.


 Thank you.
 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




 --
 
 Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
 Site: http://people.apache.org/~hemapani/
 Photos: http://www.flickr.com/photos/hemapani/
 Phone: 0772360902




 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




 --
 
 Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
 Site: http://people.apache.org/~hemapani/
 Photos: http://www.flickr.com/photos/hemapani/
 Phone: 0772360902

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
*Lasantha Fernando*
Senior Software Engineer - Data Technologies Team
WSO2 Inc. http://wso2.com

email: lasan...@wso2.com
mobile: (+94) 71 5247551
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Few Comments about DAS

2015-08-25 Thread Maninda Edirisooriya
On Tue, Aug 25, 2015 at 3:28 PM, Srinath Perera srin...@wso2.com wrote:

 Using yesterday's pack

 1. Can we make creating a receiver part of the creating/ editing new
 streams flow? e.g. by asking what transports to expose and automatically
 creating it.

 2. In the gadget generation wizard, the x and y axis drop-downs are now not
 being populated.

Yes, this issue was observed in the latest packs. I am looking into it.


 [image: Inline image 1]

 3. When you go into Gadget design view, there is no button to come back.

 [image: Inline image 2]

 Thanks
 Srinath

 --
 
 Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
 Site: http://people.apache.org/~hemapani/
 Photos: http://www.flickr.com/photos/hemapani/
 Phone: 0772360902

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev


___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] jQuery conflict in Wrangler Integration

2015-08-25 Thread Danula Eranjith
We cannot use jquery-1.4.2 alone, as Bootstrap in ML requires 1.9 or higher.
noConflict() did not help either.
It seems the only option is to load both versions and then resolve the
issues with Wrangler.

On Tue, Aug 25, 2015 at 10:36 AM, CD Athuraliya chathur...@wso2.com wrote:



 On Mon, Aug 24, 2015 at 6:49 PM, Nirmal Fernando nir...@wso2.com wrote:

 @Danula please try the suggested approaches.

 On Mon, Aug 24, 2015 at 5:35 PM, Supun Sethunga sup...@wso2.com wrote:

 Can't we use only jquery-1.4.2 for that particular page (the Wrangler UI
 page) as a temporary workaround? jquery-1.4.2 should support backward
 compatibility, right?


 We might need to check for compatibility to make sure other components
 work properly.


 Thanks,
 Supun

 On Mon, Aug 24, 2015 at 7:37 AM, Tanya Madurapperuma ta...@wso2.com
 wrote:

 Ideally $.noConflict(); should resolve the issue. [1]

 [1] https://api.jquery.com/jquery.noconflict/

 On Mon, Aug 24, 2015 at 2:43 PM, Nirmal Fernando nir...@wso2.com
 wrote:

 Manu / Tanya,

 Any thoughts?

 On Sun, Aug 23, 2015 at 7:19 PM, Danula Eranjith hmdanu...@gmail.com
 wrote:

 Hi,

 Tried using jQuery Migrate, but it doesn't solve the issue.

 Regards,
 Danula

 On Sun, Aug 23, 2015 at 1:01 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi Danula,

 Can you try using jQuery Migrate [1] plugin? Please make sure you
 place them in correct order.

 [1] http://blog.jquery.com/2013/05/08/jquery-migrate-1-2-1-released/

 Regards,
 CD

 On Sun, Aug 23, 2015 at 11:59 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 @CD any thoughts?

 On Sun, Aug 23, 2015 at 10:34 AM, Danula Eranjith 
 hmdanu...@gmail.com wrote:

 Hi,

 I am trying to integrate a data cleaning tool into ML, and I have two
 jQuery versions, as ML uses 1.11 and Wrangler uses 1.4:

 jquery-1.4.2.min.js
 jquery-1.11.1.min.js

 I have tried using .noConflict() method but could not resolve the
 issue.
 Any suggestions on how to implement this?

 Regards,
 Danula







 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 *CD Athuraliya*
 Software Engineer
 WSO2, Inc.
 lean . enterprise . middleware
 Mobile: +94 716288847
 LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
 https://twitter.com/cdathuraliya | Blog
 http://cdathuraliya.tumblr.com/





 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 Tanya Madurapperuma

 Senior Software Engineer,
 WSO2 Inc. : wso2.com
 Mobile : +94718184439
 Blog : http://tanyamadurapperuma.blogspot.com




 --
 *Supun Sethunga*
 Software Engineer
 WSO2, Inc.
 http://wso2.com/
 lean | enterprise | middleware
 Mobile : +94 716546324




 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 *CD Athuraliya*
 Software Engineer
 WSO2, Inc.
 lean . enterprise . middleware
 Mobile: +94 716288847
 LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
 https://twitter.com/cdathuraliya | Blog
 http://cdathuraliya.tumblr.com/

___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Using multiple manager nodes after enabling deployment synchronization

2015-08-25 Thread Ramindu De Silva
Looping Suho

On Tue, Aug 25, 2015 at 11:00 AM, Ramindu De Silva ramin...@wso2.com
wrote:

 Hi all,

 As per the offline chat with KasunG from the Carbon team, there is no
 particular way to make a CEP node passive or active. What they suggested
 was to handle it from the CEP side: check whether the node should be given
 rights to commit, and if not, disable editing and creating new artefacts.
 Another option (not tested) was to disable and enable the depsync
 configuration in the registry according to the CEP node's state, since the
 registry is checked before a commit to see whether depsync is enabled.

 Best Regards,

 On Mon, Aug 24, 2015 at 4:35 PM, Ramindu De Silva ramin...@wso2.com
 wrote:

 Hi all,

 When two CEP manager nodes start in HA mode, only one node should be able
 to commit at a given time. The documentation [1] says it is possible to
 have active and passive nodes. Is there a way to specify a node as active
 or passive?

 Best Regards,

 1.
 https://docs.wso2.com/display/CLUSTER420/SVN-Based+Deployment+Synchronizer+for+Carbon+4.3.0+and+4.4.0-Based+Products

 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




-- 
*Ramindu De Silva*
Software Engineer
WSO2 Inc.: http://wso2.com
lean.enterprise.middleware

email: ramin...@wso2.com sanj...@wso2.com
mob: +94 772339350
mob: +94 782731766
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Moving app-manager branch from product-es repo to carbon-store repo

2015-08-25 Thread Rushmin Fernando
Hi Udara,

Thanks for merging !

This should be because there are 3 commits in the carbon-store master branch
since we forked the branch.

I will let you know about the stability of this branch after a few test
rounds.

Thanks
Rushmin

On Tue, Aug 25, 2015 at 3:38 PM, Udara Rathnayake uda...@wso2.com wrote:

 Hi Rushmin,

 I have merged the PR. While browsing the app-manager branch I noticed the
 following: "This branch is 1270 commits ahead, 3 commits behind master."
 Is the "3 commits behind master" part correct? It might be due to the
 fact that we have preserved git history from both the master of carbon-store
 and the app-manager branch of product-es.

 Anyway if all features in this branch work for App Manager as expected we
 should not worry about that. Let's remove the app-manager branch from
 product-es once everything looks stable.

 Regards,
 UdaraR

 On Tue, Aug 25, 2015 at 3:30 PM, Rushmin Fernando rush...@wso2.com
 wrote:


 Hi Udara and Manu,

 As per our discussion on $subject, I moved the code and fixed the Maven
 group IDs.

 Please review and merge the pull request [1]

 Thanks
 Rushmin


 [1] - https://github.com/wso2/carbon-store/pull/168


 --
 *Rushmin Fernando*
 *Technical Lead*

 WSO2 Inc. http://wso2.com/ - Lean . Enterprise . Middleware

 email : rush...@wso2.com
 mobile : +94772310855






-- 
*Rushmin Fernando*
*Technical Lead*

WSO2 Inc. http://wso2.com/ - Lean . Enterprise . Middleware

email : rush...@wso2.com
mobile : +94772310855
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Spark K-means clustering on KDD cup 99 dataset

2015-08-25 Thread Ashen Weerathunga
Okay sure.

On Tue, Aug 25, 2015 at 3:55 PM, Nirmal Fernando nir...@wso2.com wrote:

 Sure. @Ashen, can you please arrange one?

 On Tue, Aug 25, 2015 at 2:35 PM, Srinath Perera srin...@wso2.com wrote:

 Nirmal, Seshika, shall we do a code review? This code should go into ML
 after the UI part is done.

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 2:20 PM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 This is the source code of the project.
 https://github.com/ashensw/Spark-KMeans-fraud-detection

 Best Regards,
 Ashen

 On Tue, Aug 25, 2015 at 2:00 PM, Ashen Weerathunga as...@wso2.com
 wrote:

 Thanks all for the suggestions,

 There are a few assumptions I have made:

   - Clusters are uniform
   - Fraud data will always be outliers to the normal clusters
   - Clusters do not intersect with each other
   - The number of iterations is 100, which I assume is enough to produce
     almost stable clusters

 @Maheshakya,

 This dataset consists of a larger amount of anomaly data than normal data,
 but the practical scenario will be the opposite. Because of that, it would
 be more unrealistic if I used that 65% of anomaly data to evaluate the
 model; the amount of normal data I used to build the model is also smaller
 than that 65% of anomaly data. But yes, since our purpose is to detect
 anomalies, it would be good to evaluate the model with more anomaly data as
 well. Thanks, I will try to use that data too.

 Best Regards,

 Ashen

 On Tue, Aug 25, 2015 at 12:35 PM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Is there any particular reason why you are putting aside 65% of the
 anomalous data at evaluation time? Since there is an obvious imbalance when
 the numbers of normal and abnormal cases are taken into account, you will
 get greater accuracy at evaluation, because a model tends to produce more
 accurate results for the class with the greater size. But that is not the
 case for the smaller class, and with a small number of records it won't
 make much impact on the accuracy. Hence IMO it would be better if you
 could evaluate with more anomalous data, i.e. the number of records of
 each class needs to be roughly equal.

 Best regards

 On Tue, Aug 25, 2015 at 12:05 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi Ashen,

 It would be better if you can add the assumptions you make in this
 process (uniform clusters etc). It will make the process more clear IMO.

 Regards,
 CD

 On Tue, Aug 25, 2015 at 11:39 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 Can we see the code too?

 On Tue, Aug 25, 2015 at 11:36 AM, Ashen Weerathunga as...@wso2.com
 wrote:

 Hi all,

 I am currently working on a fraud detection project. I was able to cluster
 the KDD Cup 99 network anomaly detection dataset using the Apache Spark
 k-means algorithm. So far I have been able to achieve a 99% accuracy rate
 on this dataset. The steps I followed during the process are mentioned
 below.

 - Separate the dataset into two parts (normal data and anomaly data) by
   filtering on the label
 - Split each of the two parts as follows:
    - normal data
       - 65% - to train the model
       - 15% - to optimize the model by adjusting hyperparameters
       - 20% - to evaluate the model
    - anomaly data
       - 65% - not used
       - 15% - to optimize the model by adjusting hyperparameters
       - 20% - to evaluate the model
 - Preprocess the dataset
    - Drop non-numerical features, since k-means can only handle numerical
      values
    - Normalize all the values to the 0-1 range
 - Cluster the 65% of normal data using Apache Spark k-means and build the
   model (15% of both the normal and anomaly data was used to tune
   hyperparameters such as k, percentile etc. to get an optimized model)
 - Finally, evaluate the model using 20% of both the normal and anomaly
   data.

 The method of identifying a fraud is as follows (a rough code sketch is
 given below):

 - When a new data point comes in, get the closest cluster center by using
   the k-means predict function.
 - I have calculated the 98th percentile distance for each cluster. (98 was
   the best value I got by tuning the model with different values.)
 - Then I check whether the distance of the new data point from that cluster
   center is less than or greater than the 98th percentile of that cluster.
   If it is less than the percentile, the point is considered normal data.
   If it is greater than the percentile, it is considered a fraud, since it
   lies outside the cluster.
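
 For illustration only, a rough self-contained sketch of that flow, assuming
 the Spark MLlib KMeans API (KMeans.train / KMeansModel.predict). The file
 name, number of clusters and the sample point are placeholders, not values
 from the actual project (the real code is in the repository linked above):

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class KMeansFraudSketch {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("kmeans-fraud-sketch").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Assume the 65% "normal" training split is already numeric and normalised to [0, 1].
        JavaRDD<Vector> normalData = sc.textFile("normal-train.csv").map(line -> {
            String[] parts = line.split(",");
            double[] values = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                values[i] = Double.parseDouble(parts[i]);
            }
            return Vectors.dense(values);
        });

        int k = 4;             // illustrative; tuned with the 15% validation split
        int iterations = 100;
        KMeansModel model = KMeans.train(normalData.rdd(), k, iterations);

        // 98th percentile distance per cluster, computed from the training points.
        List<Vector> trainPoints = normalData.collect();  // fine for a sketch, not for large data
        Map<Integer, Double> thresholds = new HashMap<>();
        for (int c = 0; c < k; c++) {
            List<Double> distances = new ArrayList<>();
            for (Vector p : trainPoints) {
                if (model.predict(p) == c) {
                    distances.add(distance(p, model.clusterCenters()[c]));
                }
            }
            if (distances.isEmpty()) {
                thresholds.put(c, 0.0);
                continue;
            }
            Collections.sort(distances);
            int index = (int) Math.ceil(0.98 * distances.size()) - 1;
            thresholds.put(c, distances.get(Math.max(index, 0)));
        }

        // Scoring a new point: outside the 98th percentile of its closest cluster => fraud.
        Vector newPoint = Vectors.dense(0.1, 0.7, 0.3);   // illustrative values only
        int closest = model.predict(newPoint);
        boolean isFraud = distance(newPoint, model.clusterCenters()[closest]) > thresholds.get(closest);
        System.out.println(isFraud ? "fraud" : "normal");

        sc.stop();
    }

    // Plain Euclidean distance between two MLlib vectors.
    private static double distance(Vector a, Vector b) {
        double[] x = a.toArray();
        double[] y = b.toArray();
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            double diff = x[i] - y[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum);
    }
}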

 Our next step is to integrate this feature into the ML product and try it
 out with a more realistic dataset. A summary of the results I have obtained
 using the 98th percentile during the process is attached (see the link
 below).


 https://docs.google.com/a/wso2.com/spreadsheets/d/1E5fXk9CM31QEkyFCIEongh8KAa6jPeoY7OM3HraGPd4/edit?usp=sharing

 Thanks and Regards,
 Ashen
 --
 *Ashen Weerathunga*
 Software Engineer - Intern
 WSO2 Inc.: http://wso2.com
 

[Dev] [ML] Wrangler Integration

2015-08-25 Thread Danula Eranjith
Hi all,

Can you suggest where I should be ideally integrating these files[1]
https://github.com/danula/wso2-ml-wrangler-integration/tree/master/src in
ML.

[1] - https://github.com/danula/wso2-ml-wrangler-integration/tree/master/src

Thanks,
Danula
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] About Edge Analytics Service-Mobile IOT

2015-08-25 Thread Lakini Senanayaka
Hi all,

As per request I have shared my source code.You can find my sample project
[1] from this link and I have point out the place[2][3][4] it will be
easier to you to understand.

[1] - sample edgeAnalyticsService
https://github.com/Lakini/EdgeAnalyticServiceSample/
[2] - EdgeAnalyticsService
https://github.com/Lakini/EdgeAnalyticServiceSample/blob/master/EdgeAnalyticsService/app/src/main/java/com/example/lakini/edgeanalyticsservice/CEP.java#L37
[3] - ClientApp1
https://github.com/Lakini/EdgeAnalyticServiceSample/blob/master/ClientApp/app/src/main/java/com/example/lakini/edgeanalyticsservice/MainActivity.java#L58
[4] -ClientApp2
https://github.com/Lakini/EdgeAnalyticServiceSample/blob/master/Second/app/src/main/java/com/example/lakini/edgeanalyticsservice/MainActivity.java#L58

Thank you.

On Tue, Aug 25, 2015 at 4:30 PM, Lasantha Fernando lasan...@wso2.com
wrote:

 Hi Srinath, Lakini,

 Siddhi is thread safe when sending events. You can send events via
 multiple threads without any issue.

 When changing queries via multiple threads, the execution plan runtime of
 the previous query would have to be shutdown manually. Looking at the
 current code, it seems that an ExecutionPlanRuntime instance would be
 created after the query is parsed and that would be put to a
 ConcurrentHashMap with the execution plan name (specified via annotations)
 as the key. So if you do not shutdown that old runtime, it will still keep
 running. But if you shutdown the execution plan from the client code, you
 should not encounter any issue. @Suho, please correct if my understanding
 is incorrect on this.

 @Lakini, if you are having two execution plans with different names and
 still it seems that one query is overriding the other, that is wrong. Can
 you share the execution plan syntax you are using for the two queries and
 also point to the code? Based on the scenario described above, it seems
 there is a conflict on the execution plan names or the output streams of
 the execution plans.
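
 For illustration, a minimal standalone sketch (not the actual service code)
 of two independently named execution plans deployed on one SiddhiManager,
 assuming the Siddhi 3.0.0 API described above. The plan names, stream
 definition and filter values are only examples:

import org.wso2.siddhi.core.ExecutionPlanRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.event.Event;
import org.wso2.siddhi.core.query.output.callback.QueryCallback;
import org.wso2.siddhi.core.stream.input.InputHandler;

public class TwoPlansSketch {
    public static void main(String[] args) throws InterruptedException {
        SiddhiManager siddhiManager = new SiddhiManager();

        // Client 1: alert when value > 50. The @Plan:name must be unique;
        // a second plan deployed under the same name would replace this one.
        String planOne = "@Plan:name('ClientOnePlan') "
                + "define stream dataStream (value int); "
                + "@info(name = 'rule1') "
                + "from dataStream[value > 50] select value insert into alertStream;";

        // Client 2: alert when value > 20, under a different plan name.
        String planTwo = "@Plan:name('ClientTwoPlan') "
                + "define stream dataStream (value int); "
                + "@info(name = 'rule2') "
                + "from dataStream[value > 20] select value insert into alertStream;";

        ExecutionPlanRuntime runtimeOne = siddhiManager.createExecutionPlanRuntime(planOne);
        ExecutionPlanRuntime runtimeTwo = siddhiManager.createExecutionPlanRuntime(planTwo);

        runtimeOne.addCallback("rule1", new QueryCallback() {
            @Override
            public void receive(long timeStamp, Event[] inEvents, Event[] removeEvents) {
                System.out.println("Client 1 alert (value > 50)");
            }
        });
        runtimeTwo.addCallback("rule2", new QueryCallback() {
            @Override
            public void receive(long timeStamp, Event[] inEvents, Event[] removeEvents) {
                System.out.println("Client 2 alert (value > 20)");
            }
        });

        runtimeOne.start();
        runtimeTwo.start();

        // Each client sends events to its own plan's input handler.
        InputHandler clientOneInput = runtimeOne.getInputHandler("dataStream");
        InputHandler clientTwoInput = runtimeTwo.getInputHandler("dataStream");
        clientOneInput.send(new Object[]{30});   // no alert for client 1
        clientTwoInput.send(new Object[]{30});   // alert for client 2

        // To modify a rule, shut the old runtime down first and then deploy the
        // edited plan; otherwise the old queries keep running in the background.
        runtimeOne.shutdown();
        runtimeTwo.shutdown();
    }
}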

 Thanks,
 Lasantha

 On 25 August 2015 at 16:09, Srinath Perera srin...@wso2.com wrote:

 Please point to your code.

 CEP team, please help.

 On Tue, Aug 25, 2015 at 4:00 PM, Lakini Senanayaka lak...@wso2.com
 wrote:

  Srinath, thank you for your prompt reply.

 My client and service run separately in 2 different processes. I have
 developed 2 simple apps which send random data, controlled using a timer,
 and I have set 2 different rules for the 2 apps: one client has a rule like
 "notify when the value > 50" and the second client has a rule like "notify
 when the value > 20". When the two clients run at the same time, the first
 client gets notifications for value > 20, although its rule is value > 50.

 So the SiddhiManager has overridden the 1st client's query with the second
 client's query.

 On Tue, Aug 25, 2015 at 3:36 PM, Srinath Perera srin...@wso2.com
 wrote:

 Suho, is the Siddhi engine thread safe
 1. to send events?
 2. to change queries?

 Lakini, the answer depends on the above.

 Could you explain how the two clients override the same query? Are they
 running the same query?

 Thanks
 Srinath

 On Tue, Aug 25, 2015 at 3:24 PM, Lakini Senanayaka lak...@wso2.com
 wrote:

 Hi Srinath,

 As a small introduction to my project: it is about the Edge Analytics
 Service, which is an Android service that does edge analytics using Siddhi
 queries. There are 2 types of clients for the service. I have developed the
 Type 1 client, which sends its own data to be analysed with user-defined
 streams, rules and a callback. I'm using the AIDL interface method for the
 client-service connection.

 I have a copy of a working project now. In it I used a single thread to
 develop the service, as you told me in the meeting where we discussed the
 architecture of the Edge Analytics Service. But in my application, when 2 or
 more client apps are running at the same time, the SiddhiManager in the
 service overrides the rule of the first running client with the rule of the
 second client (here "rule" means the logic), so the first client gets wrong
 results. I discussed this issue with Shan, and he told me that it can be
 solved by using multi-threading. Could you please advise me on how to
 overcome this issue, although you told me not to use multi-threading?
 Suggestions would be appreciated.

 I have attached documentation about the Edge Analytics Service.


 Thank you.
 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




 --
 
 Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
 Site: http://people.apache.org/~hemapani/
 Photos: http://www.flickr.com/photos/hemapani/
 Phone: 0772360902




 --
 *Intern-Engineering*
 Lakini S.Senanayaka
 Mobile: +94 712295444
 Email: lak...@wso2.com




 --
 
 Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
 Site: http://people.apache.org/~hemapani/
 Photos: 

Re: [Dev] Using multiple manager nodes after enabling deployment synchronization

2015-08-25 Thread Sriskandarajah Suhothayan
The correct way is to give only one CEP manager node write access, and all
the other nodes read-only access.
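
For reference, the SVN-based depsync section in repository/conf/carbon.xml
looks roughly like the following (the values below are placeholders; see the
clustering documentation linked earlier in this thread). Keeping AutoCommit
set to true only on the one node that is allowed to commit, and false on the
read-only nodes, is one way to realise this:

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <!-- true only on the single manager node that should have write access -->
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/depsync-repo/</SvnUrl>
    <SvnUser>username</SvnUser>
    <SvnPassword>password</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>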

Suho

On Tue, Aug 25, 2015 at 6:14 AM, Ramindu De Silva ramin...@wso2.com wrote:

 Looping Suho

 On Tue, Aug 25, 2015 at 11:00 AM, Ramindu De Silva ramin...@wso2.com
 wrote:

 Hi all,

 As per the offline chat with KasunG from the Carbon team, there is no
 particular way to make a CEP node passive or active. What they suggested
 was to handle it from the CEP side: check whether the node should be given
 rights to commit, and if not, disable editing and creating new artefacts.
 Another option (not tested) was to disable and enable the depsync
 configuration in the registry according to the CEP node's state, since the
 registry is checked before a commit to see whether depsync is enabled.

 Best Regards,

 On Mon, Aug 24, 2015 at 4:35 PM, Ramindu De Silva ramin...@wso2.com
 wrote:

 Hi all,

 When two CEP manager nodes start in HA mode, only one node should be able
 to commit at a given time. The documentation [1] says it is possible to
 have active and passive nodes. Is there a way to specify a node as active
 or passive?

 Best Regards,

 1.
 https://docs.wso2.com/display/CLUSTER420/SVN-Based+Deployment+Synchronizer+for+Carbon+4.3.0+and+4.4.0-Based+Products

 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




-- 

*S. Suhothayan*
Technical Lead  Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* http://wso2.com/*
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/ | twitter:
http://twitter.com/suhothayan | linked-in: http://lk.linkedin.com/in/suhothayan*
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Jagath Sisirakumara Ariyarathne
Hi,

I executed performance tests for basic scenarios with this pack. No issues
observed.

[X] Stable - go ahead and release

Thanks.

On Mon, Aug 24, 2015 at 10:27 PM, Chanaka Fernando chana...@wso2.com
wrote:

 Hi Devs,

 WSO2 ESB 4.9.0 RC1 Release Vote

 This release fixes the following issues:
 https://wso2.org/jira/browse/ESBJAVA-4093?filter=12363

 Please download ESB 490 RC1 and test the functionality and vote. Vote will
 be open for 72 hours or as needed.

 *Source  binary distribution files:*

 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/org/wso2/esb/wso2esb/4.9.0-RC1/

 *Maven staging repository:*
 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/

 *The tag to be voted upon :*
 https://github.com/wso2/product-esb/tree/esb-parent-4.9.0-RC1


 [ ] Broken - do not release (explain why)
 [ ] Stable - go ahead and release


 Thanks and Regards,
 ~ WSO2 ESB Team ~

 --
 --
 Chanaka Fernando
 Senior Technical Lead
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 mobile: +94 773337238
 Blog : http://soatutorials.blogspot.com
 LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
 Twitter:https://twitter.com/chanakaudaya
 Wordpress:http://chanakaudaya.wordpress.com




 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
Jagath Ariyarathne
Technical Lead
WSO2 Inc.  http://wso2.com/
Email: jaga...@wso2.com
Mob  : +94 77 386 7048
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Using multiple manager nodes after enabling deployment synchronization

2015-08-25 Thread Ramindu De Silva
Thanks for the reply, Suho. I will do as suggested.

Best Regards,

On Tue, Aug 25, 2015 at 5:47 PM, Sriskandarajah Suhothayan s...@wso2.com
wrote:

 The correct way is to give only one CEP manager node write access, and all
 the other nodes read-only access.

 Suho

 On Tue, Aug 25, 2015 at 6:14 AM, Ramindu De Silva ramin...@wso2.com
 wrote:

 Looping Suho

 On Tue, Aug 25, 2015 at 11:00 AM, Ramindu De Silva ramin...@wso2.com
 wrote:

 Hi all,

 As per the offline chat with KasunG from the Carbon team, there is no
 particular way to make a CEP node passive or active. What they suggested
 was to handle it from the CEP side: check whether the node should be given
 rights to commit, and if not, disable editing and creating new artefacts.
 Another option (not tested) was to disable and enable the depsync
 configuration in the registry according to the CEP node's state, since the
 registry is checked before a commit to see whether depsync is enabled.

 Best Regards,

 On Mon, Aug 24, 2015 at 4:35 PM, Ramindu De Silva ramin...@wso2.com
 wrote:

 Hi all,

 When two CEP manager nodes start in HA mode, only one node should be able
 to commit at a given time. The documentation [1] says it is possible to
 have active and passive nodes. Is there a way to specify a node as active
 or passive?

 Best Regards,

 1.
 https://docs.wso2.com/display/CLUSTER420/SVN-Based+Deployment+Synchronizer+for+Carbon+4.3.0+and+4.4.0-Based+Products

 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




 --
 *Ramindu De Silva*
 Software Engineer
 WSO2 Inc.: http://wso2.com
 lean.enterprise.middleware

 email: ramin...@wso2.com sanj...@wso2.com
 mob: +94 772339350
 mob: +94 782731766




 --

 *S. Suhothayan*
 Technical Lead  Team Lead of WSO2 Complex Event Processor
 *WSO2 Inc. *http://wso2.com
 * http://wso2.com/*
 lean . enterprise . middleware


 *cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/ | twitter:
 http://twitter.com/suhothayan | linked-in: http://lk.linkedin.com/in/suhothayan*




-- 
*Ramindu De Silva*
Software Engineer
WSO2 Inc.: http://wso2.com
lean.enterprise.middleware

email: ramin...@wso2.com sanj...@wso2.com
mob: +94 772339350
mob: +94 782731766
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] P2 profile generation fails after upgrading

2015-08-25 Thread CD Athuraliya
Hi KasunG,

Kernel version is already 4.4.1.

On Tue, Aug 25, 2015 at 6:15 PM, KasunG Gajasinghe kas...@wso2.com wrote:

 Did u update kernel version to 4.4.1?

 On Tue, Aug 25, 2015 at 6:07 PM, CD Athuraliya chathur...@wso2.com
 wrote:

 Hi all,

 I upgraded the following dependencies to the given versions and now P2
 profile generation fails with the error below.

 <carbon.registry.version>4.4.5</carbon.registry.version>
 <carbon.identity.version>4.5.4</carbon.identity.version>
 <carbon.deployment.version>4.5.1</carbon.deployment.version>
 <carbon.multitenancy.version>4.4.3</carbon.multitenancy.version>


 Installation failed.
 Cannot complete the install because one or more required items could not
 be found.
  Software being installed: WSO2 Carbon - CXF Runtime Environment 4.5.1
 (org.wso2.carbon.as.runtimes.cxf.feature.group 4.5.1)
  Missing requirement: org.wso2.carbon.identity.sso.agent 4.5.4
 (org.wso2.carbon.identity.sso.agent 4.5.4) requires 'package
 com.google.gson [2.2.4,2.3.0)' but it could not be found
  Cannot satisfy dependency:
   From: WSO2 Carbon - CXF Runtime Environment 4.5.1
 (org.wso2.carbon.as.runtimes.cxf.feature.group 4.5.1)
   To: org.wso2.carbon.webapp.mgt.server.feature.group [4.5.1,4.6.0)
  Cannot satisfy dependency:
   From: WSO2 Carbon - Webapp Management Core Feature 4.5.1
 (org.wso2.carbon.webapp.mgt.server.feature.group 4.5.1)
   To: org.wso2.carbon.identity.sso.agent [4.5.4]
 Application failed, log file location:
 /home/cdathuraliya/.m2/repository/org/eclipse/tycho/tycho-p2-runtime/0.13.0/eclipse/configuration/1440490557905.log

 Any idea why the dependency fails? Any help would be much appreciated.

 Thanks,
 CD

 --
 *CD Athuraliya*
 Software Engineer
 WSO2, Inc.
 lean . enterprise . middleware
 Mobile: +94 716288847
 LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
 https://twitter.com/cdathuraliya | Blog
 http://cdathuraliya.tumblr.com/

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --

 *Kasun Gajasinghe*Senior Software Engineer, WSO2 Inc.
 email: kasung AT spamfree wso2.com
 linked-in: http://lk.linkedin.com/in/gajasinghe
 blog: http://kasunbg.org






-- 
*CD Athuraliya*
Software Engineer
WSO2, Inc.
lean . enterprise . middleware
Mobile: +94 716288847
LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
https://twitter.com/cdathuraliya | Blog http://cdathuraliya.tumblr.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [ML] P2 profile generation fails after upgrading

2015-08-25 Thread CD Athuraliya
Hi all,

I upgraded the following dependencies to the given versions and now P2
profile generation fails with the error below.

<carbon.registry.version>4.4.5</carbon.registry.version>
<carbon.identity.version>4.5.4</carbon.identity.version>
<carbon.deployment.version>4.5.1</carbon.deployment.version>
<carbon.multitenancy.version>4.4.3</carbon.multitenancy.version>


Installation failed.
Cannot complete the install because one or more required items could not be
found.
 Software being installed: WSO2 Carbon - CXF Runtime Environment 4.5.1
(org.wso2.carbon.as.runtimes.cxf.feature.group 4.5.1)
 Missing requirement: org.wso2.carbon.identity.sso.agent 4.5.4
(org.wso2.carbon.identity.sso.agent 4.5.4) requires 'package
com.google.gson [2.2.4,2.3.0)' but it could not be found
 Cannot satisfy dependency:
  From: WSO2 Carbon - CXF Runtime Environment 4.5.1
(org.wso2.carbon.as.runtimes.cxf.feature.group 4.5.1)
  To: org.wso2.carbon.webapp.mgt.server.feature.group [4.5.1,4.6.0)
 Cannot satisfy dependency:
  From: WSO2 Carbon - Webapp Management Core Feature 4.5.1
(org.wso2.carbon.webapp.mgt.server.feature.group 4.5.1)
  To: org.wso2.carbon.identity.sso.agent [4.5.4]
Application failed, log file location:
/home/cdathuraliya/.m2/repository/org/eclipse/tycho/tycho-p2-runtime/0.13.0/eclipse/configuration/1440490557905.log

Any idea why the dependency fails? Any help would be much appreciated.

Thanks,
CD

-- 
*CD Athuraliya*
Software Engineer
WSO2, Inc.
lean . enterprise . middleware
Mobile: +94 716288847
LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
https://twitter.com/cdathuraliya | Blog http://cdathuraliya.tumblr.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Kevin Ratnasekera
Hi all,

I have tested HTTP inbound use cases, dispatching logic for proxies, APIs and
defined sequences in both super-tenant and tenant modes, and also Kafka
inbound endpoint basic consumer use cases. No issues found.

[X] Stable - go ahead and release

Thanks.

On Tue, Aug 25, 2015 at 6:01 PM, Jagath Sisirakumara Ariyarathne 
jaga...@wso2.com wrote:

 Hi,

 I executed performance tests for basic scenarios with this pack. No issues
 observed.

 [X] Stable - go ahead and release

 Thanks.

 On Mon, Aug 24, 2015 at 10:27 PM, Chanaka Fernando chana...@wso2.com
 wrote:

 Hi Devs,

 WSO2 ESB 4.9.0 RC1 Release Vote

 This release fixes the following issues:
 https://wso2.org/jira/browse/ESBJAVA-4093?filter=12363

 Please download ESB 490 RC1 and test the functionality and vote. Vote
 will be open for 72 hours or as needed.

 *Source  binary distribution files:*

 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/org/wso2/esb/wso2esb/4.9.0-RC1/

 *Maven staging repository:*
 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/

 *The tag to be voted upon :*
 https://github.com/wso2/product-esb/tree/esb-parent-4.9.0-RC1


 [ ] Broken - do not release (explain why)
 [ ] Stable - go ahead and release


 Thanks and Regards,
 ~ WSO2 ESB Team ~

 --
 --
 Chanaka Fernando
 Senior Technical Lead
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 mobile: +94 773337238
 Blog : http://soatutorials.blogspot.com
 LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
 Twitter:https://twitter.com/chanakaudaya
 Wordpress:http://chanakaudaya.wordpress.com




 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Jagath Ariyarathne
 Technical Lead
 WSO2 Inc.  http://wso2.com/
 Email: jaga...@wso2.com
 Mob  : +94 77 386 7048


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev


___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] P2 profile generation fails after upgrading

2015-08-25 Thread KasunG Gajasinghe
Did u update kernel version to 4.4.1?

On Tue, Aug 25, 2015 at 6:07 PM, CD Athuraliya chathur...@wso2.com wrote:

 Hi all,

 I upgraded the following dependencies to the given versions and now P2
 profile generation fails with the error below.

 <carbon.registry.version>4.4.5</carbon.registry.version>
 <carbon.identity.version>4.5.4</carbon.identity.version>
 <carbon.deployment.version>4.5.1</carbon.deployment.version>
 <carbon.multitenancy.version>4.4.3</carbon.multitenancy.version>


 Installation failed.
 Cannot complete the install because one or more required items could not
 be found.
  Software being installed: WSO2 Carbon - CXF Runtime Environment 4.5.1
 (org.wso2.carbon.as.runtimes.cxf.feature.group 4.5.1)
  Missing requirement: org.wso2.carbon.identity.sso.agent 4.5.4
 (org.wso2.carbon.identity.sso.agent 4.5.4) requires 'package
 com.google.gson [2.2.4,2.3.0)' but it could not be found
  Cannot satisfy dependency:
   From: WSO2 Carbon - CXF Runtime Environment 4.5.1
 (org.wso2.carbon.as.runtimes.cxf.feature.group 4.5.1)
   To: org.wso2.carbon.webapp.mgt.server.feature.group [4.5.1,4.6.0)
  Cannot satisfy dependency:
   From: WSO2 Carbon - Webapp Management Core Feature 4.5.1
 (org.wso2.carbon.webapp.mgt.server.feature.group 4.5.1)
   To: org.wso2.carbon.identity.sso.agent [4.5.4]
 Application failed, log file location:
 /home/cdathuraliya/.m2/repository/org/eclipse/tycho/tycho-p2-runtime/0.13.0/eclipse/configuration/1440490557905.log

 Any idea why the dependency fails? Any help would be much appreciated.

 Thanks,
 CD

 --
 *CD Athuraliya*
 Software Engineer
 WSO2, Inc.
 lean . enterprise . middleware
 Mobile: +94 716288847
 LinkedIn http://lk.linkedin.com/in/cdathuraliya | Twitter
 https://twitter.com/cdathuraliya | Blog
 http://cdathuraliya.tumblr.com/

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 

*Kasun Gajasinghe*Senior Software Engineer, WSO2 Inc.
email: kasung AT spamfree wso2.com
linked-in: http://lk.linkedin.com/in/gajasinghe
blog: http://kasunbg.org
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 ESB 4.9.0 RC1

2015-08-25 Thread Prabath Ariyarathna
Hi all,

I have tested basic security scenarios on Java 7 and Java 8. No issues
found.
[X] Stable - go ahead and release.

Thanks.

On Tue, Aug 25, 2015 at 9:20 PM, Viraj Senevirathne vir...@wso2.com wrote:

 Hi all,

 I have tested VFS inbound and transport use cases for file, ftp and sftp
 protocols. No issues found.
 [X] Stable - go ahead and release

 Thanks.

 On Tue, Aug 25, 2015 at 8:28 PM, Nadeeshaan Gunasinghe 
 nadeesh...@wso2.com wrote:

 Hi,

 I tested JMS use cases and MSMP fail over use cases. No issues found.

 [X] Stable - go ahead and release

 Regards.


 *Nadeeshaan Gunasinghe*
 Software Engineer, WSO2 Inc. http://wso2.com
 +94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe
 http://www.facebook.com/nadeeshaan.gunasinghe
 http://lk.linkedin.com/in/nadeeshaan  http://twitter.com/Nadeeshaan
 http://nadeeshaan.blogspot.com/

 On Tue, Aug 25, 2015 at 6:01 PM, Jagath Sisirakumara Ariyarathne 
 jaga...@wso2.com wrote:

 Hi,

 I executed performance tests for basic scenarios with this pack. No
 issues observed.

 [X] Stable - go ahead and release

 Thanks.

 On Mon, Aug 24, 2015 at 10:27 PM, Chanaka Fernando chana...@wso2.com
 wrote:

 Hi Devs,

 WSO2 ESB 4.9.0 RC1 Release Vote

 This release fixes the following issues:
 https://wso2.org/jira/browse/ESBJAVA-4093?filter=12363

 Please download ESB 490 RC1 and test the functionality and vote. Vote
 will be open for 72 hours or as needed.

 *Source  binary distribution files:*

 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/org/wso2/esb/wso2esb/4.9.0-RC1/

 *Maven staging repository:*
 http://maven.wso2.org/nexus/content/repositories/orgwso2esb-051/

 *The tag to be voted upon :*
 https://github.com/wso2/product-esb/tree/esb-parent-4.9.0-RC1


 [ ] Broken - do not release (explain why)
 [ ] Stable - go ahead and release


 Thanks and Regards,
 ~ WSO2 ESB Team ~

 --
 --
 Chanaka Fernando
 Senior Technical Lead
 WSO2, Inc.; http://wso2.com
 lean.enterprise.middleware

 mobile: +94 773337238
 Blog : http://soatutorials.blogspot.com
 LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
 Twitter:https://twitter.com/chanakaudaya
 Wordpress:http://chanakaudaya.wordpress.com




 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Jagath Ariyarathne
 Technical Lead
 WSO2 Inc.  http://wso2.com/
 Email: jaga...@wso2.com
 Mob  : +94 77 386 7048


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev



 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 Viraj Senevirathne
 Software Engineer; WSO2, Inc.

 Mobile : +94 71 818 4742
 Email : vir...@wso2.com

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 

*Prabath Ariyarathna*

*Associate Technical Lead*

*WSO2, Inc. *

*lean . enterprise . middleware *


*Email: prabat...@wso2.com prabat...@wso2.com*

*Blog: http://prabu-lk.blogspot.com http://prabu-lk.blogspot.com*

*Flicker : https://www.flickr.com/photos/47759189@N08
https://www.flickr.com/photos/47759189@N08*

*Mobile: +94 77 699 4730 *
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev