Re: [Dev] [VOTE] Release of WSO2 Stream Processor 4.3.0 RC3

2018-09-17 Thread Grainier Perera
Tested Distributed Deployment. No blockers found.

[+] Stable - Go ahead and release.

Thanks,

On Sat, Sep 15, 2018 at 1:05 AM Dilini Muthumala  wrote:

> Hi all,
>
>
> The WSO2 Stream Processor team is pleased to announce the third
> release candidate of WSO2 Stream Processor 4.3.0.
>
>
> WSO2 Stream Processor is an open-source embodiment of the WSO2 Analytics
> platform; its real-time, incremental, and intelligent data processing
> capabilities let digital businesses create actionable business insights and
> data products.
>
>
> Please find the improvements and fixes related to this release:
>
> - siddhi
> <https://github.com/wso2/siddhi/issues?utf8=%E2%9C%93&q=is%3Aissue+closed%3A2018-06-20..2018-09-15>
>
> - carbon-analytics-common
> <https://github.com/wso2/carbon-analytics-common/issues?utf8=%E2%9C%93&q=is%3Aissue+closed%3A2018-06-20..2018-09-15>
>
> - carbon-analytics
> <https://github.com/wso2/carbon-analytics/issues?utf8=%E2%9C%93&q=is%3Aissue+closed%3A2018-06-20..2018-09-15>
>
> - carbon-dashboards
> <https://github.com/wso2/carbon-dashboards/issues?utf8=%E2%9C%93&q=is%3Aissue+closed%3A2018-06-20..2018-09-15>
>
> - analytics-solutions
> <https://github.com/wso2/analytics-solutions/issues?utf8=%E2%9C%93&q=is%3Aissue+closed%3A2018-06-20..2018-09-15>
>
> - product-sp
> <https://github.com/wso2/product-sp/issues?utf8=%E2%9C%93&q=is%3Aissue+closed%3A2018-06-20..2018-09-15>
>
>
> You can download the product distribution from:
> https://github.com/wso2/product-sp/releases/download/v4.3.0-rc3/wso2sp-4.3.0-rc3.zip
>
>
> The tag to be voted upon:
> https://github.com/wso2/product-sp/releases/tag/v4.3.0-rc3
>
>
> Please download, test the product and vote.
>
>
> [+] Stable - go ahead and release
>
> [-] Broken - do not release (explain why)
>
>
> Thanks,
>
> - WSO2 Stream Processor Team -
>


-- 
*Grainier Perera* | Associate Technical Lead | WSO2 Inc.
(m) +94 716 122 384 | (w) +1 323 366 5083 | (e) grain...@wso2.com
GET INTEGRATION AGILE
Integration Agility for Digitally Driven Business
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS][HA] Reason for using a receiver queue in HA mode

2018-01-31 Thread Grainier Perera
InputEventDispatcher sends the event to the callback immediately, while
QueueInputEventDispatcher queues the events first; an internal worker
(QueueInputEventDispatcherWorker) then sends the queued events to the
callback. The reason for accumulating events in a queue in HA mode is event
syncing: if "event duplicated in cluster" is set to false, this queue is
used to sync events among the other nodes in the HA deployment.

Regards,
Grainier

On Wed, Jan 31, 2018 at 6:31 AM, Nirmal Fernando <nir...@wso2.com> wrote:

> Hi All,
>
> Can any of you remember the reason for using a receiver queue in HA mode?
>
> if (mode == Mode.HA) {
>     HAConfiguration haConfiguration = EventReceiverServiceValueHolder.getEventManagementService()
>             .getManagementModeInfo().getHaConfiguration();
>     Lock readLock = EventReceiverServiceValueHolder.getCarbonEventReceiverManagementService().getReadLock();
>     inputEventDispatcher = new QueueInputEventDispatcher(tenantId,
>             EventManagementUtil.constructEventSyncId(tenantId,
>                     eventReceiverConfiguration.getEventReceiverName(), Manager.ManagerType.Receiver),
>             readLock, exportedStreamDefinition,
>             haConfiguration.getEventSyncReceiverMaxQueueSizeInMb(),
>             haConfiguration.getEventSyncReceiverQueueSize());
>     inputEventDispatcher.setSendToOther(!isEventDuplicatedInCluster);
>     EventReceiverServiceValueHolder.getEventManagementService()
>             .registerEventSync((EventSync) inputEventDispatcher, Manager.ManagerType.Receiver);
> } else {
>     inputEventDispatcher = new InputEventDispatcher();
> }
>
>
> --
>
> Thanks & regards,
> Nirmal
>
> Technical Lead, WSO2 Inc.
> Mobile: +94715779733 <+94%2071%20577%209733>
> Blog: http://nirmalfdo.blogspot.com/
>
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Stream Processor Editor - Removing .siddhi file extension from showing on the ui

2017-12-13 Thread Grainier Perera
 tab section.
>>>>>>>
>>>>>>> [image: Inline image 1]
>>>>>>>
>>>>>>> We can show the .siddhi extension on workspace tree and other file
>>>>>>> open/import etc. modals.
>>>>>>>
>>>>>>> WDYT?
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Eranga
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Dec 7, 2017 at 8:48 AM, Damith Wickramasinghe <
>>>>>>> dami...@wso2.com> wrote:
>>>>>>>
>>>>>>>> Hi Eranga/ All,
>>>>>>>>
>>>>>>>> In the stream processor we support only files with the .siddhi
>>>>>>>> extension. Even though that is the case, IMO the user needs to know
>>>>>>>> that the file should have the .siddhi extension, because one can just
>>>>>>>> copy-paste a file into the editor's workspace directory without adding
>>>>>>>> the .siddhi extension when they only see files created without an
>>>>>>>> extension. And, as I see it, showing it will not cause any UX issues
>>>>>>>> either. Please raise any concerns if we need to accommodate this
>>>>>>>> requirement.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Damith
>>>>>>>>
>>>>>>>> --
>>>>>>>> Senior Software Engineer
>>>>>>>> WSO2 Inc.; http://wso2.com
>>>>>>>> lean.enterprise.middleware
>>>>>>>>
>>>>>>>> mobile: *+94728671315 <+94%2072%20867%201315>*
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Eranga Liyanage*
>>>>>>> Senior UX Engineer | WSO2
>>>>>>> Mob : +94 77 395 
>>>>>>> Blog : https://medium.com/@erangatl
>>>>>>> Linkedin : https://www.linkedin.com/in/erangaliyanage
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Senior Software Engineer
>>>>>> WSO2 Inc.; http://wso2.com
>>>>>> lean.enterprise.middleware
>>>>>>
>>>>>> mobile: *+94728671315 <+94%2072%20867%201315>*
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Eranga Liyanage*
>>>> Senior UX Engineer | WSO2
>>>> Mob : +94 77 395 
>>>> Blog : https://medium.com/@erangatl
>>>> Linkedin : https://www.linkedin.com/in/erangaliyanage
>>>>
>>>>
>>>
>>>
>>> --
>>> *Ramindu De Silva*
>>> Software Engineer
>>> WSO2 Inc.: http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> email: ramin...@wso2.com <sanj...@wso2.com>
>>> mob: +94 719678895
>>>
>>
>>
>>
>> --
>> Senior Software Engineer
>> WSO2 Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> mobile: *+94728671315 <+94%2072%20867%201315>*
>>
>>
>
>
> --
> Raveen Savinda Rathnayake,
> Software Engineering Intern,
> WSO2 Inc.
>
> *lean. enterprise. middleware  *
> Web: www.WSO2.com Mobile : +94771144549  Blog : https://blog.raveen.me
>
> <https://lk.linkedin.com/in/raveensr>
>
> <http://wso2.com/signature>
>



-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [C5] How to programmatically shut down C5 based products?

2017-10-25 Thread Grainier Perera
Hi Devs,

I have a requirement where I need to programmatically shut down a C5 based
product. Is there a carbon utility to do that?

Thanks,
Grainier.
-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] query help

2017-10-09 Thread Grainier Perera
Hi Jayesh,

I don't think the OOTB event windows will support the feature that you are
looking for. The best approach for you is to use an event table instead.
You can try something similar to the below;

@Plan:name('TestExecutionPlan')
> define stream jobsStream (id int, eventdata string);
> define table jobsTable (id int, eventdata string, arrival_timestamp long,
> event_timestamp long);
> define trigger dailyTrigger at '0 0 3 ? * * *';


> /* Introduce additional attributes, to be used with the logic.
>  */
> from jobsStream
> select
> id,
> eventdata,
> time:timestampInMilliseconds() as arrival_timestamp,
> time:extract(time:timestampInMilliseconds(), 'HOUR') as arrival_hour,
> time:timestampInMilliseconds(eventdata) as event_timestamp,
> time:extract('HOUR', eventdata, 'yyyy-MM-dd HH:mm:ss.SSS') as event_hour
> insert into enrichedJobsStream;


> /* Filter events arrives between 00:00 to 03:00 (exclusive)
>  */
> from enrichedJobsStream[arrival_hour >= 0 AND arrival_hour < 3]
> insert into candidateJobsStream;


> /* Assuming, if the event_hour >= 3 (events which came in the past day
> between 3 - midnight),
>  * Update the table with the highest event_timestamp.
>  * So that the table will have one entry per id with the highest
> event_timestamp.
>  * These are the events which doesn't match the criteria.
>  */
> from candidateJobsStream[event_hour >= 3]
> select id, eventdata, arrival_timestamp, event_timestamp
> insert overwrite jobsTable
> on jobsTable.id == id and jobsTable.event_timestamp < event_timestamp;
>

/* Once the trigger get fired at 03:00 Join trigger with the table,
>  * at this time table will only have events that doesn't match the
> criteria.
>  */
> from dailyTrigger join jobsTable
> select jobsTable.id, jobsTable.eventdata
> insert into criteriaNotMatchedStream;


> /* If the event_hour < 3 (events which came between midnight - 3),
>  * Which means it satisfies the criteria.
>  */
> from candidateJobsStream[event_hour < 3]
> select id, eventdata
> insert into criteriaMatchedStream;


> /* Clean the table, so that it'll be empty before processing it next
> midnight.
>  */
> from criteriaMatchedStream
> delete jobsTable
> on jobsTable.id == id;


> from criteriaNotMatchedStream
> delete jobsTable
> on jobsTable.id == id;


Regards,
Grainier.

On Sat, Oct 7, 2017 at 5:22 AM, Jayesh Senjaliya <jayesh.senjal...@gmail.com
> wrote:

> Hi Grainier,
>
> I have following requirement
>
> job typically runs multiple time a day, but for all the jobs that runs
> between midnight to 3AM, there has to be a job that has eventdata >
> midnight,
>
> so want to detect an event which satisfy the condition, or create and
> alert event at 3AM that says there were no event matching that criteria.
>
>
> define stream Jobs (id int, arrival_time string, eventdata string);
>
> from Jobs
> select id, eventdata
> having time:timestampInMilliseconds(eventdata) -
> time:timestampInMilliseconds('2017-10-03 00:00:00.000') > 0
> insert into test;
>
>
>
> sample data :
>
> Jobs=[1,2017-10-03 00:01:59.000,2017-10-02 21:01:59.000]
> Jobs=[1,2017-10-03 00:02:15.000,2017-10-02 21:30:59.000]
> Jobs=[1,2017-10-03 00:02:30.000,2017-10-03 00:15:00.000]
> Jobs=[1,2017-10-03 00:02:58.000,2017-10-03 01:30:59.000]
>
> delay(1000)
> Jobs=[2,2017-10-03 00:00:50.000,2017-10-02 20:01:59.000]
> Jobs=[2,2017-10-03 00:01:15.000,2017-10-02 21:30:00.000]
> Jobs=[2,2017-10-03 00:02:30.000,2017-10-02 21:30:00.000]
> Jobs=[2,2017-10-03 00:02:40.000,2017-10-02 21:30:00.000]
>
> delay(1000)
> Jobs=[3,2017-10-03 00:00:50.000,2017-10-02 22:01:59.000]
> Jobs=[3,2017-10-03 00:01:15.000,2017-10-02 22:30:59.000]
> Jobs=[3,2017-10-03 00:02:30.000,2017-10-03 01:40:00.000]
> Jobs=[3,2017-10-03 00:02:40.000,2017-10-03 02:30:59.000]
>
>
>
> the output should be:
> events with id 1 and 3 met the requirement, but the event with id 2 didn't.
>
>
>
> I was trying this with window.cron and then grouping it by id, but that
> didn't work.
>
> Do you have an idea on how this can be achieved?
>
> Basically, I couldn't find anything that will keep giving me events between
> midnight and 3, and then also a collective result at the end of the window,
> which is 3AM here.
>
> logically i want to do following.
>
> - start collecting events at midnight, filter it further for events having
> eventdata > midnight, if there is an event matching that criteria, create
> an event in a stream.
> - at 3AM i want to look at all those collected events and see if there is
> any event that has not matched the criteria at all and send an event with
> the id to another stream
>
>
> Thanks
> Jayesh
>
>
>
>
>
>
>
>
>
>
>
>
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Siddhi] non-matched or expired events in pattern query

2017-09-30 Thread Grainier Perera
cribe event for the same
>>> publish job.
>>> now i want to catch if certain jobs (publish -> subscribe) has been
>>> finished with 10 sec.
>>> I have all the registered jobs in db table, which i use to gather all
>>> the required publish-subscribe job events.
>>>
>>> define table jobTable( pid string, sid string);
>>> define stream pubStream (pid int, status string);
>>> define stream subStream (pid int, sid int, status string);
>>>
>>> -- this will get all the publish-> subscribe jobs events as master list
>>> from pubStream as p join jobTable as t
>>> on p.pid == t.pid
>>> select p.pid, t.sid insert into allPSJobs;
>>>
>>> -- this is where i need to do intersection where if subStream event is
>>> seen within 2 sec then remove that from master list ( allPSJobs ) if not
>>> include that in not_completed_jobs_in_time
>>>
>>> from every ( a=allPSJobs ) -> s= subStream[sid == a.sid and pid==a.pid ]
>>> within 2 sec
>>> select s.pid, s.sid insert into completed_jobs_in_time;
>>>
>>>
>>> hope that make sense from what i am trying to do.
>>>
>>> Thanks
>>> Jayesh
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Sep 25, 2017 at 8:39 AM, Grainier Perera <grain...@wso2.com>
>>> wrote:
>>>
>>>> Hi Jay,
>>>>
>>>> You can try something similar to this to get non-matched events during
>>>> last 10 secs; You can find some documentation on this as well; link
>>>> <https://docs.wso2.com/display/CEP420/Sample+0111+-+Detecting+non-occurrences+with+Patterns>
>>>>
>>>>
>>>>
>>>>> define stream publisher (pid string, time string);
>>>>> define stream subscriber (pid string, sid string, time string);
>>>>
>>>>
>>>>> from publisher#window.time(10 sec)
>>>>> select *
>>>>> insert expired events into expired_publisher;
>>>>
>>>>
>>>>> from every pub=publisher -> sub=subscriber[pub.pid == pid] or
>>>>> exp=expired_publisher[pub.pid == pid]
>>>>> select pub.pid as pid, pub.time as time, sub.pid as subPid
>>>>> insert into filter_stream;
>>>>
>>>>
>>>>> from filter_stream [(subPid is null)]
>>>>> select pid, time
>>>>> insert into not_seen_in_last_10_sec_events;
>>>>
>>>>
>>>> Moreover, I didn't get what you meant by "also is there a way to
>>>> perform intersection of events based on grouping or time window ?" can you
>>>> please elaborate on this?
>>>>
>>>> Regards,
>>>>
>>>> On Mon, Sep 25, 2017 at 11:02 AM, Jayesh Senjaliya <jhsonl...@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> is there a way to get events that didnt match within the given time
>>>>> frame.
>>>>>
>>>>> for example:
>>>>>
>>>>> define stream publisher (pid string, time string);
>>>>> define stream subscriber (pid string, sid string, time string);
>>>>>
>>>>> from every (e1=publisher) -> e2=subscriber[e1.pid == pid]
>>>>> within 10 sec
>>>>> select e1.pid, e2.sid
>>>>> insert into seen_in_last_10_sec_events;
>>>>>
>>>>>
>>>>> so if i have matching event above, i will see it in
>>>>> seen_in_last_10_sec_events, but is there a way to get all events or non
>>>>> matched events during that last 10 seconds from publisher or subscriber ?
>>>>>
>>>>> also is there a way to perform intersection of events based on
>>>>> grouping or time window ?
>>>>>
>>>>>
>>>>> Thanks
>>>>> Jay
>>>>>
>>>>> ___
>>>>> Dev mailing list
>>>>> Dev@wso2.org
>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Grainier Perera
>>>> Senior Software Engineer
>>>> Mobile : +94716122384 <+94%2071%20612%202384>
>>>> WSO2 Inc. | http://wso2.com
>>>> lean.enterprise.middleware
>>>>
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> *Gobinath** Loganathan*
>> Graduate Student,
>> Electrical and Computer Engineering,
>> Western University.
>> Email  : slgobin...@gmail.com
>> Blog: javahelps.com <http://www.javahelps.com/>
>>
>>
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Siddhi] non-matched or expired events in pattern query

2017-09-25 Thread Grainier Perera
Hi Jay,

You can try something similar to this to get non-matched events during the
last 10 secs; you can find some documentation on this as well: link
<https://docs.wso2.com/display/CEP420/Sample+0111+-+Detecting+non-occurrences+with+Patterns>



> define stream publisher (pid string, time string);
> define stream subscriber (pid string, sid string, time string);


> from publisher#window.time(10 sec)
> select *
> insert expired events into expired_publisher;


> from every pub=publisher -> sub=subscriber[pub.pid == pid] or
> exp=expired_publisher[pub.pid == pid]
> select pub.pid as pid, pub.time as time, sub.pid as subPid
> insert into filter_stream;


> from filter_stream [(subPid is null)]
> select pid, time
> insert into not_seen_in_last_10_sec_events;


Moreover, I didn't get what you meant by "also is there a way to perform
intersection of events based on grouping or time window?". Can you please
elaborate on this?

Regards,

On Mon, Sep 25, 2017 at 11:02 AM, Jayesh Senjaliya <jhsonl...@gmail.com>
wrote:

> Hi,
>
> is there a way to get events that didnt match within the given time frame.
>
> for example:
>
> define stream publisher (pid string, time string);
> define stream subscriber (pid string, sid string, time string);
>
> from every (e1=publisher) -> e2=subscriber[e1.pid == pid]
> within 10 sec
> select e1.pid, e2.sid
> insert into seen_in_last_10_sec_events;
>
>
> so if i have matching event above, i will see it in
> seen_in_last_10_sec_events, but is there a way to get all events or non
> matched events during that last 10 seconds from publisher or subscriber ?
>
> also is there a way to perform intersection of events based on grouping or
> time window ?
>
>
> Thanks
> Jay
>
> _______
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] pattern for one to many match or multi match events ?

2017-09-18 Thread Grainier Perera
Hi Jay,

In that case, you can either remove events from the table which are older
than 24hrs, or you can include a condition to the join query to ignore
events that are older than 24hrs. Please refer to the sample below;


@Plan:name('TestExecutionPlan')
> define stream publisher (pid string, time string);
> define stream subscriber (pid string, sid string, time string);
> define table publisherTable (pid string, time string, timestamp long);
> define trigger purgeTrigger at '0 0 0 ? * *';


> -- Purge records older than 24 hours
> from purgeTrigger
> delete publisherTable
> on (triggered_time - publisherTable.timestamp) >= 1000 * 60 * 60 * 24;


> from publisher
> select pid, time, time:timestampInMilliseconds() as timestamp
> insert into publisherTable;


> -- Double check the time range
> from subscriber as s join publisherTable as p
> on p.pid == s.pid AND ((time:timestampInMilliseconds() - p.timestamp) < 1000 * 60 * 60 * 24)
> select p.pid, s.sid, s.time
> insert into AlertStream;



Regards,
Grainier.

On Fri, Sep 15, 2017 at 2:21 PM, Jayesh Senjaliya <jhsonl...@gmail.com>
wrote:

> Hi Grainier,
>
> the event table approach makes sense, but is there a way to limit the event
> table to keep the events for, let's say, 24 hours or so and then discard them?
>
> Thanks for looking into this.
> Jay
>
>
>
>
> On Thu, Sep 14, 2017 at 11:51 PM, Grainier Perera <grain...@wso2.com>
> wrote:
>
>> Hi Jay,
>>
>> In your pattern, when a match is found, it will discard that event (e1 in
>> your scenario), so it won't get compared with other events. However, if you
>> need to hold that event and match it with more than a single event, then
>> you can use an event table as shown below.
>>
>> @Plan:name('TestExecutionPlan')
>>> define stream publisher (pid string, time string);
>>> define stream subscriber (pid string, sid string, time string);
>>> define table publisherTable (pid string, time string);
>>
>>
>>> from publisher
>>> insert into publisherTable;
>>
>>
>>
>> -- Option 1
>>
>> from subscriber[publisherTable.pid == pid in publisherTable]
>>> select pid, time
>>> insert into AlertStream1;
>>
>>
>>
>> -- Option 2
>>> from subscriber as s join publisherTable as p
>>> on p.pid == s.pid
>>> select p.pid, s.sid, s.time
>>> insert into AlertStream2;
>>
>>
>> Regards,
>> Grainier.
>>
>> On Fri, Sep 15, 2017 at 11:41 AM, Jayesh Senjaliya <jhsonl...@gmail.com>
>> wrote:
>>
>>> Hello WSO2 community.
>>>
>>> I am trying to implement a siddhi query where 1 event in publisher can
>>> have multiple event in subscriber. this fits well in pattern query but it
>>> looks like it outputs as soon as 1 event is matched and there is no way to
>>> window or tell the count.
>>>
>>> here is the execution plan i have came up with that should have matches
>>> all mapping but its not working that way, it only outputs 1 event, the
>>> first one that matches.
>>>
>>> can someone please look at this and help me figure out why it is not
>>> working? or what would be right way to get this?
>>>
>>> Thanks
>>> Jay
>>>
>>>
>>> Execution Plan:
>>>
>>> @Plan:name('TestExecutionPlan')
>>> define stream publisher (pid string, time string);
>>> define stream subscriber (pid string, sid string, time string);
>>>
>>> @info(name = 'query2')
>>> from every( e1=publisher ) -> e2=subscriber[ e2.pid == e1.pid ]
>>> select e1.pid, e2.sid, e2.time
>>> insert into AlertStream;
>>>
>>>
>>> here is the sample events if you want to try on siddhi-try-it tool
>>>
>>> publisher=[1,2017-08-15 01:08:30.253]
>>> publisher=[2,2017-08-15 02:08:30.253]
>>> publisher=[3,2017-08-15 03:08:30.253]
>>>
>>> subscriber=[1, 12,2017-08-15 21:08:30.253]
>>> subscriber=[1, 13,2017-08-15 21:10:30.253]
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Grainier Perera
>> Senior Software Engineer
>> Mobile : +94716122384 <+94%2071%20612%202384>
>> WSO2 Inc. | http://wso2.com
>> lean.enterprise.middleware
>>
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] pattern for one to many match or multi match events ?

2017-09-15 Thread Grainier Perera
Hi Jay,

In your pattern, when a match is found, it will discard that event (e1 in your
scenario), so it won't get compared with other events. However, if you need
to hold that event and match it with more than a single event, then you can
use an event table as shown below.

@Plan:name('TestExecutionPlan')
> define stream publisher (pid string, time string);
> define stream subscriber (pid string, sid string, time string);
> define table publisherTable (pid string, time string);


> from publisher
> insert into publisherTable;



-- Option 1

from subscriber[publisherTable.pid == pid in publisherTable]
> select pid, time
> insert into AlertStream1;



-- Option 2
> from subscriber as s join publisherTable as p
> on p.pid == s.pid
> select p.pid, s.sid, s.time
> insert into AlertStream2;


Regards,
Grainier.

On Fri, Sep 15, 2017 at 11:41 AM, Jayesh Senjaliya <jhsonl...@gmail.com>
wrote:

> Hello WSO2 community.
>
> I am trying to implement a siddhi query where 1 event in publisher can
> have multiple event in subscriber. this fits well in pattern query but it
> looks like it outputs as soon as 1 event is matched and there is no way to
> window or tell the count.
>
> here is the execution plan i have came up with that should have matches
> all mapping but its not working that way, it only outputs 1 event, the
> first one that matches.
>
> can someone please look at this and help me figure out why it is not
> working? or what would be right way to get this?
>
> Thanks
> Jay
>
>
> Execution Plan:
>
> @Plan:name('TestExecutionPlan')
> define stream publisher (pid string, time string);
> define stream subscriber (pid string, sid string, time string);
>
> @info(name = 'query2')
> from every( e1=publisher ) -> e2=subscriber[ e2.pid == e1.pid ]
> select e1.pid, e2.sid, e2.time
> insert into AlertStream;
>
>
> here is the sample events if you want to try on siddhi-try-it tool
>
> publisher=[1,2017-08-15 01:08:30.253]
> publisher=[2,2017-08-15 02:08:30.253]
> publisher=[3,2017-08-15 03:08:30.253]
>
> subscriber=[1, 12,2017-08-15 21:08:30.253]
> subscriber=[1, 13,2017-08-15 21:10:30.253]
>
>
>
>
>
>
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [GSOC][Siddhi][DEV] Deployment and Code Management of PySiddhi

2017-08-23 Thread Grainier Perera
Hi Madhawa,

Merged both PRs and the Wiki.

Regards,
Grainier.

On Wed, Aug 23, 2017 at 9:41 PM, Madhawa Vidanapathirana <
madhawavidanapathir...@gmail.com> wrote:

> Hi,
>
> I have sent the new PRs [1] [2] which merge WSO2 DAS Client with PySiddhi.
>
> The Wiki [3] has also been updated, which would require a manual merge
> since PRs are not possible for GitHub Wikis.
>
> Kindly reach me if any changes are required.
>
> [1] https://github.com/wso2/PySiddhi/pull/3
> [2] https://github.com/wso2/PySiddhi/pull/4
> [3] https://github.com/madhawav/PySiddhi/wiki
>
> Regards,
> Madhawa
>
>
> On Fri, Aug 18, 2017 at 7:27 PM, Madhawa Vidanapathirana <
> madhawavidanapathir...@gmail.com> wrote:
>
>> Hi,
>>
>> I just noticed that it is not possible to send PRs on Wiki pages I made
>> for the project, that are available at [1]. I believe a collaborator of
>> main repository [2] would have to manually review the wiki at [1] and get
>> it copied to main repository [2].
>>
>> The link [3] describes a technique which can be useful to copy the wiki
>> pages from [1] to [2].
>>
>> Also, I am looking forward for your comments on updated PRs I sent to
>> branches master and 3.x of main repository [2].
>>
>> [1] - https://github.com/madhawav/PySiddhi/wiki
>> [2] - https://github.com/wso2/PySiddhi
>> [3] - https://stackoverflow.com/questions/10642928/how-to-pull-
>> request-a-wiki-page-on-github
>>
>> Kind Regards,
>> Madhawa
>>
>> On Thu, Aug 10, 2017 at 11:02 AM, Madhawa Vidanapathirana <
>> madhawavidanapathir...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have sent PRs to both branches 3.x and master of [1] with relevant
>>> code versions.
>>> Kindly review and let me know any changes that are required.
>>>
>>> Meanwhile, I will check on generation of *.whl files which are required
>>> for distribution to PyPI.
>>>
>>> [1] https://github.com/wso2/pysiddhi
>>>
>>> Kind Regards,
>>> Madhawa
>>>
>>> On Wed, Aug 9, 2017 at 5:03 PM, Grainier Perera <grain...@wso2.com>
>>> wrote:
>>>
>>>> Hi Madhawa,
>>>>
>>>> I have created a branch for PySiddhi 3.x at [1], and we are thinking of
>>>> maintaining the PySiddhi 4.x in the master branch. Please send PRs to 3.x
>>>> branch and master branch.
>>>>
>>>> [1] https://github.com/wso2/pysiddhi/tree/3.x
>>>>
>>>> Regards,
>>>>
>>>> On Wed, Aug 9, 2017 at 9:17 AM, Madhawa Vidanapathirana <
>>>> madhawavidanapathir...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have prepared the branch for PySiddhi 3.1 (in fork of main repo) and
>>>>> it is available at [1]. However, I am unable to send the PR since 3.1
>>>>> branch is not in the main repository at [2].
>>>>>
>>>>> Also, would be requiring a branch for 4.0 to PR the 4.0 version which
>>>>> will be ready soon.
>>>>>
>>>>> [1] https://github.com/madhawav/PySiddhi/tree/3.1
>>>>> [2] https://github.com/wso2/PySiddhi
>>>>>
>>>>> Kind Regards,
>>>>>
>>>>> --
>>>>> *Madhawa Vidanapathirana*
>>>>> Student
>>>>> Department of Computer Science and Engineering
>>>>> University of Moratuwa
>>>>> Sri Lanka
>>>>>
>>>>> Mobile: (+94) 716874425 <+94%2071%20687%204425>
>>>>> Email: madhawavidanapathir...@gmail.com
>>>>> Linked-In: https://lk.linkedin.com/in/madhawa-vidanapathirana-3430b94
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Grainier Perera
>>>> Senior Software Engineer
>>>> Mobile : +94716122384 <+94%2071%20612%202384>
>>>> WSO2 Inc. | http://wso2.com
>>>> lean.enterprise.middleware
>>>>
>>>
>>>
>>>
>>> --
>>> *Madhawa Vidanapathirana*
>>> Student
>>> Department of Computer Science and Engineering
>>> University of Moratuwa
>>> Sri Lanka
>>>
>>> Mobile: (+94) 716874425 <+94%2071%20687%204425>
>>> Email: madhawavidanapathir...@gmail.com
>>> Linked-In: https://lk.linkedin.com/in/madhawa-vidanapathirana-3430b94
>>>
>>
>>
>>
>> --
>> *Madhawa Vidanapathirana*
>> Student
>> Department of Computer Science and Engineering
>> University of Moratuwa
>> Sri Lanka
>>
>> Mobile: (+94) 716874425 <+94%2071%20687%204425>
>> Email: madhawavidanapathir...@gmail.com
>> Linked-In: https://lk.linkedin.com/in/madhawa-vidanapathirana-3430b94
>>
>
>
>
> --
> *Madhawa Vidanapathirana*
> Student
> Department of Computer Science and Engineering
> University of Moratuwa
> Sri Lanka
>
> Mobile: (+94) 716874425 <+94%2071%20687%204425>
> Email: madhawavidanapathir...@gmail.com
> Linked-In: https://lk.linkedin.com/in/madhawa-vidanapathirana-3430b94
>



-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Summarization of JSON data in DAS

2017-08-20 Thread Grainier Perera
Hi Lahiru,

You can achieve that by using JSON custom mapping [1] + the Siddhi map
extension [2] + ifThenElse (optional, to check the availability of
attributes). Basically, what you have to do is have:

A JSON structure like this;

>
> {
>    "userData": {
>        "timestamp": 19900813115534,
>        "dataMap": {
>            "id": 1,              // these will be your arbitrary data
>            "name": "grainier"
>        }
>    }
> }


Input mapping like this;

> (the input mapping XML was stripped by the archive; along with the other
> attribute mappings, it maps the nested "dataMap" object to the "userData"
> string attribute; see [1] for the custom JSON input mapping syntax)


Stream definition like this;

> {
>   "streamId": "org.wso2.event.user.stream:1.0.0",
>   "name": "org.wso2.event.user.stream",
>   "version": "1.0.0",
>   "nickName": "",
>   "description": "",
>   "metaData": [],
>   "correlationData": [],
>   "payloadData": [
>     {
>       "name": "timestamp",
>       "type": "LONG"
>     },
>     {
>       "name": "userData",
>       "type": "STRING"
>     }
>   ]
> }


Finally within the execution plan;

> @Import('org.wso2.event.user.stream:1.0.0')
> define stream dataIn (timestamp long, userData string);


from dataIn
> select map:createFromJSON(userData) as userDataMap
> insert into tempStream;


from tempStream
> select map:get(userDataMap, "id") as id, map:get(userDataMap, "name") as name
> insert into tempStream2;


-- now you can use those arbitrary fields here...



Hope that helped...

[1]
https://docs.wso2.com/display/CEP420/Input+Mapping+Types#InputMappingTypes-JSONinputmappingJSONInputMapping
[2] https://docs.wso2.com/display/CEP420/Map+Extension

Regards,
Grainier.

On Sat, Jul 8, 2017 at 11:34 AM, Lahiru Madushanka <lahirum...@wso2.com>
wrote:

> Hi Nirmal,
>
> Documentation says arbitrary data support can be used with wso2event input
> format. But in my case its "JSON".
> Custom event receiver will be an option. But is there a way I can do this
> without writing a custom event receiver ?
>
> Thanks for the help.
>
> Regards,
> Lahiru
>
> On Sat, Jul 8, 2017 at 10:02 AM, Nirmal Fernando <nir...@wso2.com> wrote:
>
>> Check on arbitrary data support https://docs.wso2.com/
>> display/DAS310/Input+Mapping+Types
>>
>> On Sat, Jul 8, 2017 at 7:48 AM, Lahiru Madushanka <lahirum...@wso2.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> I have a requirement where data values published to DAS are not
>>> predefined (dynamic)
>>> ( Ex :- cpu usage of a given laptop ) So I push them as a JSON string
>>> Ex:-  "cpuinfo":{"corePercentages": [4.1, 3.1, 5.2, 7.1], "numOfCores":
>>> 4}
>>> This JSON string will be changed with no of cores in the PC which pushes
>>> the data.
>>>
>>> Is there a way I can write a summarization query in siddhiql to take
>>> average of average corePercentages for a time interval. (first take avg of
>>> percentage values and then average it over time )
>>>
>>> Thanks
>>> Lahiru
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>>
>> Thanks & regards,
>> Nirmal
>>
>> Technical Lead, WSO2 Inc.
>> Mobile: +94715779733 <+94%2071%20577%209733>
>> Blog: http://nirmalfdo.blogspot.com/
>>
>>
>>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Where is the create publishers, streams and receivers page in wso2das-4.0.0-M6

2017-08-20 Thread Grainier Perera
Hi Sagar,

From SP 4.0.0-MX [1] (formerly DAS 4.0.0-MX), we don't have a management
console page (localhost:9443/carbon). Instead, we have the editor. Also,
there are some changes to key concepts such as receivers and publishers.
Instead of receivers and publishers, we have new concepts called event
sources [2] and event sinks [3], which can be defined within the Siddhi app
(execution plan) itself. Therefore, you don't have to create streams,
receivers, and publishers separately as you did in DAS 3.1.0.
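
For illustration, a minimal Siddhi app along these lines shows the idea (the
stream names and the source/sink parameters below are only examples; see [2]
and [3] for the exact options supported by your milestone):

@App:name('SweetProductionApp')

-- an event source replaces the old "receiver" concept
@source(type = 'http', receiver.url = 'http://localhost:8006/production', @map(type = 'json'))
define stream SweetProductionStream (name string, amount double);

-- an event sink replaces the old "publisher" concept
@sink(type = 'log')
define stream TotalProductionStream (name string, totalAmount double);

from SweetProductionStream#window.time(1 min)
select name, sum(amount) as totalAmount
group by name
insert into TotalProductionStream;

Everything (the receiving endpoint, the processing logic, and the publishing
side) lives in that single .siddhi file deployed through the editor, which is
why the separate receiver/publisher pages are gone.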

Note: You can find the latest milestone release at [4] with further
improvements.

[1] https://docs.wso2.com/display/SP400/
[2] https://docs.wso2.com/display/SP400/Receiving+Events
[3] https://docs.wso2.com/display/SP400/Publishing+Events
[4] https://github.com/wso2/product-sp/releases/download/v4.0.0-M9/wso2sp-4.0.0-M9.zip

Regards,
Grainier.

On Sat, Jul 22, 2017 at 11:04 AM, Sagar Kapadia <ks197...@gmail.com> wrote:

> Hi,
> I installed wso2das-4.0.0-M6 and ran it using the "editor.bat" command.
> However, I can't figure out where the localhost:9443/carbon page is, which
> was present in wso2das-3.1.0. I need that to create
> streams, publishers and receivers. The composer in wso2das-4.0.0-M6 allows
> me to handle events by linking streams. But where do I create the streams,
> receivers and publishers themselves?
> Sagar
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [GSOC][Siddhi][DEV] Deployment and Code Management of PySiddhi

2017-08-09 Thread Grainier Perera
Hi Madhawa,

I have created a branch for PySiddhi 3.x at [1], and we are thinking of
maintaining the PySiddhi 4.x in the master branch. Please send PRs to 3.x
branch and master branch.

[1] https://github.com/wso2/pysiddhi/tree/3.x

Regards,

On Wed, Aug 9, 2017 at 9:17 AM, Madhawa Vidanapathirana <
madhawavidanapathir...@gmail.com> wrote:

> Hi,
>
> I have prepared the branch for PySiddhi 3.1 (in fork of main repo) and it
> is available at [1]. However, I am unable to send the PR since 3.1 branch
> is not in the main repository at [2].
>
> Also, would be requiring a branch for 4.0 to PR the 4.0 version which will
> be ready soon.
>
> [1] https://github.com/madhawav/PySiddhi/tree/3.1
> [2] https://github.com/wso2/PySiddhi
>
> Kind Regards,
>
> --
> *Madhawa Vidanapathirana*
> Student
> Department of Computer Science and Engineering
> University of Moratuwa
> Sri Lanka
>
> Mobile: (+94) 716874425 <+94%2071%20687%204425>
> Email: madhawavidanapathir...@gmail.com
> Linked-In: https://lk.linkedin.com/in/madhawa-vidanapathirana-3430b94
>



-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Siddhi] [Bug] [GSoC] Issue in releaseBreakPoint of Siddhi Debugger

2017-06-03 Thread Grainier Perera
Hi Madhawa,

I think you are using *debugger.next()* within the callback's debugEvent
method. That could be the reason why you get two debugger callbacks:
even though you release the breakpoint, next() will trigger the callback at
the next debuggable element. To get rid of that, use *debugger.play()* instead.

SiddhiDebugger siddhiDebugger = executionPlanRuntime.debug();
> siddhiDebugger.setDebuggerCallback(new SiddhiDebuggerCallback() {
> @Override
> public void debugEvent(ComplexEvent event, String queryName,
> SiddhiDebugger.QueryTerminal queryTerminal,
>SiddhiDebugger debugger) {
> log.info("Query: " + queryName + ":" + queryTerminal);
> log.info(event);
>
>         debugger.play();
>     }
> });
> siddhiDebugger.acquireBreakPoint("query 1",
> SiddhiDebugger.QueryTerminal.IN);
> inputHandler.send(new Object[]{"WSO2", 50f, 60});
> siddhiDebugger.releaseBreakPoint("query 1",
> SiddhiDebugger.QueryTerminal.IN);
> inputHandler.send(new Object[]{"WSO2", 70f, 40});
>
> executionPlanRuntime.shutdown();


Can you please confirm?

Regards,
Grainier

On Sat, Jun 3, 2017 at 12:01 AM, Madhawa Vidanapathirana <
madhawavidanapathir...@gmail.com> wrote:

> Hi,
>
> I wrote the following code snippet on releaseBreakPoint feature of
> siddhiDebugger (Siddhi4.0.0-M5-SNAPSHOT) but the breakpoint doesn't get
> released.
>
>
> *Am I doing something wrong here? Or is it a bug with Siddhi Debugger
> since its still in development?*
>
>> siddhiDebugger.acquireBreakPoint("query 1", SiddhiDebugger.QueryTerminal.IN);
>>
>> inputHandler.send(new Object[]{"WSO2", 50f, 60});
>> //Debugger Callback is triggered twice as expected
>>
>> siddhiDebugger.releaseBreakPoint("query 1", SiddhiDebugger.QueryTerminal.IN);
>>
>> inputHandler.send(new Object[]{"WSO2", 70f, 40});
>>
>> //Debugger Callback is triggered twice. (Not expected)
>>
>>
>
> Thanks and Regards,
> Madhawa
>
> --
> *Madhawa Vidanapathirana*
> Student
> Department of Computer Science and Engineering
> University of Moratuwa
> Sri Lanka
>
> Mobile: (+94) 716874425 <+94%2071%20687%204425>
> Email: madhawavidanapathir...@gmail.com
> Linked-In: https://lk.linkedin.com/in/madhawa-vidanapathirana-3430b94
>



-- 
Grainier Perera
Senior Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [GSOC][CEP][DEV] Python API for Siddhi CEP

2017-05-30 Thread Grainier Perera
Hi Madhawa,

Great progress so far. As per our offline discussion, for the above
callback issue, please go ahead with the workaround you found on [1]. Since
it only affects Python 3.4 and above, use a version check to decide whether
to use the workaround or not. Once you get all the test cases to work with
Python 3.X, please verify it with Python 2.X as well. Once it's completed,
we can then move on to wrapping Event simulator.

[1] https://bugs.python.org/issue20891

Thanks,
Grainier.

On Mon, May 29, 2017 at 9:04 AM, Madhawa Vidanapathirana <
madhawavidanapathir...@gmail.com> wrote:

> Hi,
> I have started working on debugger wrapping. So far I have made progress
> through first two test cases at [1]. However, the code is failing in test
> case 3.
>
> *This is because of python crashing with following error when callback
> events are fired from non python created threads. The issue affects not
> only the debugger but other callbacks such as StreamCallback and
> QueryCallback.*
>
>> Fatal Python error: take_gil: NULL tstate
>
>
> This error appear only in Python versions 3.4+.  The code works fine when
> I tested with Python 3.3 and below. Furthermore, the error doesn't appear
> in any Python version when the debugger is connected. During further
> investigation of error, I found the documented bug [2] of Python which
> affect Python 3.4+ in similar manner.
>
> So far, a possible solution we have is to use a single python created
> thread to transfer callbacks from Java side to Python. For this purpose, we
> can use a queue (in java) to hold callback events triggered from java and a
> single (python created) thread to transfer the events in queue to python.
> However, this approach might have some performance issues. Can I know your
> opinion on this approach? (So far I have not implemented this approach)
>
> Furthermore, I have opened an issue in pyjnius github [3] (the API I use
> to communicate between Java and Python) on this matter. It might also be
> possible to fix the issue from pyjnius side using some of the suggestions
> available in [2] and [4]. I will look into this approach as well.
>
> [1] - https://github.com/wso2/siddhi/blob/79608b6aea4fbd99bb08cf64493c8b
> 4144bc32e4/modules/siddhi-core/src/test/java/org/wso2/
> siddhi/core/debugger/TestDebugger.java
> [2] - https://bugs.python.org/issue20891
> [3] - https://github.com/kivy/pyjnius/issues/276
> [4] - https://bugs.python.org/issue19576
>
> Thanks and Regards,
> Madhawa
>
> On Thu, May 25, 2017 at 5:32 AM, Sriskandarajah Suhothayan <s...@wso2.com>
> wrote:
>
>> Great, you can do the debugger now, he said after finishing the debugger
>> work, work on wrap event simulator .
>>
>> Regards
>> Suho
>>
>> On Wed, May 24, 2017 at 6:19 PM, Madhawa Vidanapathirana <
>> madhawavidanapathir...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I have updated the README.md file. Will let you know when I have
>>> progress with debugger.
>>>
>>> Thanks
>>> Madhawa
>>>
>>> On Mon, May 22, 2017 at 12:41 PM, Grainier Perera <grain...@wso2.com>
>>> wrote:
>>>
>>>> Hi Madhawa,
>>>>
>>>> I went through your impl and it works great. Try to improve the test
>>>> cases and documentation (README.md on git) when you get a time.
>>>>
>>>> Furthermore, I have added Event simulator REST API endpoints to the doc
>>>> that you shared earlier. After getting debugger to work, try to wrap event
>>>> simulator endpoints as well. You can refer to [1] for the documentation on
>>>> them.
>>>>
>>>> [1] https://docs.wso2.com/display/DAS400/Simulating+Events
>>>>
>>>> Regards,
>>>> Grainier.
>>>>
>>>> On Sun, May 21, 2017 at 6:00 PM, Madhawa Vidanapathirana <
>>>> madhawavidanapathir...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I started working on the native wrapper. So far, have the very basic
>>>>> functionality working for Siddhi 3.1 and Siddhi 4.0.
>>>>> I am maintaining my code at the repo [1].
>>>>>
>>>>> Next step is getting the Siddhi debugger working. I will let you know
>>>>> when it is done.
>>>>>
>>>>> [1] - https://github.com/madhawav/SiddhiCEPPythonAPI
>>>>>
>>>>> Thanks,
>>>>> Madhawa
>>>>>
>>>>>
>>>>> On Fri, May 19, 2017 at 12:47 PM, Sriskandarajah Suhothayan <
>>>>> s...@wso2.com

Re: [Dev] [GSOC][CEP][DEV] Python API for Siddhi CEP

2017-05-22 Thread Grainier Perera
>>>>>>>>>>>>>>>>>>>>>>>>> Regards
>>>>>>>>>>>>>>>>>>>>>>>>> Suho
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Mar 14, 2017 at 9:55 AM, Madhawa
>>>>>>>>>>>>>>>>>>>>>>>>> Vidanapathirana <madhawavidanapathir...@gmail.com>
>>>>>>>>>>>>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>>>>>>>> Thank you for your quick reply.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Since directly using Siddhi Library is the more
>>>>>>>>>>>>>>>>>>>>>>>>>> general case, I'll first focus on it.
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> 1) I am thinking about following the same
>>>>>>>>>>>>>>>>>>>>>>>>>> structure in JAVA API, centered around Siddhi 
>>>>>>>>>>>>>>>>>>>>>>>>>> Manager. Any opinions on

Re: [Dev] WSO2 Committers += Sajith Perera

2017-01-04 Thread Grainier Perera
Congratz Sajith..!! :)

On Thu, Jan 5, 2017 at 8:32 AM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

> Hi Devs,
>
> It is my pleasure to welcome Sajith Perera as a WSO2 Committer. Sajith has
> been a valuable contributor in WSO2 analytics space and performed enormous
> tasks on CEP tooling, CEP HA and Log Analyzer.
>
> SajithD, welcome aboard and keep up the good work.
>
>
> Thanks,
> Mohan
>
>
> --
> *V. Mohanadarshan*
> *Associate Tech Lead,*
> *Data Technologies Team,*
> *WSO2, Inc. http://wso2.com <http://wso2.com> *
> *lean.enterprise.middleware.*
>
> email: mo...@wso2.com
> phone:(+94) 771117673 <+94%2077%20111%207673>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] WSO2 Committers += Ashen Weerathunga

2016-10-04 Thread Grainier Perera
Congratulations Ashen.!

On Tue, Oct 4, 2016 at 11:03 AM, Nirmal Fernando <nir...@wso2.com> wrote:

> Hi all,
>
> It is my pleasure to welcome Ashen Weerathunga as a WSO2 Committer. In
> recognition of Ashen's contributions to ML, Siddhi and IoT analytics, he
> has been voted as a Committer.
>
> Ashen, welcome aboard and keep up the good work.
>
> --
>
> Thanks & regards,
> Nirmal
>
> Team Lead - WSO2 Machine Learner
> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
> Mobile: +94715779733
> Blog: http://nirmalfdo.blogspot.com/
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [DEV] [MSF4J] [IS] OAuth 2.0 Token Introspection API doesn't work as expected

2016-09-24 Thread Grainier Perera
terceptor$1.run(ServiceInvokerInterceptor.java:58)
> at
> org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:94)
> at
> org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
> ... 38 more
> Caused by: java.lang.NullPointerException
> at java.lang.String$CaseInsensitiveComparator.compare(String.java:1192)
> at java.lang.String$CaseInsensitiveComparator.compare(String.java:1186)
> at java.util.TreeMap.getEntryUsingComparator(TreeMap.java:376)
> at java.util.TreeMap.getEntry(TreeMap.java:345)
> at java.util.TreeMap.get(TreeMap.java:278)
> at
> org.wso2.carbon.identity.oauth2.validators.TokenValidationHandler.findAccessTokenValidator(TokenValidationHandler.java:395)
> at
> org.wso2.carbon.identity.oauth2.validators.TokenValidationHandler.buildIntrospectionResponse(TokenValidationHandler.java:220)
> at
> org.wso2.carbon.identity.oauth2.OAuth2TokenValidationService.buildIntrospectionResponse(OAuth2TokenValidationService.java:104)
> at
> org.wso2.carbon.identity.oauth.introspection.IntrospectResource.introspect(IntrospectResource.java:90)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:188)
> at
> org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:104)
> ... 43 more


[1] https://github.com/wso2/msf4j/tree/master/samples/oauth2-security

Best,
-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Vote] Release WSO2 Complex Event Processor (CEP) 4.2.0-RC2

2016-09-13 Thread Grainier Perera
product-cep/releases/download/v4.2.0-rc2/wso2cep-4.2.0-RC2.zip
>>>>
>>>> <https://github.com/wso2/product-cep/releases/download/v4.2.0-rc2/wso2cep-4.2.0-RC2.zip>*
>>>>
>>>>
>>>> Please download, test, and vote. The README file under the
>>>> distribution contains a guide and instructions on how to try it out locally.
>>>>
>>>> [+] Stable - Go ahead and release
>>>> [-] Broken - Do not release (explain why)
>>>>
>>>> This vote will be open for 72 hours or as needed.
>>>>
>>>> Regards,
>>>> WSO2 CEP Team
>>>>
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Tharindu Edirisinghe
>>> Senior Software Engineer | WSO2 Inc
>>> Platform Security Team
>>> Blog : tharindue.blogspot.com
>>> mobile : +94 775181586
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Tishan Dahanayakage
>> Senior Software Engineer
>> WSO2, Inc.
>> Mobile:+94 716481328
>>
>> Disclaimer: This communication may contain privileged or other
>> confidential information and is intended exclusively for the addressee/s.
>> If you are not the intended recipient/s, or believe that you may have
>> received this communication in error, please reply to the sender indicating
>> that fact and delete the copy you received and in addition, you should not
>> print, copy, re-transmit, disseminate, or otherwise use the information
>> contained in this communication. Internet communications cannot be
>> guaranteed to be timely, secure, error or virus-free. The sender does not
>> accept liability for any errors or omissions.
>>
>
>
>
> --
> Pamoda Wimalasiri
> *Software Engineering Intern*
> Mobile : 0713705814
> Email : pam...@wso2.com
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] WSO2 Committers += Mushthaq Rumy

2016-09-04 Thread Grainier Perera
Congratulations Rumy!!!

On Fri, Sep 2, 2016 at 11:50 AM, Chandana Napagoda <chand...@wso2.com>
wrote:

> Hi all,
>
> It is my pleasure to welcome Mushthaq Rumy as a WSO2 Committer. Rumy has
> made some great contributions to WSO2 Governance Registry and WSO2
> Enterprise Store products during the last few months and in recognition of
> his commitment and contributions, he has been voted as a Committer for WSO2.
>
> Rumy, welcome aboard and keep up the good work.
>
> Best Regards,
> Chandana
>
> --
> *Chandana Napagoda*
> Associate Technical Lead
> WSO2 Inc. - http://wso2.org
>
> *Email  :  chand...@wso2.com <chand...@wso2.com>**Mobile : +94718169299
> <%2B94718169299>*
>
> *Blog  :http://cnapagoda.blogspot.com <http://cnapagoda.blogspot.com>
> | http://chandana.napagoda.com <http://chandana.napagoda.com>*
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] sub query in siddhi QL

2016-08-31 Thread Grainier Perera
Hi Aneela,

At the moment, Siddhi does not support sub-queries. But if you just need
to count all the events, you can try Charini's answer. However, if you
have an event table (i.e. an in-memory table) of employees and you want to
get the count of records in that event table (or get the record count when
the table gets updated), you can try a query similar to this;
/* Enter a unique ExecutionPlan */
@Plan:name('TestExecutionPlan')

/* define streams/tables and write queries here ... */
@Import('DEL_STREAM:1.0.0')
define stream DEL (id int);

@Import('INSERT_STREAM:1.0.0')
define stream INST (EMPLOYEE_ID int, EMPLOYEE_NAME string);

@Export('COUNT_STREAM:1.0.0')
define stream COUNT (EMPLOYEE_ID int, COUNT long);

define table EMPLOYEE_TABLE (EMPLOYEE_ID int, EMPLOYEE_NAME string);

define trigger START at 'start';

from INST
insert into EMPLOYEE_TABLE;

from DEL
delete EMPLOYEE_TABLE
on EMPLOYEE_TABLE.EMPLOYEE_ID == id;

from START
select UUID() as EVENT_ID, -1 as EMPLOYEE_ID
insert into INST_PROCESSED;

from INST
select UUID() as EVENT_ID, EMPLOYEE_ID
insert into INST_PROCESSED;

from DEL
select UUID() as EVENT_ID, id as EMPLOYEE_ID
insert into INST_PROCESSED;

from INST_PROCESSED#window.time(10 sec)
select *
insert expired events into INST_EXPIRED;

from INST_PROCESSED join EMPLOYEE_TABLE
select EVENT_ID, EMPLOYEE_TABLE.EMPLOYEE_ID
insert into INST_TBL_STREAM;

from INST_PROCESSED#window.length(1) join INST_TBL_STREAM
select INST_PROCESSED.EVENT_ID, INST_TBL_STREAM.EMPLOYEE_ID
insert into JOINED_STREAM;

from JOINED_STREAM#window.timeBatch(5 sec)
select EVENT_ID, count() as COUNT
group by EVENT_ID
insert into COUNT_STREAM;

from INST_PROCESSED#window.length(1) join COUNT_STREAM
on COUNT_STREAM.EVENT_ID==EVENT_ID
select INST_PROCESSED.EVENT_ID, EMPLOYEE_ID, COUNT_STREAM.COUNT
insert into COUNT_INNER_STREAM;

from every(e1=INST_PROCESSED) ->
e2=COUNT_INNER_STREAM[e1.EVENT_ID==EVENT_ID] OR
e3=INST_EXPIRED[e1.EVENT_ID==EVENT_ID]
select e1.EMPLOYEE_ID, e2.COUNT
insert into FILTER_COUNT;

from FILTER_COUNT[(COUNT is null)]
select EMPLOYEE_ID, 0L as COUNT
insert into COUNT;

from FILTER_COUNT[not (COUNT is null)]
select EMPLOYEE_ID, COUNT
insert into COUNT;


Regards,
Grainier.

On Wed, Aug 31, 2016 at 7:55 AM, Charini Nanayakkara <chari...@wso2.com>
wrote:

> Hi Aneela,
>
> If you need to count all the records (without grouping by employee_id) you
> will have to do something similar to the following. (This is just one way
> of addressing your requirement)
>
> (define an in-memory table to store count)
>
> define table CountTable (count long);
>
> from inputStream#window.timeBatch(2 min)
> select count() as count
> insert into CountTable;
>
> from inputStream#window.timeBatch(4 min)
> select employee_id
> group by employee_id
> insert into TempStream;
>
> from TempStream as t join CountTable as c
> select t.employee_id, c.count
> insert into OutputStream;
>
>
> The execution plan would work if you have prior knowledge that all the
> input events would arrive within 2 minutes. In the second query a larger
> batch time is used to ensure that, the count is already written to table
> CountTable, by the time events start being sent to TempStream.
>
> Thank you
> Charini
>
>
>
> On Mon, Aug 29, 2016 at 11:59 PM, Aneela Safdar <ansaf_...@yahoo.com>
> wrote:
>
>> Hi,
>>
>> How can I achieve this SQL in the Siddhi query language:
>>
>> select employee_id, (select count(*) from employees)
>> from employees
>>
>> I want just two columns in a stream: one legitimate column, and the other a
>> count of all records.
>>
>> Thanks,
>>
>> Regards,
>> Aneela Safdar
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> *Charini Vimansha Nanayakkara*
> Software Engineer at WSO2
>
> Mobile: 0714126293
> E-mail: chari...@wso2.com
> Blog: http://www.charini.me/
>
> <http://wso2.com/signature>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Getting accumulative values from stream

2016-08-23 Thread Grainier Perera
Hi Aneela,

At a glance, there are some issues with your query. Is there a particular
reason for you to use `group by milliseconds`? Otherwise, you shouldn't group
by such a rapidly changing variable. Also, the count aggregate function does
not require any parameters, and it does not have to be cast into an int. Try
the following query and see whether it works; if it doesn't, can you provide
us with sample data and the stream definitions so that we can test the query?

from FTPInStream[command == 'USER']
> select time:timestampInMilliseconds(time:dateAdd(str:replaceAll(ts,'T','
> '), 5, 'hour',"-MM-dd HH:mm:ss"),'-MM-dd HH:mm*:ss*') as
> milliseconds , uid, id_orig_h, id_orig_p, id_resp_h, id_resp_p
> insert into intermediateStream;



>
> from intermediateStream#window.externalTimeBatch( milliseconds ,1 min,
> milliseconds, 1 min)
> select time:dateFormat(milliseconds, 'yyyy-MM-dd HH:mm' ) as ts , count() as
> ftp_requests
> insert into FTPOutStream;
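
As a side note, if what you actually need is a running (accumulative) count
rather than per-minute batches, a sketch along the following lines should
also work, since an aggregate used without a window keeps accumulating over
all the events seen so far (stream and attribute names are taken from your
query above; treat this as an untested sketch):

from intermediateStream
select time:dateFormat(milliseconds, 'yyyy-MM-dd HH:mm') as ts, count() as ftp_requests
insert into FTPOutStream;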


Regards,
Grainier

On Tue, Aug 23, 2016 at 2:16 PM, Aneela Safdar <ansaf_...@yahoo.com> wrote:

> Hi,
>
> I have been using following siddhi query to get the events count per
> minute;  ts as timestamp (string) and ftp_requests as count (int)
>
> from FTPInStream[command == 'USER']
> select time:timestampInMilliseconds(time:dateAdd(str:replaceAll(ts,'T','
> '), 5, 'hour',"yyyy-MM-dd HH:mm:ss"),'yyyy-MM-dd HH:mm') as milliseconds ,
> uid, id_orig_h, id_orig_p, id_resp_h, id_resp_p
> insert into intermediateStream;
>
> from intermediateStream#window.externalTimeBatch( milliseconds ,1 min,
> milliseconds, 1 min)
> select time:dateFormat(milliseconds, 'yyyy-MM-dd HH:mm' ) as ts ,
> cast(count(milliseconds), 'int') as ftp_requests
> group by milliseconds
> insert into FTPOutStream;
>
> There is a need to populate the ftp_requests parameter of FTPOutStream as an
> accumulative value, i.e. each new number of requests is the sum of itself and
> the values previously added to the stream. To achieve this I changed the query as below:
>
> from FTPInStream[command == 'USER']
> select time:timestampInMilliseconds(time:dateAdd(str:replaceAll(ts,'T','
> '), 5, 'hour',"yyyy-MM-dd HH:mm:ss"),'yyyy-MM-dd HH:mm') as milliseconds ,
> uid, id_orig_h, id_orig_p, id_resp_h, id_resp_p
> insert into intermediateStream;
>
> from intermediateStream
> select milliseconds, cast(count(milliseconds), 'int') as ftp_request
> group by milliseconds
> insert into intermediateStream111;
>
> from intermediateStream111#window.externalTimeBatch( milliseconds ,1 min,
> milliseconds, 1 min)
> select time:dateFormat(milliseconds, 'yyyy-MM-dd HH:mm' ) as ts ,
> cast(sum(ftp_request), 'int') as ftp_requests
> insert into FTPOutStream;
>
> But I am getting nothing in ftp_requests parameter.
>
> Any suggestions?
>
> Regards,
> Aneela Safdar
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] wso2das : Upgraded version with wso2cep 4.2.0

2016-08-21 Thread Grainier Perera
Hi Aneela,

From the next immediate release, both CEP and DAS will have parallel
releases. Therefore, all the new functionalities/improvements of CEP (and
Siddhi) will get included in the DAS release as well. For now, you can
go ahead and use DAS 3.1.0 RC1 [1].

[1] https://github.com/wso2/product-das/releases/tag/v3.1.0-RC1

Regards,

On Sun, Aug 21, 2016 at 2:57 PM, Aneela Safdar <ansaf_...@yahoo.com> wrote:

> Hi,
>
> As there is a new release candidate of wso2cep 4.2.0 available, I am using
> some of the upgraded functions in my Siddhi query. I need to shift from CEP
> to DAS, but the new functions fail to compile since the latest version I
> downloaded is wso2das 3.0.1.
>
> Is there any release of DAS having CEP 4.2.0 integrated??
>
> Regards,
> Aneela Safdar
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] rdbms table data view

2016-08-18 Thread Grainier Perera
Hi Aneela,

First of all, if you haven't changed the 'WSO2_CARBON_DB'
data-source configuration, by default it will use the H2 db. Therefore,
the 'ftp_log_table' table will get created in H2. You can enable and use the
built-in H2 browser to view the data [1]. Moreover, data added to the RDBMS
tables will be stored permanently. However, you can have a delete query from
Siddhi to purge data (see the sketch below).

[1] http://www.vitharana.org/2012/04/how-to-browse-h2-database-of-wso2.html
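
For example, a purge flow along these lines could be used (PurgeStream and
the uid-based condition are just illustrative, not part of your current
setup; treat this as an untested sketch):

define stream PurgeStream (uid string);

from PurgeStream
delete ftp_log_table
on ftp_log_table.uid == uid;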

Regards,

On Thu, Aug 18, 2016 at 9:24 PM, Aneela Safdar <ansaf_...@yahoo.com> wrote:

> I have created an rdbms table in an execution plan and write below query
>
> @from(eventtable = 'rdbms' , datasource.name = 'WSO2_CARBON_DB' ,
> table.name ='ftp_log_table')
> define table ftp_log_table (ts string, uid string, id_orig_h string,
> id_orig_p int, id_resp_h string, id_resp_p int, user string, password
> string,command string, arg string, mime_type string, file_size string,
> reply_code int, reply_msg string);
>
> from FTPInStream
> select *
> insert into ftp_log_table;
>
> There is no validity error, but I haven't created this table in the database
> before. Also, how can I view the data in it? Is it going to store data
> permanently, or for a long period of time, for later reference?
>
> Regards,
> Aneela Safdar
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Updating an output stream in Siddhi Query

2016-08-07 Thread Grainier Perera
Hi Aneela,

As I pointed out in [1], you can use externalTimeBatch [2] with the above
scenario. Furthermore, CEP 4.2.0 (Siddhi 3.1.0 onwards) introduces two new
optional parameters to externalTimeBatch, namely startTime and timeout (see
the sketch after the parameter descriptions below). However, you might not
need those parameters for the above scenario.

Moreover, CEP 4.2.0 is yet to be released, and for the time being you can
download CEP 4.2.0 RC1 [3] as @damith pointed out in reply to [4].


>- *startTime* (Optional): User defined start time. This could either
>be a constant (of type int, long or time) or an attribute of the
>corresponding stream (of type long). If an attribute is provided, initial
>value of attribute would be considered as startTime. When startTime is not
>given, initial value of timestamp is used as the default.
>
>
>- *timeout *(Optional): Time to wait for arrival of new event, before
>flushing and giving output for events belonging to a specific batch. If
>timeout is not provided, system waits till an event from next batch arrives
>to flush current batch.
>
>
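
Putting the above together, a minimal sketch of externalTimeBatch with
startTime and timeout might look like the following (stream and attribute
names are borrowed from your earlier queries, and the start time of 0 and
the 10 sec timeout are only illustrative values):

from intermediateStream#window.externalTimeBatch(milliseconds, 1 min, 0, 10 sec)
select time:dateFormat(milliseconds, 'yyyy-MM-dd HH:mm') as ts, count() as ftp_requests
insert into FTPOutStream;
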
[1] http://stackoverflow.com/a/38811132/1805178
[2]
https://docs.wso2.com/display/CEP410/Inbuilt+Windows#InbuiltWindows-externalTimeBatch
[3] https://github.com/wso2/product-cep/releases/download/
v4.2.0-rc1/wso2cep-4.2.0-RC1.zip
[4] [Dev] Donwloading latest wso2cep 4.2.0 release

Regards,
Grainier.

On Sat, Aug 6, 2016 at 2:31 PM, Aneela Safdar <ansaf_...@yahoo.com> wrote:

> Hi,
>
> I have a Siddhi query to get the count of total events in one minute by
> using a timeBatch window. Using an output stream I am updating a bar chart
> with constantly arriving values: timestamps (date and time, up to the minute)
> on the x-axis and event counts on the y-axis.
>
> But there are times when the events of one minute
> take too long to be transmitted, and hence the query does not give correct
> results.
>
> For instance, if I get 60 events in total, this query first gives me a
> count of 40, which is displayed in the bar chart, but then after a minute it
> changes its value to 20, which is correct according to the logic. I am
> wondering whether there is a way I could update the stream as well as the
> bar chart for any previous timestamps (in that case 40+20) and insert into it
> new values for the next upcoming timestamps.
>
> I have seen that the update function is used with tables, not streams; is that so?
> Also, I want two output streams populating two different bar charts from
> the same input stream. Is the below query correct for that purpose?
>
> Query is:
>
> /* Enter a unique ExecutionPlan */
> @Plan:name('FTPExecutionPlan')
>
> /* Enter a unique description for ExecutionPlan */
> -- @Plan:description('ExecutionPlan')
>
> /* define streams/tables and write queries here ... */
>
> @Import('FTPInStream:1.0.0')
> define stream FTPInStream (ts string, uid string, id_orig_h string,
> id_orig_p int, id_resp_h string, id_resp_p int, user string, password
> string,command string, arg string, mime_type string, file_size string,
> reply_code int, reply_msg string);
>
> @Export('FTPIPOutStream:1.0.0')
> define stream FTPIPOutStream (ip_address string, ftp_requests int);
>
> @Export('FTPOutStream:1.0.0')
> define stream FTPOutStream (ts string, ftp_requests int);
>
>
> from FTPInStream
> select time:dateFormat(str:replaceAll(ts,'T',' '),'yyyy-MM-dd HH:mm',
> 'yyyy-MM-dd HH:mm:ss') as ts, uid, id_orig_h, id_orig_p, id_resp_h,
> id_resp_p
> insert into intermediateStream;
>
> from intermediateStream#window.timeBatch(1 min)
> select ts, cast(count(ts), 'int') as ftp_requests
> group by ts
> insert into FTPOutStream;
>
> from intermediateStream#window.timeBatch(1 min)
> select id_orig_h as ip_address, cast(count(id_orig_h), 'int') as
> ftp_requests
> group by id_orig_h
> insert into FTPIPOutStream;
>
> Regards,
> Aneela Safdar
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Cannot log into management console when disk usage is at 100%

2016-07-04 Thread Grainier Perera
Current audit.log/wso2carbon.log only seems to have logs after the
incident. Earlier logs (wso2carbon.log.2016-07-03) seems to ends with;

> org.apache.qpid.pTID: [-1234] [] [2016-07-03 00:53:07,213] ERROR
> {org.wso2.carbon.registry.core.dataaccess.TransactionManager} -  Failed to
> start new registry transaction.
> {org.wso2.carbon.registry.core.dataaccess.TransactionManager}
> org.h2.jdbc.JdbcSQLException: IO Exception: "java.io.IOException: No space
> left on device";
> "/home/ladmin/cloud_setup/wso2das-3.0.1/repository/database/WSO2CARBON_DB.h2.db"
> [90031-140]

Maybe it couldn't log the incident, since there's "No space left on device".

On Mon, Jul 4, 2016 at 2:39 PM, Dulanja Liyanage <dula...@wso2.com> wrote:

> What was getting printed on the logs?
>
> Thanks,
> Dulanja
>
> On Mon, Jul 4, 2016 at 2:11 PM, Grainier Perera <grain...@wso2.com> wrote:
>
>> Hi All,
>>
>> I was trying to access Carbon Management Console (of DAS 3.0.1) with
>> correct credentials and it kept returning "Login failed! Please recheck the
>> username and password and try again.". Then I checked disk usage of that
>> node and it was at 100% (/dev/sda1 29G 29G 0 100% /). After cleaning some
>> logs and artifacts, I tried to log back in with the same credentials, and
>> this time it worked. Is this the expected behavior?
>>
>> Regards,
>> --
>> Grainier Perera
>> Software Engineer
>> Mobile : +94716122384
>> WSO2 Inc. | http://wso2.com
>> lean.enterprise.middleware
>>
>
>
>
> --
> Thanks & Regards,
> Dulanja Liyanage
> Lead, Platform Security Team
> WSO2 Inc.
>



-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Cannot log into management console when disk usage is at 100%

2016-07-04 Thread Grainier Perera
Hi All,

I was trying to access Carbon Management Console (of DAS 3.0.1) with
correct credentials and it kept returning "Login failed! Please recheck the
username and password and try again.". Then I checked disk usage of that
node and it was at 100% (/dev/sda1 29G 29G 0 100% /). After cleaning some
logs and artifacts, I tried to log back in with the same credentials, and
this time it worked. Is this the expected behavior?

Regards,
-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] WSO2 Committers += Natasha Wijesekara

2016-06-28 Thread Grainier Perera
Congratulations Natasha..!!! :)

On Wed, Jun 29, 2016 at 10:42 AM, Ashen Weerathunga <as...@wso2.com> wrote:

> Congratulations Natasha!!! :)
>
> On Wed, Jun 29, 2016 at 10:23 AM, Nandika Jayawardana <nand...@wso2.com>
> wrote:
>
>> Hi All,
>>
>> It's my pleasure to announce Natasha Wijesekara as a WSO2 Committer.
>> Natasha has been a great contributor to BPS and PC products and in
>> recognition of her contributions, she's been voted as a WSO2 Committer.
>>
>> Congratulations Natasha and keep up the good work!
>>
>> Regards
>> Nandika
>>
>> --
>> Nandika Jayawardana
>> WSO2 Inc ; http://wso2.com
>> lean.enterprise.middleware
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> *Ashen Weerathunga*
> Software Engineer
> WSO2 Inc.: http://wso2.com
> lean.enterprise.middleware
>
> Email: as...@wso2.com
> Mobile: +94 716042995 <94716042995>
> LinkedIn: *http://lk.linkedin.com/in/ashenweerathunga
> <http://lk.linkedin.com/in/ashenweerathunga>*
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] java.lang.NoSuchMethodError when trying to add Kafka Event Receiver

2016-06-05 Thread Grainier Perera
As +sajith mentioned, DAS already has scala 2.10.x packed for Spark. And
Kafka provides two binary builds for multiple versions of Scala (Scala
2.10/Scala 2.11) [1]. So, you should use kafka_2.10-0.9.0.1.tgz and copy
its jar files as dependencies, and when adding dependencies to
/components/lib/, *do not add scala-library-2.10.5.jar,* since it's already
available with DAS 3.0.1.

[1] http://kafka.apache.org/downloads.html

Regards,

On Mon, Jun 6, 2016 at 12:47 AM, Lasantha Fernando <lasan...@wso2.com>
wrote:

>
> On 5 June 2016 at 23:52, Mohanadarshan Vivekanandalingam <mo...@wso2.com>
> wrote:
>
>>
>>
>> On Sun, Jun 5, 2016 at 11:33 PM, Sajith Ravindra <saji...@wso2.com>
>> wrote:
>>
>>> It seems Spark has a dependency on scala-library_2.10.4 and this jar is
>>> already included in DAS pack, therefore copying scala-library-2.11.7.jar
>>> as per the documentation will lead to a conflict. I tested deleting
>>> ./repository/components/plugins/org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a39653b.jar
>>> in the DAS pack and it fixes the above error. But it gives a NoClassDefFound
>>> error at startup.
>>>
>>> AFAIU there's no straightforward way to work around this issue, as it
>>> requires two different versions of the same .jar, other than upgrading
>>> the Spark scala-library dependency.
>>>
>>
>> Yes, @Charith you have few options..
>>
>> 1) Build a single jar/bundle by wrapping necessary dependencies and use.
>>
>
> +1. I think building a proper jar with the correct dependencies would be
> good since that way we can use the latest version of Kafka. WDYT?
>
> Also, @Charitha, can you try putting the scala-library to dropins instead
> of lib. From looking at the jar, it already has an OSGi manifest file, so
> no need to OSGify it again. Did we try starting the server without putting
> the scala-library-2.11.7.jar at all? Anyway, since this is an OSGi binding
> issue, there might be some non-standard bundle combinations that can get it
> to work. But that wouldn't be a clean solution I think.
>
> Thanks,
> Lasantha
>
>
>> 2) Try with Kafka 2.10-0.8.1.1 server
>>
>>
>>>
>>> P.S : @Charith if this is for the sake of testing you can use CEP as it
>>> has not scala-libarary dependency.
>>>
>>
>>
>>
>> --
>> *V. Mohanadarshan*
>> *Associate Tech Lead,*
>> *Data Technologies Team,*
>> *WSO2, Inc. http://wso2.com <http://wso2.com> *
>> *lean.enterprise.middleware.*
>>
>> email: mo...@wso2.com
>> phone:(+94) 771117673
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> *Lasantha Fernando*
> Senior Software Engineer - Data Technologies Team
> WSO2 Inc. http://wso2.com
>
> email: lasan...@wso2.com
> mobile: (+94) 71 5247551
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] DAS minimum HA deployment in one node?

2016-04-12 Thread Grainier Perera
Hi Nirmal,

With the new event-processor.xml, there's no single-node element. If you
set both the HA & Distributed modes to false, it'll run as a single node. If
you need single-node HA, you just have to set the HA config to true and set
worker-enabled and presenter-enabled accordingly.

https://github.com/wso2/carbon-analytics-common/blob/master/features/event-processor-manager/org.wso2.carbon.event.processor.manager.core.feature/src/main/resources/conf/event-processor.xml

Regards,
Grainier

On Tue, Apr 12, 2016 at 12:07 PM, Nirmal Fernando <nir...@wso2.com> wrote:

> @Grainier
> https://github.com/wso2/carbon-analytics-common/commit/270cc54e51c56a200c262d364a2e439cfa15#diff-74f33ea3cfb8dcbab510606ba65d09a4
> removes the single node mode, but when single node mode element wasn't set
> to false explicitly, server doesn't start! Please fix properly.
>
> On Tue, Apr 12, 2016 at 11:09 AM, Samuel Gnaniah <sam...@wso2.com> wrote:
>
>> I thought we recommend two nodes for minimum deployment in production
>> environments. Normally we document the minimum recommended deployment
>> pattern for production. Should we be documenting the configurations for 1
>> node as well?
>>
>> *Samuel Gnaniah*
>> Lead Technical Writer
>>
>> WSO2 (pvt.) Ltd.
>> Colombo, Sri Lanka
>> (+94) 773131798
>>
>> On Tue, Apr 12, 2016 at 10:45 AM, Nirmal Fernando <nir...@wso2.com>
>> wrote:
>>
>>> Great! Can we have this in docs please?
>>>
>>> On Tue, Apr 12, 2016 at 10:42 AM, Niranda Perera <nira...@wso2.com>
>>> wrote:
>>>
>>>> Yes, since both the nodes are in the same file system, there is no need
>>>> for a symbolic link
>>>>
>>>> best
>>>>
>>>> On Tue, Apr 12, 2016 at 10:34 AM, Gihan Anuruddha <gi...@wso2.com>
>>>> wrote:
>>>>
>>>>> Normally what we do is state one node's path in both
>>>>> configuration files. Basically, this symbolic link is used to load some
>>>>> classes at runtime. Since both nodes are identical, this won't be an issue
>>>>> IMO.
>>>>>
>>>>> Regards,
>>>>> Gihan
>>>>>
>>>>> On Tue, Apr 12, 2016 at 10:30 AM, Nirmal Fernando <nir...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> Is it possible to setup DAS minimum HA deployment in one node? AFAIS
>>>>>> the requirement to create a symbolic link makes it impossible?
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Thanks & regards,
>>>>>> Nirmal
>>>>>>
>>>>>> Team Lead - WSO2 Machine Learner
>>>>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>>>>> Mobile: +94715779733
>>>>>> Blog: http://nirmalfdo.blogspot.com/
>>>>>>
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Dev mailing list
>>>>>> Dev@wso2.org
>>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> W.G. Gihan Anuruddha
>>>>> Senior Software Engineer | WSO2, Inc.
>>>>> M: +94772272595
>>>>>
>>>>> ___
>>>>> Dev mailing list
>>>>> Dev@wso2.org
>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Niranda Perera*
>>>> Software Engineer, WSO2 Inc.
>>>> Mobile: +94-71-554-8430
>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>> https://pythagoreanscript.wordpress.com/
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> Thanks & regards,
>>> Nirmal
>>>
>>> Team Lead - WSO2 Machine Learner
>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>> Mobile: +94715779733
>>> Blog: http://nirmalfdo.blogspot.com/
>>>
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>
>
> --
>
> Thanks & regards,
> Nirmal
>
> Team Lead - WSO2 Machine Learner
> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
> Mobile: +94715779733
> Blog: http://nirmalfdo.blogspot.com/
>
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] API Requirements for IoT Snapshot Dashboard

2016-03-29 Thread Grainier Perera
Hi all,
As per the offline discussion, the following are the finalized request-response
structures for the IoT snapshot dashboard API:

1. Retrieve security concerns for the given filters.

>
> *Request: *POST /securityConcerns
> *[*
> {
> "*filteringContext*":"connectivityStatus",
> "*filteringGroups*":*[*"active"*]*
> },
> {
> "filteringContext":"alerts",
> "filteringGroups":["high"]
> }
> *]*
>
> *Response:**[*
> {
> "*context*": "securityConcerns",
> "*data*": *[*
> {
> "*group*": "non-compliant",
> "*label*": "Non Compliant",
> "*count*": 5
> },
> {
> "group": "no-passcode",
> "label": "No Passcode",
> "count": 18
> },
> {
> "group": "no-encryption",
> "label": "Non encrypted",
> "count": 23
> },
> {
> "group": "unmonitored",
> "label": "Unmonitored",
> "count": 12
> }
> *]*
> }
> *]*


2. Retrieve all devices.

>
> *Request:*POST /devices
> *[]*
>
> *Response:**[*
> {
> "*context*": "devices",
> "*data*": *[*
> {
> "*id*": "001",
> "*label*": "Nexus P",
> "*status*": "Blocked",
> "*platform*": "Android",
> "*model*": "HNP001",
> "*actions*": URL,
> },
> {
> "id": "002",
> "label": "iPad Mini",
> "status": "Inactive",
> "platform": "iOS",
> "model": "IPM005",
> "actions": URL,
> }
> *]*
> }
> *]*


3. Retrieve devices count for the given filters.

>
> *Request:*POST /devicesCount
> *[*
> {
> "*filteringContext*":"connectivityStatus",
> "*filteringGroups*":*[*"active"*]*
> },
> {
> "filteringContext":"alerts",
> "filteringGroups":["high"]
> }
> *]*
>
> *Response:**[*
> {
> "*context*": "deviceCount",
> "*data*": *[*
> {
> "*group*": "totalCount",
> "*label*": "Total Count",
> "*count*": 210
> },
> {
> "group": "filteredCount",
> "label": "Filtered Count",
> "count": 57
> }
>* ]*
> }
> *]*


Regards,

On Fri, Mar 11, 2016 at 2:02 PM, Grainier Perera <grain...@wso2.com> wrote:

> Hi all,
>
> I'm in the process of implementing the first phase of IoT Snapshot
> Dashboard. So far I have managed to create generic bar (both vertical &
> horizontal), stack charts (with inter gadget communication) and a LeafletJS
> based OSM map gadget to be used with the IoT Snapshot Dashboard.
>
> However, there isn't any API yet to get the required data to populate the gadgets.
> [1] describes the API requirements for the IoT Snapshot Dashboard. Furthermore,
> as per an offline discussion with DilanA, we decided to use the following
> request/response structure and implement the required APIs.
>
> WDYT?
>
> i.e. : all security concerns for the given filters.
> *Request:*
>
>> POST /iot-analytics/securityConcerns
>> {
>> "filters": [
>> {
>> "filter": "platform",
>> "selections" : ["Android", "iOS"]
>> },
>> {
>> "filter": "ownership",
>> "selections" : ["BYOD"]
>> }
>> ],
>> }
>
>
> *Response:*
>
>> {
>> "status": "success",
>> "message": null,
>>   "data": {
>> [
>>{
>>"filter_id":"non-compliant",
>>"name":"Non Compliant Devices",
>>"count":12
>>},
>>{
>>"filter_id":"unmonitored",
>>"name":"Unmonitored Devices",
>>"count":15
>>}
>> ]
>>   },
>> }
>
>
>
> [1]
> https://docs.google.com/a/wso2.com/spreadsheets/d/1EjGCgMvo8Hgi8HQC9AjHxKVdfYYYMwwQ9Rxugk7lwIE/edit?usp=sharing
>
> Regards,
> --
> Grainier Perera
> Software Engineer
> Mobile : +94716122384
> WSO2 Inc. | http://wso2.com
> lean.enterprise.middleware
>



-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] API Requirements for IoT Snapshot Dashboard

2016-03-11 Thread Grainier Perera
Hi all,

I'm in the process of implementing the first phase of IoT Snapshot
Dashboard. So far I have managed to create generic bar (both vertical &
horizontal), stack charts (with inter gadget communication) and a LeafletJS
based OSM map gadget to be used with the IoT Snapshot Dashboard.

However, there isn't any API yet to get the required data to populate the gadgets.
[1] describes the API requirements for the IoT Snapshot Dashboard. Furthermore,
as per an offline discussion with DilanA, we decided to use the following
request/response structure and implement the required APIs.

WDYT?

i.e. : all security concerns for the given filters.
*Request:*

> POST /iot-analytics/securityConcerns
> {
> "filters": [
> {
> "filter": "platform",
> "selections" : ["Android", "iOS"]
> },
> {
> "filter": "ownership",
> "selections" : ["BYOD"]
> }
> ],
> }


*Response:*

> {
> "status": "success",
> "message": null,
>   "data": {
> [
>{
>"filter_id":"non-compliant",
>"name":"Non Compliant Devices",
>"count":12
>},
>{
>"filter_id":"unmonitored",
>"name":"Unmonitored Devices",
>"count":15
>}
> ]
>   },
> }



[1]
https://docs.google.com/a/wso2.com/spreadsheets/d/1EjGCgMvo8Hgi8HQC9AjHxKVdfYYYMwwQ9Rxugk7lwIE/edit?usp=sharing

Regards,
-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 Complex Event Processor 4.1.0 RC2

2016-02-21 Thread Grainier Perera
Tested following scenarios/features.

1. Distributed deployment of CEP.
2. High availability deployment of CEP.
3. High availability deployment with state persistence (Single Node,
Clustered).
4. Siddhi Extensions in Storm (Map, Reorder, Time Series, Event Table, ML,
Kalman, Geo).

No issues found.
[x] Stable - Please go ahead and release.

Regards,

On Sun, Feb 21, 2016 at 4:04 PM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

> Tested following scenarios/features..
>
> 1. All Samples(-0022) related to Event Receivers
> 2. Basic end-to-end execution Flow
> 3. Execution Manager
> 4. Event Tracer
>
> No issues found.
> [x] Stable - Please go ahead and release..
>
> Thanks,
> Mohan
>
> On Fri, Feb 19, 2016 at 2:04 AM, Grainier Perera <grain...@wso2.com>
> wrote:
> > Hi Devs,
> >
> > This is the second release candidate of WSO2 Complex Event Processor
> 4.1.0.
> >
> > This release fixes the following issues:
> > https://wso2.org/jira/issues/?filter=12644
> >
> > Please download CEP 4.1.0 RC2 and test the functionality and vote. Vote
> will
> > be open for 72 hours or as needed.
> >
> > Source & binary distribution files:
> > https://github.com/wso2/product-cep/releases/tag/v4.1.0-RC2
> >
> > The tag to be voted upon:
> > https://github.com/wso2/product-cep/tree/release-4.1.0-RC2
> >
> > [+] Stable - go ahead and release
> > [-] Broken - do not release (explain why)
> >
> > Thanks and Regards,
> > WSO2 CEP Team.
> >
> > --
> > Grainier Perera
> > Software Engineer
> > Mobile : +94716122384
> > WSO2 Inc. | http://wso2.com
> > lean.enterprise.middleware
> >
> > ___
> > Dev mailing list
> > Dev@wso2.org
> > http://wso2.org/cgi-bin/mailman/listinfo/dev
> >
>
>
>
> --
> V. Mohanadarshan
> Senior Software Engineer,
> Data Technologies Team,
> WSO2, Inc. http://wso2.com
> lean.enterprise.middleware.
>
> email: mo...@wso2.com
> phone:(+94) 771117673
>



-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [VOTE] Release WSO2 Complex Event Processor 4.1.0 RC2

2016-02-18 Thread Grainier Perera
Hi Devs,

This is the second release candidate of WSO2 Complex Event Processor 4.1.0.

*This release fixes the following issues:*
https://wso2.org/jira/issues/?filter=12644

Please download CEP 4.1.0 RC2 and test the functionality and vote. Vote
will be open for 72 hours or as needed.

*Source & binary distribution files:*
https://github.com/wso2/product-cep/releases/tag/v4.1.0-RC2

*The tag to be voted upon:*
https://github.com/wso2/product-cep/tree/release-4.1.0-RC2

[+] Stable - go ahead and release
[-] Broken - do not release (explain why)

Thanks and Regards,
WSO2 CEP Team.

-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 Complex Event Processor 4.1.0 RC1

2016-02-17 Thread Grainier Perera
Hi all,

In addition to the above issues:

Major
- Running CEP in HA mode without persistence throws an exception
(NoPersistenceStoreException). [1]

[1] https://wso2.org/jira/browse/CEP-1475

Regards,
Grainier

On Wed, Feb 17, 2016 at 11:15 AM, Tishan Dahanayakage <tis...@wso2.com>
wrote:

> Hi all,
>
> We are calling off this vote due to following reasons. Will fix the issues
> and revert back with RC2.
>
> Major
> - Starting pack in sample mode does not work when additional system
> parameters are provided. [1]
> - Storm dependency jar does not recursively pack dependencies. Hence new
> Siddhi extensions are failing in Storm. [2]
>
> Minor
> - PMML feature is not working when ML feature already installed[3]
> - Small blue box appears in the top when tooltips appear[4]
>
> [1] https://wso2.org/jira/browse/CEP-1472
> [2] https://wso2.org/jira/browse/CEP-1473
> [3] https://wso2.org/jira/browse/CEP-1471
> [4] https://wso2.org/jira/browse/CEP-1470
>
> Thanks
> Tishan
>
> On Sun, Feb 14, 2016 at 11:11 AM, Grainier Perera <grain...@wso2.com>
> wrote:
>
>> Hi Devs,
>>
>> This is the first release candidate of WSO2 Complex Event Processor
>> 4.1.0.
>>
>> *This release fixes the following issues:*
>> https://wso2.org/jira/issues/?filter=12644
>>
>> Please download CEP 4.1.0 RC1 and test the functionality and vote. Vote
>> will be open for 72 hours or as needed.
>>
>> *Source & binary distribution files:*
>> https://github.com/wso2/product-cep/releases/tag/v4.1.0-RC1
>>
>> *The tag to be voted upon:*
>> https://github.com/wso2/product-cep/tree/release-4.1.0-RC1
>>
>> [+] Stable - go ahead and release
>> [-] Broken - do not release (explain why)
>>
>> Thanks and Regards,
>> WSO2 CEP Team.
>> --
>> Grainier Perera
>> Software Engineer
>> Mobile : +94716122384
>> WSO2 Inc. | http://wso2.com
>> lean.enterprise.middleware
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Tishan Dahanayakage
> Software Engineer
> WSO2, Inc.
> Mobile:+94 716481328
>
> Disclaimer: This communication may contain privileged or other
> confidential information and is intended exclusively for the addressee/s.
> If you are not the intended recipient/s, or believe that you may have
> received this communication in error, please reply to the sender indicating
> that fact and delete the copy you received and in addition, you should not
> print, copy, re-transmit, disseminate, or otherwise use the information
> contained in this communication. Internet communications cannot be
> guaranteed to be timely, secure, error or virus-free. The sender does not
> accept liability for any errors or omissions.
>



-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Handling time-consuming processes within Carbon Metrics Gauges

2016-01-20 Thread Grainier Perera
Hi suho,

It does not have any impact on Siddhi's runtime performance. It only
affects metrics reporting, which runs separately.

Thanks,
Grainier.

On Thu, Jan 21, 2016 at 10:09 AM, Sriskandarajah Suhothayan <s...@wso2.com>
wrote:

> You have not answered my Qn
>
> whats the impact on this when it comes to Siddhi's runtime performance ?
>
> Suho
>
> On Wednesday, January 20, 2016, Sajith Ravindra <saji...@wso2.com> wrote:
>
>> Hi Granier, Suho
>>
>> The memory calculation will be a time-consuming issue since it has to
>> traverse the complete object tree. IMO, we should have the option
>> of executing the metric calculation in a separate thread and reporting back
>> to the caller with the result.
>>
>> I think it's a valid case to have metrics which consume time/resources
>> to calculate. Therefore, it would be a good idea to make the option of
>> asynchronous metric calculation, reporting back once done, available in
>> the carbon-metrics library itself.
>>
>> WDYT?
>>
>> Thanks
>> *,Sajith Ravindra*
>> Senior Software Engineer
>> WSO2 Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> mobile: +94 77 2273550
>> blog: http://sajithr.blogspot.com/
>> <http://lk.linkedin.com/pub/shani-ranasinghe/34/111/ab>
>>
>> On Wed, Jan 20, 2016 at 10:32 AM, Sriskandarajah Suhothayan <
>> s...@wso2.com> wrote:
>>
>>> I understand that Matrics reporting it getting slow.
>>>
>>> At the meantime whats the impact on this when it comes to Siddhi
>>> performance ?
>>> If Siddhi query is also getting halted for 3 sec, then this is going to
>>> be a bigger problem.
>>>
>>> Suho
>>>
>>> On Wed, Jan 20, 2016 at 12:25 PM, Grainier Perera <grain...@wso2.com>
>>> wrote:
>>>
>>>> Currently, the memory usage calculation mechanism used on a Siddhi
>>>> query takes around 3 seconds. Therefore, when it comes to a complex flow with
>>>> several execution plans, it takes around (# of queries * 3) seconds.
>>>> Moreover, we have integrated carbon-metrics [1] (Gauges in this scenario)
>>>> with CEP for metrics calculation and reporting. Therefore, if we were to
>>>> use the same mechanism within the getValue() method of carbon-metrics
>>>> Gauges, it will increase the reporting time consumed by the scheduled reporters
>>>> (per iteration) by ~(# of queries * 3) seconds. That might cause issues
>>>> such as reporters not reporting according to the defined PollingPeriod,
>>>> the Carbon Metrics UI taking a considerable amount of time to update and render,
>>>> etc. Therefore, is there a way to handle such a time-consuming process within
>>>> Carbon Metrics Gauges?
>>>>
>>>> Gauge.getValue() Implementation:
>>>>
>>>> new Gauge() {
>>>> @Override
>>>> public Long getValue() {
>>>> *// Below process takes ~3 seconds.*
>>>> ObjectGraphMeasurer.Footprint footprint =
>>>> ObjectGraphMeasurer.measure(object);
>>>> return MemoryMeasurerUtil.footprintSizeEstimate(footprint);
>>>> }
>>>> });
>>>>
>>>> [1] https://github.com/wso2/carbon-metrics
>>>>
>>>> Thanks,
>>>> Grainier.
>>>> --
>>>> Grainier Perera
>>>> Software Engineer
>>>> Mobile : +94716122384
>>>> WSO2 Inc. | http://wso2.com
>>>> lean.enterprise.middleware
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> *S. Suhothayan*
>>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>>> *WSO2 Inc. *http://wso2.com
>>> * <http://wso2.com/>*
>>> lean . enterprise . middleware
>>>
>>>
>>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>>
>>
>>
>
> --
>
> *S. Suhothayan*
> Technical Lead & Team Lead of WSO2 Complex Event Processor
> *WSO2 Inc. *http://wso2.com
> * <http://wso2.com/>*
> lean . enterprise . middleware
>
>
> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>
>


-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Handling time-consuming processes within Carbon Metrics Gauges

2016-01-19 Thread Grainier Perera
Currently, the memory usage calculation mechanism used on a Siddhi query
takes around 3 seconds. Therefore, when it comes to a complex flow with
several execution plans, it takes around (# of queries * 3) seconds.
Moreover, we have integrated carbon-metrics [1] (Gauges in this scenario)
with CEP for metrics calculation and reporting. Therefore, if we were to
use the same mechanism within the getValue() method of carbon-metrics
Gauges, it will increase the reporting time consumed by the scheduled reporters
(per iteration) by ~(# of queries * 3) seconds. That might cause issues
such as reporters not reporting according to the defined PollingPeriod,
the Carbon Metrics UI taking a considerable amount of time to update and render,
etc. Therefore, is there a way to handle such a time-consuming process within
Carbon Metrics Gauges?

Gauge.getValue() Implementation:

new Gauge() {
@Override
public Long getValue() {
*// Below process takes ~3 seconds.*
ObjectGraphMeasurer.Footprint footprint =
ObjectGraphMeasurer.measure(object);
return MemoryMeasurerUtil.footprintSizeEstimate(footprint);
}
});

[1] https://github.com/wso2/carbon-metrics

Thanks,
Grainier.
-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [CEP] Clarification on Frequent / Lossy Frequent window processors

2015-09-23 Thread Grainier Perera
Hi Sachini,

Could you provide a small description on the functionality, parameters and
return types of Frequent window processor and Lossy frequent window
processor of Siddhi inbuilt windows?

Thanks,
Grainier.
-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [CEP] Clarification on Frequent / Lossy Frequent window processors

2015-09-23 Thread Grainier Perera
Hi Sachini,

Thanks for the explanation.
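
For the archive, a minimal usage sketch of the two windows described below
(the stream name, attributes and parameter values here are only illustrative):

from AccessStream#window.frequent(5, ip)
select ip
insert into FrequentIpStream;

from AccessStream#window.lossyFrequent(0.1, 0.01, ip)
select ip
insert into LossyFrequentIpStream;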

Regards,
Grainier

On Thu, Sep 24, 2015 at 3:47 AM, Sachini Jayasekara <sachi...@wso2.com>
wrote:

> Hi Grainier,
>
>
> *FrequentWindowProcessor* is based on the Misra-Gries counting algorithm.
>
> Parameters: sizeOfTheMap + key to group the events (if no keys are given,
> the concatenation of all the attributes of the event is considered as the
> key).
>
> We keep a map of (key, counter). Whenever an event comes:
>   generate the key based on the parameters given
>   check the map for that key
>   if the key is already in the map:
>     increment the count by 1; add the event to the window; remove the old
>     event corresponding to the key from the window
>   else (it's not there):
>     if map size <= sizeOfTheMap:
>       add a new entry (key, count=1) to the map; add the event to the window
>     else (size > sizeOfTheMap):
>       decrement the count stored in the map by 1 for all the entries
>       remove all the entries which have count 0 from the map and from the window
>       if the count of none of the entries in the map becomes 0, discard the new event
>       else add the new entry (key, count=1) to the map; add the event to the window
>
>
>
>
> *LossyFrequentProcessor*, based on the Lossy Counting algorithm, is also for
> frequency counts.
>
> Parameters: support threshold, error bound, key to group the events (if no
> keys are given, the concatenation of all the attributes of the event is
> considered as the key).
>
> In this algorithm events are divided into buckets.
>
> We keep a map of (key, lossyCount [bucket id, count]). Whenever an event comes:
>   calculate the bucketId
>   generate the key based on the parameters given
>   check the map for that key
>   if the key is already in the map, increment the count by 1
>   else (it's not there), add a new entry (key, [bucketId, 1]) to the map
>   event removal from and insertion into the window is done based on the
>   user-given parameters (you can check the code for the exact conditions)
>
>
> Thanks,
> Sachini
>
> On Wed, Sep 23, 2015 at 1:04 AM, Grainier Perera <grain...@wso2.com>
> wrote:
>
>> Hi Sachini,
>>
>> Could you provide a small description on the functionality, parameters
>> and return types of Frequent window processor and Lossy frequent window
>> processor of Siddhi inbuilt windows?
>>
>> Thanks,
>> Grainier.
>> --
>> Grainier Perera
>> Software Engineer
>> Mobile : +94716122384
>> WSO2 Inc. | http://wso2.com
>> lean.enterprise.middleware
>>
>
>
>
> --
>
>
>
> *Thanks & Regards,Sachini JayasekaraSoftware Engineer; **WSO2 Inc. *
>
> *lean . enterprise . middleware |  http://wso2.com <http://wso2.com> *
>
> Mobile : +94712371165
>



-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [CEP] How to extract 24H formatted hour value from a milliseconds time stamp using Siddhi?

2015-09-21 Thread Grainier Perera
Hi,

I'm trying to extract 'hour' from a millisecond time stamp using the below
method of Siddhi time extension which is mentioned on [1
<https://docs.wso2.com/display/CEP400/Siddhi+Extensions#SiddhiExtensions-time>
].

 extract( timestampInMilliseconds, unit)
>

My test cases as follows;

1. ts = 1442707210 : Sun 20 Sep 2015 05:30:10 AM IST GMT+5:30
2. ts = 1442750400 : Sun 20 Sep 2015 *05:30:00 PM* IST GMT+5:30
3. ts = 1442793630 : Mon 21 Sep 2015 05:30:30 AM IST GMT+5:30

And result for time:extract( ts, 'hour' ) for each test case;
1. 5
2. *5*
3. 5

Seems it is returning extracted hour value in 12h format. Is there a way to
get the extracted hour value in 24h format?

[1]
https://docs.wso2.com/display/CEP400/Siddhi+Extensions#SiddhiExtensions-time

Thanks,
Grainier
-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [CEP] How to extract 24H formatted hour value from a milliseconds time stamp using Siddhi?

2015-09-21 Thread Grainier Perera
Hi,

The "ExtractAttributesFunctionExtension.java" of Siddhi shows
that;

if (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_YEAR)) {
> returnValue = cal.get(Calendar.YEAR);
>
> } else if (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_MONTH)) {
> returnValue = (cal.get(Calendar.MONTH) + 1);
>
> } else if (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_SECOND))
> {
> returnValue = cal.get(Calendar.SECOND);
>
> } else if (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_MINUTE))
> {
> returnValue = cal.get(Calendar.MINUTE);
>
> }
>
> * else if (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_HOUR))
> {returnValue = cal.get(Calendar.HOUR);*
> } else if (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_DAY)) {
> returnValue = cal.get(Calendar.DAY_OF_MONTH);
>
> } else if (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_WEEK)) {
> returnValue = cal.get(Calendar.WEEK_OF_YEAR);
>
> } else if
> (unit.equals(TimeExtensionConstants.EXTENSION_TIME_UNIT_QUARTER)) {
> returnValue = (cal.get(Calendar.MONTH) / 3) + 1;
> }
> return returnValue;
>

According to the Java docs of GregorianCalendar [1], HOUR_OF_DAY should be used
to return the 24H format, and it seems that Siddhi doesn't provide that option.
A possible workaround is sketched below.

[1] http://docs.oracle.com/javase/7/docs/api/java/util/Calendar.html#HOUR
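
As a workaround until extract() supports a 24-hour option, something along
these lines should give the hour in 24H form as a string (the stream and
attribute names are only illustrative; treat this as an untested sketch):

from intermediateStream
select time:dateFormat(milliseconds, 'HH') as hour24, uid
insert into HourStream;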


On Tue, Sep 22, 2015 at 7:55 AM, Grainier Perera <grain...@wso2.com> wrote:

> Hi,
>
> I'm trying to extract 'hour' from a millisecond time stamp using the below
> method of Siddhi time extension which is mentioned on [1
> <https://docs.wso2.com/display/CEP400/Siddhi+Extensions#SiddhiExtensions-time>
> ].
>
>  extract( timestampInMilliseconds, unit)
>>
>
> My test cases as follows;
>
> 1. ts = 1442707210 : Sun 20 Sep 2015 05:30:10 AM IST GMT+5:30
> 2. ts = 1442750400 : Sun 20 Sep 2015 *05:30:00 PM* IST GMT+5:30
> 3. ts = 1442793630 : Mon 21 Sep 2015 05:30:30 AM IST GMT+5:30
>
> And result for time:extract( ts, 'hour' ) for each test case;
> 1. 5
> 2. *5*
> 3. 5
>
> Seems it is returning extracted hour value in 12h format. Is there a way
> to get the extracted hour value in 24h format?
>
> [1]
> https://docs.wso2.com/display/CEP400/Siddhi+Extensions#SiddhiExtensions-time
>
> Thanks,
> Grainier
> --
> Grainier Perera
> Software Engineer
> Mobile : +94716122384
> WSO2 Inc. | http://wso2.com
> lean.enterprise.middleware
>



-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ESB]Load balancing in Point-to-Point Channel

2015-08-10 Thread Grainier Perera
Hi Sachith,

The behavior you are looking for is known as Source IP affinity, and that
can be achieved using Load-balance Endpoint. Refer [1]

[1]
https://docs.wso2.com/display/ESB481/Sample+54%3A+Session+Affinity+Load+Balancing+between+Three+Endpoints

Regards,
Grainier

On Mon, Aug 10, 2015 at 12:35 PM, Sachith Punchihewa sachi...@wso2.com
wrote:

 Hi All,
 I have followed the Point-to-Point Channel EIP tutorial on [1]. According
 to the settings provided, the load balancing picks up an endpoint randomly
 (using Round robin) to serve the request. However, subsequent requests
 originating from the same location (IP) might not be served using the same
 endpoint.

 In a situation where you want to make sure that all the requests
 originating from the same location should be served using the same
 endpoint, how to configure the load balancing to guarantee this behavior?

 [1]
 https://docs.wso2.com/display/IntegrationPatterns/Point-to-Point+Channel

 Thanks and Regards.

 Kamidu Sachith Punchihewa
 *Software Engineer*
 WSO2, Inc.
 lean . enterprise . middleware
 Mobile : +94 (0) 770566749 %2B94%20%280%29%20773%20451194


 Disclaimer: This communication may contain privileged or other
 confidential information and is intended exclusively for the addressee/s.
 If you are not the intended recipient/s, or believe that you may have
 received this communication in error, please reply to the sender indicating
 that fact and delete the copy you received and in addition, you should not
 print, copy, retransmit, disseminate, or otherwise use the information
 contained in this communication. Internet communications cannot be
 guaranteed to be timely, secure, error or virus-free. The sender does not
 accept liability for any errors or omissions.

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
Grainier Perera
Software Engineer
Mobile : +94716122384
WSO2 Inc. | http://wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev