Re: [Architecture] [Dev] [VOTE] Release WSO2 Stream Processor 4.0.0 RC2

2017-12-22 Thread Gimantha Bandara
On Fri, Dec 22, 2017 at 8:14 AM, Chandana Napagoda <cnapag...@gmail.com>
wrote:

> -1, Unable to start  Stream Processor Studio in the windows machine[1][2].
> It was hanging on the below step for more than 20 minutes.
>
I confirmed what Isuru has mentioned.
This is not a bug. If you select text in the command prompt (by marking
it), the console pauses the running process and it will not continue.
Press Enter and the process will resume.


>
> Also, it seems "Installing on Windows"[1] doc is outdated, I can't find
> any place to copy snappy-java jar file.
>
>
>
>
> [1]. https://docs.wso2.com/display/SP400/Installing+on+Windows
> [2]. https://docs.wso2.com/display/SP400/Running+the+Product
>
> Regards,
> Chandana
>
> On 22 December 2017 at 09:59, SajithAR Ariyarathna <sajit...@wso2.com>
> wrote:
>
>> Hi Devs,
>>
>> We are pleased to announce the release candidate of WSO2 Stream Processor
>> 4.0.0.
>>
>> This is the Release Candidate version 2 of the WSO2 Stream Processor 4.0.0
>>
>> Please download, test the product and vote. Vote will be open for 72
>> hours or as needed.
>>
>> Known issues: https://github.com/wso2/product-sp/issues
>>
>> Source and binary distribution files:
>> https://github.com/wso2/product-sp/releases/tag/v4.0.0-RC2
>>
>> The tag to be voted upon:
>> https://github.com/wso2/product-sp/tree/v4.0.0-RC2
>>
>> Please vote as follows.
>> [+] Stable - go ahead and release
>> [-] Broken - do not release (explain why)
>>
>> ~ The WSO2 Analytics Team ~
>> Thanks.
>>
>> --
>> Sajith Janaprasad Ariyarathna
>> Senior Software Engineer; WSO2, Inc.;  http://wso2.com/
>> <https://wso2.com/signature>
>>
>> ___
>> Dev mailing list
>> d...@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
>
> Blog: http://blog.napagoda.com
> Linkedin: https://www.linkedin.com/in/chandananapagoda/
>
>
> ___
> Dev mailing list
> d...@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] wso2is-analytics 5.3.0 how store:// resolved to url ?

2017-06-30 Thread Gimantha Bandara
On Sat, Jul 1, 2017 at 2:04 AM Dmitry Lukyanov <dlukya...@ukr.net> wrote:

> Hello all,
>
> Please help me to understand how "store://" resolved to real url.
> wso2 analytics connects to itself by localhost for urls defined as
> store://... and I need to change this for docker.
>
> Sample gadget.json file:
>
> https://docs.wso2.com/display/DS200/Creating+a+Gadget#CreatingaGadget-Step3-Definethegadget.jsonfile
>
> Thanks,
>  Dmitry
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Adding reusable PDF table generator OSGI component to IS 6.0.0 Analytics

2017-02-07 Thread Gimantha Bandara
+1 to move to shared-analytics.

On Tue, Feb 7, 2017 at 1:32 PM, Danoja Dias <dan...@wso2.com> wrote:

> Hi All,
>
> We decided to write a reusable OSGI component to generate PDF with tables
> using PDFBOX library that is already used in IS pack for PDF generation
> in server side.
>
> We have implemented this in the following location [1] under PR [2]
>
> The usage of this component is described here [3] .
>
> Since this functionality seems common for all analytics, Would it be
> better to add this component in "shared-analytics" instead of within IS
> analytics ?
>
>
>
> [1] https://github.com/wso2/analytics-is/tree/master/components
> [2] https://github.com/wso2/analytics-is/pull/330/files
> [3] https://github.com/DanojaDias/ServersidePDFGenerator/
> blob/master/README.md
>
> --
> Best Regards,
> Danoja Dias
> Intern Software Engineering - WSO2
>
> Email : dan...@wso2.com
> Mobile : +94771160393 <+94%2077%20116%200393>
>
> [image: http://wso2.com/signature] <http://wso2.com/signature>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Analytics] Removing FACET from Indexing data types

2016-04-22 Thread Gimantha Bandara
Hi Isuru,

In Spark, we use "-sp" to denote score parameters, and it also implies that
the specific field will be indexed, because a score parameter can only be
created from indexed fields. The same applies to facets: if we use "-f" to
denote facets, we shouldn't also require "-i", since facet fields will
be indexed anyway.
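To make that concrete, a table definition under the proposed flags might look like the sketch below (assumed syntax based on this thread: "-f" marks a facet and implies indexing, so "-i" is not repeated):

```sql
-- Sketch only: "-f" is the facet flag proposed in this thread, and a
-- facet field is indexed implicitly, so no separate "-i" is needed.
CREATE TEMPORARY TABLE PROCESS_USAGE_SUMMARY USING CarbonAnalytics
OPTIONS (tableName "PROCESS_USAGE_SUMMARY_DATA",
         schema "processDefinitionId string -f,
                 processVersion string -i,
                 processInstanceId string -i",
         primaryKeys "processInstanceId");
```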

On Fri, Apr 22, 2016 at 10:00 AM, Gimantha Bandara <giman...@wso2.com>
wrote:

> Hi Isuru,
>
> Older FACET keyword is also supported. Yes, we are planning to add -f to
> denote the facet attribute.
>
> @Anjana/Niranda WDYT?
>
>
> On Friday, April 22, 2016, Isuru Wijesinghe <isur...@wso2.com> wrote:
>
>> Hi Gimantha,
>>
>> How can we denote a given field in any data type as a facet in
>> *spark-sql.* Lets say as an example I have a field called
>> processDefinitionId (string data-type) and I need to define it as a facet
>> as well (see below example).
>>
>> CREATE TEMPORARY TABLE PROCESS_USAGE_SUMMARY USING CarbonAnalytics
>> OPTIONS (tableName "PROCESS_USAGE_SUMMARY_DATA",
>> schema "processDefinitionId string -i *-f*,
>> processVersion string -i,
>> processInstanceId string -i",
>> primaryKeys "processInstanceId"
>> );
>>
>> is this the way that we can define it in newer version ?
>>
>>
>> On Fri, Apr 22, 2016 at 2:39 AM, Gimantha Bandara <giman...@wso2.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> We are planning to remove "FACET" (this type is used to
>>> categorize/group, to get unique values and to drill-down) from indexing
>>> data types and we will introduce an attribute to mark other data types as a
>>> FACET or not.  Earlier FACETs can be defined only for STRING fields and
>>> even if we define a STRING as a FACET, then we will not be able to search
>>> it as a STRING field. With this change, any data type field can be marked
>>> as a FACET and then the field can be used as a FACET and as the usual data
>>> type as well.
>>> This change will not affect the older DAS capps or event-store
>>> configurations; It will be backward compatible with previous DAS versions
>>> (3.0.0 and 3.0.1). However, if you retrieve the schema of a table using the JS
>>> APIs, REST APIs or the web service, the FACET type will not be there. An
>>> attribute called "isFacet" is used to identify the faceted fields. See
>>> below for an example.
>>>
>>>
>>>
>>> *Older schema*
>>> {
>>> "columns" : {
>>>"logFile" : { "type" : "STRING", "isIndex" : true,
>>> "isScoreParam" : false },
>>>"level" : { "type" : "DOUBLE", "isIndex" : true,
>>> "isScoreParam" : false },
>>>"location" : { "type" : "FACET", "isIndex" : true,
>>> "isScoreParam" : false } },
>>> "primaryKeys" : ["logFile", "level"]
>>> }
>>>
>>>
>>> *Equivalent new schema*
>>>
>>>
>>> {
>>>   "columns" : {
>>>     "logFile"  : { "type" : "STRING", "isIndex" : true, "isScoreParam" : false, "isFacet" : false },
>>>     "level"    : { "type" : "DOUBLE", "isIndex" : true, "isScoreParam" : false, "isFacet" : false },
>>>     "location" : { "type" : "STRING", "isIndex" : true, "isScoreParam" : false, "isFacet" : true }
>>>   },
>>>   "primaryKeys" : ["logFile", "level"]
>>> }
>>> (The FACET type is removed; "location" is now a STRING with "isFacet" : true.)
>>> --
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Isuru Wijesinghe
>> *Software Engineer*
>> WSO2 inc : http://wso2.com
>> lean.enterprise.middleware
>> Mobile: 0710933706
>> isur...@wso2.com
>>
>
>
> --
> Gimantha Bandara
> Software Engineer
> WSO2. Inc : http://wso2.com
> Mobile : +94714961919
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Analytics] Removing FACET from Indexing data types

2016-04-21 Thread Gimantha Bandara
Hi Isuru,

Older FACET keyword is also supported. Yes, we are planning to add -f to
denote the facet attribute.

@Anjana/Niranda WDYT?

On Friday, April 22, 2016, Isuru Wijesinghe <isur...@wso2.com> wrote:

> Hi Gimantha,
>
> How can we denote a given field in any data type as a facet in
> *spark-sql.* Lets say as an example I have a field called
> processDefinitionId (string data-type) and I need to define it as a facet
> as well (see below example).
>
> CREATE TEMPORARY TABLE PROCESS_USAGE_SUMMARY USING CarbonAnalytics OPTIONS
> (tableName "PROCESS_USAGE_SUMMARY_DATA",
> schema "processDefinitionId string -i *-f*,
> processVersion string -i,
> processInstanceId string -i",
> primaryKeys "processInstanceId"
> );
>
> is this the way that we can define it in newer version ?
>
>
> On Fri, Apr 22, 2016 at 2:39 AM, Gimantha Bandara <giman...@wso2.com> wrote:
>
>> Hi all,
>>
>> We are planning to remove "FACET" (this type is used to categorize/group,
>> to get unique values and to drill-down) from indexing data types and we
>> will introduce an attribute to mark other data types as a FACET or not.
>> Earlier FACETs can be defined only for STRING fields and even if we define
>> a STRING as a FACET, then we will not be able to search it as a STRING
>> field. With this change, any data type field can be marked as a FACET and
>> then the field can be used as a FACET and as the usual data type as well.
>> This change will not affect the older DAS capps or event-store
>> configurations; It will be backward compatible with previous DAS versions
>> (3.0.0 and 3.0.1). However if you try to get the Schema of a table using JS
>> APIs, REST APIs or the Webservice, FACET type will not be there. A
>> attribute called "isFacet" is used to identify the FACETed fields. See
>> below for an example.
>>
>>
>>
>> *Older schema*
>> {
>> "columns" : {
>>"logFile" : { "type" : "STRING", "isIndex" : true,
>> "isScoreParam" : false },
>>"level" : { "type" : "DOUBLE", "isIndex" : true,
>> "isScoreParam" : false },
>>"location" : { "type" : "FACET", "isIndex" : true,
>> "isScoreParam" : false } },
>> "primaryKeys" : ["logFile", "level"]
>> }
>>
>>
>> *Equivalent new schema*
>>
>>
>> {
>>   "columns" : {
>>     "logFile"  : { "type" : "STRING", "isIndex" : true, "isScoreParam" : false, "isFacet" : false },
>>     "level"    : { "type" : "DOUBLE", "isIndex" : true, "isScoreParam" : false, "isFacet" : false },
>>     "location" : { "type" : "STRING", "isIndex" : true, "isScoreParam" : false, "isFacet" : true }
>>   },
>>   "primaryKeys" : ["logFile", "level"]
>> }
>> (The FACET type is removed; "location" is now a STRING with "isFacet" : true.)
>> --
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Isuru Wijesinghe
> *Software Engineer*
> WSO2 inc : http://wso2.com
> lean.enterprise.middleware
> Mobile: 0710933706
> isur...@wso2.com
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] [REST APIs][Analytics] GET request with a payload

2016-04-15 Thread Gimantha Bandara
Hi Frank,

Yes, I saw it and read the document; it is very helpful. However, these APIs
were created and released last year, so we cannot change them until the next
major release. Of course, any new changes to these REST APIs and any new
REST APIs will be created adhering to the REST API Guidelines.

On Fri, Apr 15, 2016 at 10:54 PM, Frank Leymann <fr...@wso2.com> wrote:

> We built a REST API Guideline document a few weeks ago.  Do you (Gimantha)
> see that?  Its purpose is to help in designing REST APIs.  If you think it
> would be helpful for you and your team, we can set up a call on this
>
>
> Best regards,
> Frank
>
> 2016-04-15 16:00 GMT+02:00 Manuranga Perera <m...@wso2.com>:
>
>> 1) So {to}, {from}, {start}, {count} are resources ? They are not. How
>> is this REST?
>> 2) What are you searching in POST /analytics/search.
>> tables, drilldown, everything? I can't see that in URL.
>> 3) is POST /analytics/drilldown creating a drilldown or getting one ? If
>> it's getting one, this is also wrong, should be get. If creating, why not
>> PUT?
>>
>>
>>
>>
>> On Mon, Apr 4, 2016 at 12:08 AM, Gimantha Bandara <giman...@wso2.com>
>> wrote:
>>
>>> Please note that "fields" is changed to "columns" for consistency, as
>>> "columns" is used in the APIs.
>>>
>>> On Mon, Apr 4, 2016 at 9:08 AM, Gimantha Bandara <giman...@wso2.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> Thank you for your suggestions! We are going to use GET with the query
>>>> parameter "columns" to get the records filtered by a time range. So when
>>>> only a selected set of columns/fields are needed, Following format can be
>>>> used.
>>>>
>>>> *Get the records within a specific time range.*
>>>>
>>>> GET
>>>> /analytics/tables/{tableName}/{to}/{from}/{start}/{count}?fields=field1,field2,field3
>>>>
>>>> *Drilldown and Search APIs*
>>>>
>>>> Additional JSON element will be added as following
>>>>
>>>> POST /analytics/drilldown or POST /analytics/search
>>>>
>>>> {
>>>>   "tableName" : ,
>>>>
>>>>
>>>>   "fields: ["field1", field2", field3"]
>>>> }
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Mar 24, 2016 at 2:59 PM, Lahiru Sandaruwan <lahi...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> POST for filterings is not an issue for special cases, as document
>>>>> also clearly confirms.
>>>>>
>>>>> However, I think the decision has to be made on practical use cases.
>>>>> This use case doesn't looks like a complex one. As Ayoma mention, it is a
>>>>> good idea to implement two filters to include and exclude.
>>>>>
>>>>> Considering the practical use, if url length is not a problem(i.e.
>>>>> practically user will not have a requirement to use around 400 columns per
>>>>> search, if we average word length to 5), we should go for GET.
>>>>>
>>>>> Otherwise, we can go for POST.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> On Thu, Mar 24, 2016 at 9:01 AM, Sachith Withana <sach...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Gimantha,
>>>>>>
>>>>>> I think the point made by Udara is valid.
>>>>>> Anyways if the user wants to get a selected number of columns, the
>>>>>> chances are it won't exceed the url limit.
>>>>>> ( due to the that number being low).
>>>>>>
>>>>>> Thanks,
>>>>>> Sachith
>>>>>>
>>>>>> On Thu, Mar 24, 2016 at 2:21 PM, Gimantha Bandara <giman...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Sanjeewa,
>>>>>>>
>>>>>>> Thank you for the guidelines doc. The exact problem is discussed in
>>>>>>> 10.2 in the above document. We will be filtering the record values in 
>>>>>>> each
>>>>>>> records by providing the required columns, so only those column values 
>>>>>>> will
>>>>>>> be returned with each record. According 

Re: [Architecture] [Dev] [REST APIs][Analytics] GET request with a payload

2016-04-15 Thread Gimantha Bandara
1) I agree that they are not resources. We were discussing whether to use
query parameters or a POST with those parameters. These APIs were already
released in DAS 3.0.0 and we can't change them now. We will change the APIs
according to the REST guide in the next major release.

2) /analytics/search [1] is used to search the records in a specific table
with a given Lucene query and pagination information. That information is
sent in the POST payload. It doesn't support drilldown.

3) /analytics/drilldown [2] supports Lucene facet based drilldown + search.
Using that, we can retrieve records. Here too we can't use a GET, because
the request has many parameters: table name, facet field name, category
path, pagination information, the Lucene query, etc.


[1]
https://docs.wso2.com/display/DAS300/Retrieving+All+Records+Matching+the+Given+Search+Query+via+REST+API
[2]
https://docs.wso2.com/display/DAS300/Retrieving+Specific+Records+through+a+Drill+Down+Search+via+REST+API
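As an illustration of point 3, a drilldown request carries all of those parameters in its body, roughly as below. The payload field names here are illustrative assumptions, not the exact contract; [2] above documents the authoritative format.

```
POST /analytics/drilldown

{
  "tableName"    : "LOG_TABLE",
  "query"        : "level : ERROR",
  "categoryPath" : { "fieldName" : "location", "path" : ["colombo"] },
  "recordStart"  : 0,
  "recordCount"  : 50
}
```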


On Fri, Apr 15, 2016 at 7:30 PM, Manuranga Perera <m...@wso2.com> wrote:

> 1) So {to}, {from}, {start}, {count} are resources ? They are not. How is
> this REST?
> 2) What are you searching in POST /analytics/search.
> tables, drilldown, everything? I can't see that in URL.
> 3) is POST /analytics/drilldown creating a drilldown or getting one ? If
> it's getting one, this is also wrong, should be get. If creating, why not
> PUT?
>
>
>
>
> On Mon, Apr 4, 2016 at 12:08 AM, Gimantha Bandara <giman...@wso2.com>
> wrote:
>
>> Please note that "fields" is changed to "columns" for consistency, as
>> "columns" is used in the APIs.
>>
>> On Mon, Apr 4, 2016 at 9:08 AM, Gimantha Bandara <giman...@wso2.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> Thank you for your suggestions! We are going to use GET with the query
>>> parameter "columns" to get the records filtered by a time range. So when
>>> only a selected set of columns/fields are needed, Following format can be
>>> used.
>>>
>>> *Get the records within a specific time range.*
>>>
>>> GET
>>> /analytics/tables/{tableName}/{to}/{from}/{start}/{count}?fields=field1,field2,field3
>>>
>>> *Drilldown and Search APIs*
>>>
>>> Additional JSON element will be added as following
>>>
>>> POST /analytics/drilldown or POST /analytics/search
>>>
>>> {
>>>   "tableName" : ,
>>>
>>>
>>>   "fields: ["field1", field2", field3"]
>>> }
>>>
>>>
>>>
>>>
>>> On Thu, Mar 24, 2016 at 2:59 PM, Lahiru Sandaruwan <lahi...@wso2.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> POST for filterings is not an issue for special cases, as document also
>>>> clearly confirms.
>>>>
>>>> However, I think the decision has to be made on practical use cases.
>>>> This use case doesn't looks like a complex one. As Ayoma mention, it is a
>>>> good idea to implement two filters to include and exclude.
>>>>
>>>> Considering the practical use, if url length is not a problem(i.e.
>>>> practically user will not have a requirement to use around 400 columns per
>>>> search, if we average word length to 5), we should go for GET.
>>>>
>>>> Otherwise, we can go for POST.
>>>>
>>>> Thanks.
>>>>
>>>> On Thu, Mar 24, 2016 at 9:01 AM, Sachith Withana <sach...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Gimantha,
>>>>>
>>>>> I think the point made by Udara is valid.
>>>>> Anyways if the user wants to get a selected number of columns, the
>>>>> chances are it won't exceed the url limit.
>>>>> ( due to the that number being low).
>>>>>
>>>>> Thanks,
>>>>> Sachith
>>>>>
>>>>> On Thu, Mar 24, 2016 at 2:21 PM, Gimantha Bandara <giman...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Sanjeewa,
>>>>>>
>>>>>> Thank you for the guidelines doc. The exact problem is discussed in
>>>>>> 10.2 in the above document. We will be filtering the record values in 
>>>>>> each
>>>>>> records by providing the required columns, so only those column values 
>>>>>> will
>>>>>> be returned with each record. According to the document the POST can be
>>>&

Re: [Architecture] [Dev] [REST APIs][Analytics] GET request with a payload

2016-03-24 Thread Gimantha Bandara
Hi Sanjeewa,

Thank you for the guidelines doc. The exact problem is discussed in section
10.2 of that document. We will be filtering the record values in each record
by providing the required columns, so only those column values will be
returned with each record. According to the document, POST can be used
either for updating/creating a resource or for initiating a processing
function. In our case we will simply be retrieving records, but we need to
provide a filter for the record values. So from the user's perspective, it
will be doing some processing and returning filtered records.

We can actually implement the following URL, but we cannot say for sure that
it will stay within the URL length limit:
GET /analytics/tables/{tableName}?columns=column1,column2

Or we can implement something like below,

POST /analytics/tables/{tableName}

{
  "from" : ...,
  "to" : ...,
  "start" : ...,
  "count" : ...,
  "columns" : ["c1", "c2", "c3"]
}

or

POST /analytics/

{
  "tableName" : ...,
  "from" : ...,
  "to" : ...,
  "start" : ...,
  "count" : ...,
  "columns" : ["c1", "c2", "c3"]
}

Considering the URL length limit, I think the second option is better. WDYT?
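The decision rule being debated can be sketched as follows (a hedged illustration; the endpoint paths and the 2000-character cutoff are assumptions, not part of the released API):

```python
from urllib.parse import urlencode

# Practical URL length limit assumed here; real limits vary by client/proxy.
MAX_URL_LEN = 2000

def build_request(table_name, columns):
    """Return (method, url, body): GET with a query parameter while the URL
    stays short, otherwise POST with the column list in the payload."""
    query = urlencode({"columns": ",".join(columns)})
    get_url = "/analytics/tables/%s?%s" % (table_name, query)
    if len(get_url) <= MAX_URL_LEN:
        return ("GET", get_url, None)
    # Column list too long for a URL: fall back to the POST variant above.
    return ("POST", "/analytics/", {"tableName": table_name, "columns": columns})

method, url, body = build_request("LOG_TABLE", ["c1", "c2", "c3"])
```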

On Thu, Mar 24, 2016 at 12:33 PM, Sanjeewa Malalgoda <sanje...@wso2.com>
wrote:

> Hi Gimantha,
> Did you refer REST API guidelines document attached in this mail[1] in
> architecture mailing list.
> When we develop REST APIs please follow that document and if you see
> anything missed there please let us know.
>
> [1][Architecture] REST API Guidelines
>
>
> Thanks,
> sanjeewa.
>
> On Wed, Mar 23, 2016 at 8:01 PM, Gimantha Bandara <giman...@wso2.com>
> wrote:
>
>> Hi all,
>>
>>
>> We have a REST API in DAS to retrieve records in a specific table. It
>> supports GET method with the following url format.
>>
>> /analytics/tables/{tableName}/{from}/{to}/{start}/{count}
>>
>> Sending a GET request to above url will give the records between given
>> "from", "to" time range starting from index "start" with  "count"  page
>> size.
>>
>> Now we need to change the API, so that the user can define the record
>> columns/fields he wants. Current API will return the records with all the
>> values/columns. To do that, we can allow the user to define the columns he
>> needs, in the payload. But it seems that having a payload with a GET is not
>> the convention/the best practice.
>>
>> POST can be used to send the column names as a payload, but here we are
>> not making any updates to {tableName} resource. We will be just retrieving
>> records using a POST. So it also seems not the convention/the best practice.
>>
>> The only solution I can think of is, having a different resource path to
>> get the records with only specified fields/columns. Are there any other
>> solutions?
>>
>> Thanks,
>> Gimantha
>>
>>
>> ___
>> Dev mailing list
>> d...@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> <http://sanjeewamalalgoda.blogspot.com/>blog
> :http://sanjeewamalalgoda.blogspot.com/
> <http://sanjeewamalalgoda.blogspot.com/>
>
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DAS Lucene query AllowLeadingWildcard

2016-03-24 Thread Gimantha Bandara
Yes, it can be any special character. But we use ":" to match a field to a
value, like:

<field> : <value>

On Thu, Mar 24, 2016 at 12:03 PM, Damith Wickramasinghe <dami...@wso2.com>
wrote:

> Hi Gimantha,
>
> Great. One question: can it be any special character?
>
> Regards,
> Damith.
>
> On Thu, Mar 24, 2016 at 11:49 AM, Gimantha Bandara <giman...@wso2.com>
> wrote:
>
>> Hi Damith,
>>
>> If the "roles" field contains comma separated values, you can simply
>> search for the specific role using the following query
>>
>> "roles : role1"
>>
>>
>> Lucene has an analysis process which takes place before indexing. So the
>> field values will be tokenized into terms(Text fields are split removing
>> special characters) , stop words.. etc. In your case, the whole string
>> "role1, role2, role3" will be tokenized into "role1", "role2" and "role3".
>> So you can perform a usual search query as I mentioned above.
>>
>> On Thu, Mar 24, 2016 at 11:35 AM, Damith Wickramasinghe <dami...@wso2.com
>> > wrote:
>>
>>> Hi all,
>>>
>>> I have a column which contains roles as a comma separated string. eg:-
>>> role1,role2,role3
>>>
>>> I need to find records which matches to specific role. As I checked
>>> theres no String contains function. But there is wildcard support[1]. To be
>>> able to work for my usecase wildcard should be of type *role1*. But leading
>>> wild cards are not supported. But as per the [2] Lucene 2.1, they can
>>> be enabled by calling QueryParser.setAllowLeadingWildcard( true ). May
>>> I know whether there is a configuration in DAS to enable this. Also even
>>> this can be achieved I think this will be an expensive operation. If so is
>>> there a best way to achieve this? eg:-custom UDF
>>>
>>> [1]http://www.lucenetutorial.com/lucene-query-syntax.html
>>> [2]https://wiki.apache.org/lucene-java/LuceneFAQ
>>>
>>> Thanks,
>>> Damith.
>>>
>>> --
>>> Software Engineer
>>> WSO2 Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> mobile: *+94728671315 <%2B94728671315>*
>>>
>>>
>>
>>
>> --
>> Gimantha Bandara
>> Software Engineer
>> WSO2. Inc : http://wso2.com
>> Mobile : +94714961919
>>
>
>
>
> --
> Software Engineer
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: *+94728671315 <%2B94728671315>*
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DAS Lucene query AllowLeadingWildcard

2016-03-24 Thread Gimantha Bandara
Hi Damith,

If the "roles" field contains comma separated values, you can simply search
for the specific role using the following query

"roles : role1"


Lucene runs an analysis process before indexing: field values are tokenized
into terms (text fields are split on special characters, stop words are
removed, etc.). In your case, the whole string "role1, role2, role3" will be
tokenized into "role1", "role2" and "role3", so you can perform a usual
search query as shown above.
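As a rough illustration of that analysis step (a simplified stand-in for Lucene's analyzer, not the real implementation):

```python
import re

def analyze(text):
    """Crude approximation of Lucene analysis: lowercase the value and
    split on runs of non-alphanumeric characters, dropping empty tokens."""
    return [t for t in re.split(r"[^0-9a-z]+", text.lower()) if t]

# "role1, role2, role3" is indexed as three separate terms, so the
# query  roles : role1  matches without any leading wildcard.
tokens = analyze("role1, role2, role3")
```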

On Thu, Mar 24, 2016 at 11:35 AM, Damith Wickramasinghe <dami...@wso2.com>
wrote:

> Hi all,
>
> I have a column which contains roles as a comma separated string. eg:-
> role1,role2,role3
>
> I need to find records which matches to specific role. As I checked theres
> no String contains function. But there is wildcard support[1]. To be able
> to work for my usecase wildcard should be of type *role1*. But leading wild
> cards are not supported. But as per the [2] Lucene 2.1, they can be
> enabled by calling QueryParser.setAllowLeadingWildcard( true ). May I
> know whether there is a configuration in DAS to enable this. Also even this
> can be achieved I think this will be an expensive operation. If so is there
> a best way to achieve this? eg:-custom UDF
>
> [1]http://www.lucenetutorial.com/lucene-query-syntax.html
> [2]https://wiki.apache.org/lucene-java/LuceneFAQ
>
> Thanks,
> Damith.
>
> --
> Software Engineer
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: *+94728671315 <%2B94728671315>*
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] [REST APIs][Analytics] GET request with a payload

2016-03-24 Thread Gimantha Bandara
Thank you for your suggestions.
We cannot predict exactly how many columns/fields a user will need; it
depends on how many fields a table has and how many of them the user wants
to retrieve. So the URL length might be exceeded, in which case we will have
to add a new method using POST, as that seems to be the only option.

On Wed, Mar 23, 2016 at 9:34 PM, Lahiru Sandaruwan <lahi...@wso2.com> wrote:

> Yes, if it is possible to put the columns names in the url as Ayoma
> mentioned, we must use that(First i thought it is a complex payload you
> want to send).
>
> Unless there are limitations, like column list doesn't exceed the url
> length limits, we should use GET.
>
> Thanks.
>
> On Wed, Mar 23, 2016 at 3:54 PM, Ayoma Wijethunga <ay...@wso2.com> wrote:
>
>> Hi,
>>
>> It is true that using GET request with a payload is not the best option.
>> Even though it is not strictly prohibited in specs, it can be confusing
>> [1]. REST architecture is very open about how we use HTTP methods, but
>> thinking in terms of REST architecture, I do not think using POST is also
>> the correct approach here [2] (maybe it is just the personal preference).
>>
>> Let me summaries few examples on how others have addressed the same
>> requirement with GET requests.
>>
>> Facebook Graph API is using "field" query parameter for this [3]. For
>> example :
>>
>> Following Graph API call 
>> *https://graph.facebook.com/bgolub?fields=id,name,picture
>>> <https://graph.facebook.com/bgolub?fields=id,name,picture>* will only
>>> return the id, name, and picture in Ben's profile
>>>
>>
>> SharePoint syntax is not very eye candy [4][5], but it goes like :
>>
>>
>>> http://server/siteurl/_vti_bin/listdata.svc/DocumentsOne?$select=MyDocumentType,Title,Id&$expand=MyDocumentType
>>>
>>
>> YouTube API has the same in below form [6] :
>>
>> Example 1: Retrieve number of items in feed, index of
>>> first item in result set, and all entries in the feed:
>>> fields=openSearch:totalResults,openSearch:startIndex,entry
>>>
>>
>> LinkedIn has the same [7]
>>
>>
>>> https://api.linkedin.com/v1/people-search:(people:(id,first-name,last-name,positions:(id,title,summary,start-date,end-date,is-current,company:(id,name,type,size,industry,ticker))
>>>
>>
>> IMO Facebook Graph API has the cleanest mechanism.
>>
>> I believe that if we use a similar format we will not have to introduce
>> new resource paths. Instead we'll be able to provide all the columns,
>> unless user specifically request limited set of fields with a query
>> parameter. WDYT?
>>
>> [1]
>> http://stackoverflow.com/questions/5216567/is-this-statement-correct-http-get-method-always-has-no-message-body
>> [2] https://spring.io/understanding/REST
>> [3]
>> https://developers.facebook.com/docs/graph-api/using-graph-api#fieldexpansion
>> [4]
>> http://sharepoint.stackexchange.com/questions/118633/how-to-select-and-filter-list-items-lookup-column-with-sharepoint-2013-rest-feat
>> [5]
>> http://platinumdogs.me/2013/03/14/sharepoint-adventures-with-the-rest-api-part-1/
>> [6]
>> https://developers.google.com/youtube/2.0/developers_guide_protocol_partial#Fields_Formatting_Rules
>> [7] https://developer.linkedin.com/docs/fields?u=0
>>
>> Best Regards,
>> Ayoma.
>>
>> On Wed, Mar 23, 2016 at 8:13 PM, Lahiru Sandaruwan <lahi...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I think using a POST with a body, for retrieving information is fine
>>> considering the requirement. GET with body is not recommended.
>>>
>>> Thanks.
>>>
>>> On Wed, Mar 23, 2016 at 2:31 PM, Gimantha Bandara <giman...@wso2.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>>
>>>> We have a REST API in DAS to retrieve records in a specific table. It
>>>> supports GET method with the following url format.
>>>>
>>>> /analytics/tables/{tableName}/{from}/{to}/{start}/{count}
>>>>
>>>> Sending a GET request to above url will give the records between given
>>>> "from", "to" time range starting from index "start" with  "count"  page
>>>> size.
>>>>
>>>> Now we need to change the API, so that the user can define the record
>>>> columns/fields he wants. Current API will return the records with all the
>>>> values/columns. To do that, we 

[Architecture] [REST APIs][Analytics] GET request with a payload

2016-03-23 Thread Gimantha Bandara
Hi all,


We have a REST API in DAS to retrieve records from a specific table. It
supports the GET method with the following URL format.

/analytics/tables/{tableName}/{from}/{to}/{start}/{count}

Sending a GET request to the above URL will return the records between the
given "from", "to" time range, starting from index "start" with "count" page
size.

Now we need to change the API so that the user can define the record
columns/fields he wants. The current API returns the records with all the
values/columns. To do that, we can allow the user to define the columns he
needs in the payload. But it seems that having a payload with a GET is not
the convention/best practice.

POST can be used to send the column names as a payload, but here we are not
making any updates to the {tableName} resource; we would just be retrieving
records using a POST. So that also does not seem to be the convention/best
practice.

The only solution I can think of is having a different resource path to
get the records with only the specified fields/columns. Are there any other
solutions?
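One alternative raised earlier in the thread is an optional query parameter for field selection, which avoids both a GET body and a new resource path. A minimal, hypothetical sketch (the `fields` parameter name and the parsing helper are illustrative, not part of the current DAS API):

```java
// Hypothetical sketch: selecting columns via a query parameter, e.g.
//   GET /analytics/tables/Students/0/9999/0/100?fields=Height,Weight
// An absent or empty parameter keeps today's behaviour (all columns).
import java.util.Arrays;
import java.util.List;

public class FieldSelection {

    // Parse the optional "fields" query parameter; null means "all columns".
    static List<String> parseFields(String fieldsParam) {
        if (fieldsParam == null || fieldsParam.trim().isEmpty()) {
            return null; // caller should return every column
        }
        return Arrays.asList(fieldsParam.split(","));
    }

    public static void main(String[] args) {
        System.out.println(parseFields("Height,Weight")); // [Height, Weight]
        System.out.println(parseFields(null));            // null -> all columns
    }
}
```

This keeps GET semantics intact and stays cacheable, at the cost of URL-length limits for very long column lists.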

Thanks,
Gimantha
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Analytics] Improvements to Lucene based Aggregate functions (Installing Aggregates as OSGI components)

2016-03-21 Thread Gimantha Bandara
Hi Chathura,

In my previous reply, I misunderstood your suggestion. Yes, we can move
aggregateFields to the object level, so the user can pass the aggregateFields
through another interface method or through the object constructor. Thank you
for your suggestion!

@Anjana, WDYT? Shall we add another method to pass the fields, instead of
passing them every time a record is processed? The user will then have to
assign the fields to an object-level variable and use that variable inside
the process method.
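The object-level variant discussed above could look roughly like this. This is a hedged sketch: the interface shape and the `init` method name are illustrative, not the actual carbon-analytics API.

```java
// Sketch: aggregateFields passed once per request via init(), not per record.
public class ObjectLevelFieldsSketch {

    interface AggregateFunction {
        void init(String[] aggregateFields); // called once, before any record
        void process(Object value);          // called for every record
        Object finish();                     // final aggregated result
    }

    static class Sum implements AggregateFunction {
        private String[] fields; // held at object level for use while processing
        private double sum;

        public void init(String[] aggregateFields) { this.fields = aggregateFields; }

        public void process(Object value) {
            if (value instanceof Number) {
                sum += ((Number) value).doubleValue();
            }
        }

        public Object finish() { return sum; }
    }

    public static void main(String[] args) {
        Sum s = new Sum();
        s.init(new String[]{"subject_mark"}); // set once per REST call
        for (double v : new double[]{10, 20, 30}) {
            s.process(v);
        }
        System.out.println(s.finish()); // 60.0
    }
}
```

Since aggregateFields is constant within a single REST call, setting it once avoids passing the same array for every record.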

On Mon, Mar 21, 2016 at 11:39 AM, Chathura Ekanayake <chath...@wso2.com>
wrote:

> Hi Gimantha,
>
> Yes, hard-coding aggregateFields imposes a limitation. But, if I
> understood correctly, once we do a REST API call, we have to execute the
> process(...) method for each record in the store. But for a single REST API
> call aggregateFields is a constant. So we can set those fields (at the
> object level) after receiving a REST API call (before executing the
> process() method for the first record).
>
> Regards,
> Chathura
>
>
> On Mon, Mar 21, 2016 at 11:12 AM, Gimantha Bandara <giman...@wso2.com>
> wrote:
>
>> Hi Chathura,
>>
>> AggregateFields are the fields that are needed to calculate the
>> aggregate values. For example, if you want to calculate the SUM of
>> "subject_mark" of records representing students, then subject_mark will be
>> the only element in aggregateFields, since calculating SUM involves only
>> one field. AggregateFields are constant for a specific AggregateFunction,
>> but the name of the field can be changed. AggregateFields are defined using
>> REST APIs or JS APIs as shown below.
>>
>> *Calling custom aggregate functions through Javascript API*
>>
>> var queryInfo = {
>>     tableName: "Students", // table name on which the aggregation is performed
>>     searchParams: {
>>         groupByField: "location", // grouping field, if any
>>         query: "Grade:10", // additional filtering query
>>         aggregateFields: [
>>             {
>>                 fields: ["Height", "Weight"], // fields necessary for the aggregate function
>>                 aggregate: "CUSTOM_AGGREGATE", // unique name of the aggregate function; this is what "getAggregateName" above returns
>>                 alias: "aggregated_result" // alias for the result of the aggregate function
>>             }]
>>     }
>> }
>>
>> client.searchWithAggregates(queryInfo, function(data) {
>>     console.log(data["message"]);
>> }, function(error) {
>>     console.log("error occurred: " + error["message"]);
>> });
>>
>>
>>
>> *Aggregates REST APIs*
>> This is the same as the Javascript API.
>>
>> POST https://localhost:9443/analytics/aggregates
>> {
>>   "tableName": "Students",
>>   "groupByField": "location",
>>   "aggregateFields": [
>>     {
>>       "fields": ["Height", "Weight"],
>>       "aggregate": "CUSTOM_AGGREGATE",
>>       "alias": "aggregated_result"
>>     }]
>> }
>>
>> Once you define the aggregateFields using REST APIs or JS APIs, those
>> fields will be available as the aggregateFields array in the process method.
>>
>> On Mon, Mar 21, 2016 at 10:18 AM, Chathura Ekanayake <chath...@wso2.com>
>> wrote:
>>
>>>
>>>
>>>
>>> This method will get called for all the records which need to be
>>>> aggregated. RecordValuesContext will contain the record values of the
>>>> current record being processed. aggregateFields will contain an array of
>>>> fields which will be used for the aggregation. The order of the
>>>> aggregateFields will matter when we implement the logic. For example, let's
>>>> say we are going to implement the SUM aggregate. Then we know that only one
>>>> field will be required to calculate SUM, and aggregateFields will always
>>>> contain one field name in it, which is the name of the field being SUMed.
>>>>
>>>> public void process(RecordValuesContext ctx, String[] aggregateFields)
>>>>         throws AnalyticsException {
>>>>     if (aggregateFields == null || aggregateFields.length == 0) {
>>>>         throw new AnalyticsException("Field to be aggregated is missing");
>>>>     }
>>>>     Object value = ctx.getValue(aggregateFields[0]);

Re: [Architecture] [Analytics] Improvements to Lucene based Aggregate functions (Installing Aggregates as OSGI components)

2016-03-20 Thread Gimantha Bandara
Hi Chathura,

AggregateFields are the fields that are needed to calculate the aggregate
values. For example, if you want to calculate the SUM of "subject_mark" of
records representing students, then subject_mark will be the only element
in aggregateFields, since calculating SUM involves only one field.
AggregateFields are constant for a specific AggregateFunction, but the name
of the field can be changed. AggregateFields are defined using REST APIs or
JS APIs as shown below.

*Calling custom aggregate functions through Javascript API*

var queryInfo = {
    tableName: "Students", // table name on which the aggregation is performed
    searchParams: {
        groupByField: "location", // grouping field, if any
        query: "Grade:10", // additional filtering query
        aggregateFields: [
            {
                fields: ["Height", "Weight"], // fields necessary for the aggregate function
                aggregate: "CUSTOM_AGGREGATE", // unique name of the aggregate function; this is what "getAggregateName" above returns
                alias: "aggregated_result" // alias for the result of the aggregate function
            }]
    }
}

client.searchWithAggregates(queryInfo, function(data) {
    console.log(data["message"]);
}, function(error) {
    console.log("error occurred: " + error["message"]);
});



*Aggregates REST APIs*
This is the same as the Javascript API.

POST https://localhost:9443/analytics/aggregates
{
  "tableName": "Students",
  "groupByField": "location",
  "aggregateFields": [
    {
      "fields": ["Height", "Weight"],
      "aggregate": "CUSTOM_AGGREGATE",
      "alias": "aggregated_result"
    }]
}

Once you define the aggregateFields using REST APIs or JS APIs, those
fields will be available as the aggregateFields array in the process method.

On Mon, Mar 21, 2016 at 10:18 AM, Chathura Ekanayake <chath...@wso2.com>
wrote:

>
>
>
> This method will get called for all the records which need to be
>> aggregated. RecordValuesContext will contain the record values of the
>> current record being processed. aggregateFields will contain an array of
>> fields which will be used for the aggregation. The order of the
>> aggregateFields will matter when we implement the logic. For example, let's
>> say we are going to implement the SUM aggregate. Then we know that only one
>> field will be required to calculate SUM, and aggregateFields will always
>> contain one field name in it, which is the name of the field being SUMed.
>>
>> public void process(RecordValuesContext ctx, String[] aggregateFields)
>>         throws AnalyticsException {
>>     if (aggregateFields == null || aggregateFields.length == 0) {
>>         throw new AnalyticsException("Field to be aggregated is missing");
>>     }
>>     Object value = ctx.getValue(aggregateFields[0]);
>>     if (value == null) {
>>         throw new AnalyticsException("Error while calculating SUM: value of the field, " +
>>                 aggregateFields[0] + " is null");
>>     }
>>     if (value instanceof Number) {
>>         sum += ((Number) value).doubleValue();
>>     } else {
>>         throw new AnalyticsException("Error while calculating SUM: Value '" + value.toString() +
>>                 "', being aggregated is not numeric.");
>>     }
>> }
>>
>>
>>
> Hi Gimantha
>
> Is aggregateFields parameter a constant for an AggregateFunction object?
> In that case, can we set it just after instantiating an object?
>

I assume that you are asking if we can define the aggregateFields inside the
AggregateFunction object itself, without passing them using the REST API or
JS API. Yes, we can do that too. In that case, we would be hard-coding the
aggregateFields names inside the process method, and your custom aggregate
function could not be used for any other field.

>
> Regards,
> Chathura
>
>
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] [Analytics] Improvements in search APIs in AnalyticsDataService

2016-03-19 Thread Gimantha Bandara
at you want records to be sorted by need to
be indexed beforehand.

[1]
https://docs.wso2.com/display/DAS301/Retrieving+Specific+Records+through+a+Drill+Down+Search+via+REST+API
[2]
https://docs.wso2.com/display/DAS301/Retrieving+All+Records+Matching+the+Given+Search+Query+via+REST+API
[3]
https://docs.wso2.com/display/DAS301/Retrieving+All+Records+Matching+the+Given+Search+Query+via+JS+API
[4]
https://docs.wso2.com/display/DAS301/Retrieving+Specific+Records+through+a+Drill+Down+Search+via+JS+API

-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] [Analytics] Improvements to spark UDFs (Installing UDFs as OSGI components)

2016-03-19 Thread Gimantha Bandara
Hi all,

In DAS, we can define Spark UDFs[1] (user defined functions) which can be
used in Spark scripts. Using DAS 3.0.0 or DAS 3.0.1, we can define UDFs in
our own Java classes, deploy those jar files into
/repository/components/lib, and use the class methods we created as
Spark UDFs.


Other than that, users can now create Spark UDFs as OSGi components by
registering their custom UDFs with the following interface from
carbon-analytics/analytics-processors.

*org.wso2.carbon.analytics.spark.core.udf.CarbonUDF*

This interface does not contain any methods; it is solely used to identify
the registered classes as UDFs. With this approach, UDFs can be installed as
features into DAS without editing the UDF configuration files, and they can
also be installed via a p2 repository.
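A sketch of what such a UDF class could look like. In the real product the class would implement org.wso2.carbon.analytics.spark.core.udf.CarbonUDF (the marker interface above) and be registered as an OSGi service; here the marker is stubbed locally so the sketch is self-contained, and the class and method names are illustrative.

```java
// Hedged sketch of a custom Spark UDF class; not the actual DAS API surface.
public class UdfSketch {

    interface CarbonUDF { } // local stand-in for the real marker interface (no methods)

    public static class StringLengthUDF implements CarbonUDF {
        // Each public method becomes a UDF callable from Spark scripts,
        // e.g. SELECT stringLength(name) FROM students.
        public Integer stringLength(String s) {
            return s == null ? 0 : s.length();
        }
    }

    public static void main(String[] args) {
        System.out.println(new UdfSketch.StringLengthUDF().stringLength("WSO2")); // 4
    }
}
```

Because the marker interface has no methods, registration carries no behavioural contract; the framework discovers the public methods reflectively.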

[1]
https://docs.wso2.com/display/DAS301/Creating+Spark+User+Defined+Functions

-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] [Analytics] Improvements to Lucene based Aggregate functions (Installing Aggregates as OSGI components)

2016-03-19 Thread Gimantha Bandara
Hi all,

DAS 3.0.0 and DAS 3.0.1 have five built-in aggregates[1][2], namely MIN, MAX,
SUM, AVG and COUNT, which can be used in interactive analytics. Some
scenarios may require custom aggregate functions for aggregate
calculations. For this we have introduced a new interface in
carbon-analytics/analytics-core, with three methods that need to be
implemented.

Interface :
org.wso2.carbon.analytics.dataservice.core.indexing.aggregates.AggregateFunction

Following are the three methods that need to be implemented.

*public void process(RecordValuesContext ctx, String[] aggregateFields)
throws AnalyticsException*;

This method will get called for all the records which need to be
aggregated. RecordValuesContext will contain the record values of the
current record being processed. aggregateFields will contain an array of
fields which will be used for the aggregation. The order of the
aggregateFields will matter when we implement the logic. For example, let's
say we are going to implement the SUM aggregate. Then we know that only one
field will be required to calculate SUM, and aggregateFields will always
contain one field name in it, which is the name of the field being SUMed.

public void process(RecordValuesContext ctx, String[] aggregateFields)
        throws AnalyticsException {
    if (aggregateFields == null || aggregateFields.length == 0) {
        throw new AnalyticsException("Field to be aggregated is missing");
    }
    Object value = ctx.getValue(aggregateFields[0]);
    if (value == null) {
        throw new AnalyticsException("Error while calculating SUM: value of the field, " +
                aggregateFields[0] + " is null");
    }
    if (value instanceof Number) {
        sum += ((Number) value).doubleValue();
    } else {
        throw new AnalyticsException("Error while calculating SUM: Value '" + value.toString() +
                "', being aggregated is not numeric.");
    }
}




*public Object finish() throws AnalyticsException*
Once the aggregation is complete for a group or for the whole record set, this
method will be called. It returns the final result of the aggregate
function. For example, for the SUM aggregate:

public Number finish() throws AnalyticsException {
return sum;
}



*public String getAggregateName()*
This method is used to identify the aggregate function uniquely among other
aggregate functions. Return a preferred unique string from this method.
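Putting the three methods together, a complete SUM function might look like the sketch below. RecordValuesContext and AnalyticsException are stubbed minimally so the sketch is self-contained; the real types come from carbon-analytics, and the "CUSTOM_SUM" name is illustrative.

```java
// Hedged sketch: process(), finish() and getAggregateName() combined.
import java.util.HashMap;
import java.util.Map;

public class SumFunctionSketch {

    // Minimal stand-ins for the real carbon-analytics types:
    static class AnalyticsException extends Exception {
        AnalyticsException(String m) { super(m); }
    }

    static class RecordValuesContext {
        private final Map<String, Object> values = new HashMap<>();
        RecordValuesContext put(String k, Object v) { values.put(k, v); return this; }
        Object getValue(String field) { return values.get(field); }
    }

    private double sum;

    public void process(RecordValuesContext ctx, String[] aggregateFields)
            throws AnalyticsException {
        if (aggregateFields == null || aggregateFields.length == 0) {
            throw new AnalyticsException("Field to be aggregated is missing");
        }
        Object value = ctx.getValue(aggregateFields[0]);
        if (!(value instanceof Number)) {
            throw new AnalyticsException("Value being aggregated is not numeric");
        }
        sum += ((Number) value).doubleValue();
    }

    public Object finish() { return sum; }

    public String getAggregateName() { return "CUSTOM_SUM"; } // illustrative unique name

    public static void main(String[] args) throws AnalyticsException {
        SumFunctionSketch f = new SumFunctionSketch();
        for (int mark : new int[]{40, 50, 60}) {
            f.process(new RecordValuesContext().put("subject_mark", mark),
                      new String[]{"subject_mark"});
        }
        System.out.println(f.finish()); // 150.0
    }
}
```

The framework calls process() once per matching record and finish() once per group, so the instance carries the running state between calls.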

*Calling custom aggregate functions through Javascript API*

var queryInfo = {
    tableName: "Students", // table name on which the aggregation is performed
    searchParams: {
        groupByField: "location", // grouping field, if any
        query: "Grade:10", // additional filtering query
        aggregateFields: [
            {
                fields: ["Height", "Weight"], // fields necessary for the aggregate function
                aggregate: "CUSTOM_AGGREGATE", // unique name of the aggregate function; this is what "getAggregateName" above returns
                alias: "aggregated_result" // alias for the result of the aggregate function
            }]
    }
}

client.searchWithAggregates(queryInfo, function(data) {
    console.log(data["message"]);
}, function(error) {
    console.log("error occurred: " + error["message"]);
});


*Note that the order of the elements in the "fields" attribute will be the
same as the order of the aggregateFields parameter's elements in the above
process method. That is, Height will be aggregateFields[0] and Weight will be
aggregateFields[1] in the process method. "CUSTOM_AGGREGATE" should be
implemented based on that order.*



*Aggregates REST APIs*
This is the same as the Javascript API.

POST https://localhost:9443/analytics/aggregates
{
  "tableName": "Students",
  "groupByField": "location",
  "aggregateFields": [
    {
      "fields": ["Height", "Weight"],
      "aggregate": "CUSTOM_AGGREGATE",
      "alias": "aggregated_result"
    }]
}

[1]
https://docs.wso2.com/display/DAS301/Retrieving+Aggregated+Values+of+Given+Records+via+REST+API
[2]
https://docs.wso2.com/display/DAS301/Retrieving+Aggregated+Values+of+Given+Records+via+JS+API
-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [DAS][APIM] DAS REST API evaluation for use with APIM

2015-09-28 Thread Gimantha Bandara
@Rukshan, I think you have to call group by on the "api" field as well;
otherwise you will lose some api values. As per the current implementation, we
group by the given field and aggregate numeric values, so for the current API
to work properly there should be some numeric values to process. You can use a
dummy field for that and see if it works, but it is unnecessary and wasteful
processing.
@Anjana, in our current implementation we don't provide a way to group the
records unless we are going to use aggregate functions, but one might actually
want to group by a field without using aggregate functions. We can change the
API logic so that if the user has not provided an aggregate function, we
simply group the records by the given field and return a set of records with
the grouped values. WDYT?

On Mon, Sep 28, 2015 at 3:42 AM, Rukshan Premathunga <ruks...@wso2.com>
wrote:

> Hi Gimantha,
>
> I could group by multiple attributes as you explained and it is working well.
>
> The other thing is that we need a group-by query to find the distinct
> attributes of the result set.
>
> sql ex:
> select api from myTable group by user.
>
> Here api is not an int value; it is a string. So should group by support
> arithmetic operations at all?
>
> Can we have a query to select distinct api values grouped by user?
>
> Thanks and Regards.
>
>
> On Mon, Sep 28, 2015 at 9:00 AM, Gimantha Bandara <giman...@wso2.com>
> wrote:
>
>> Hi Rukshan,
>>
>> Can you please explain how the user and the time can be related to the
>> sum of requestCounts?
>>
>> If you want to group by multiple fields you can use a multi-element facet
>> field. In your case the "groupByField" field will have values in the
>> format [api_name, api_version]; set aggregateLevel to 1. By setting
>> aggregateLevel to 1, the records will be grouped at the 2nd element's
>> position in the facet field (0-based index; the 2nd element is api_version).
>> If you set aggregateLevel to 0, the records will be grouped at api_name's
>> position only and will not be grouped by api_version.
>>
>> On Fri, Sep 25, 2015 at 12:19 AM, Rukshan Premathunga <ruks...@wso2.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> With the new changes it is possible to apply aggregate functions with
>>> grouping, according to this document[1].
>>>
>>> It worked well with the given scenarios. But we are facing new
>>> issues with the above API because we cannot apply aggregate functions
>>> grouping by multiple attributes. The next issue is that we cannot request
>>> the other extra attribute values we need along with the aggregated values.
>>>
>>> Let's consider this example,
>>>
>>> say we have SQL statement like this,
>>>
>>> Select api,Sum(requestCount) from MyTable group by api.
>>>
>>> So the equivalent REST API request is,
>>>
>>> {
>>>  "tableName":"MyTable",
>>>  "groupByField":"api",
>>>  "aggregateFields":[
>>>{
>>>  "fieldName":"requestCount",
>>>  "aggregate":"SUM",
>>>  "alias":"sum_req"
>>>}
>>>  ]
>>> }
>>>
>>> Next consider this SQL query.
>>>
>>> Select api,version,time,user,Sum(requestCount) from MyTable group by
>>> api,version
>>>
>>> Here we need the extra attributes time and user. And we group by both the
>>> api and version attributes.
>>> But the issue is that we cannot request the extra attributes time and user
>>> with the aggregated value. And it is not possible to group by multiple
>>> attributes.
>>>
>>> I tried with "groupByField":"api,version" but it did not work.
>>>
>>> So can you have a look at the above issue and provide some
>>> suggestions?
>>>
>>>
>>> [1]
>>> https://docs.wso2.com/display/DAS300/Retrieving+Aggregated+Values+of+Given+Records+via+REST+API
>>>
>>> Thanks and Regards.
>>>
>>> On Wed, Aug 5, 2015 at 12:51 PM, Anjana Fernando <anj...@wso2.com>
>>> wrote:
>>>
>>>> Hi Rukshan,
>>>>
>>>> Let's have a F2F chat on this and decide what we can do. We need to
>>>> start from the requirements and go from there.
>>>>
>>>> Cheers,
>>>> Anjana.
>>>>
>>>> On Tue, Aug 4, 2015 at 10:21 PM, Rukshan Premathunga <ruks...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>

Re: [Architecture] [DAS][APIM] DAS REST API evaluation for use with APIM

2015-07-28 Thread Gimantha Bandara
Searching and getting summation by grouping is possible. Note: even in
grouping, if we consider your example,

SELECT api,version,apiPublisher,context,SUM(total_fault_count) as
total_fault_count FROM  `table` WHERE time BETWEEN fromDate AND toDate
GROUP BY api,version,apiPublisher,context;

api, version, apiPublisher, context is a multi-field grouping, so for one
api value there can be several versions, and so on. By simply using your
given SQL query, we can get all the records grouped by executing a single
query. Since facets are basically designed to implement drill-down
functionality, you cannot get the SUMs of all the groups at once. You first
get SUM(total_fault_count) for all the apis, then you drill down one more
group (let's say you selected an api called api1; under api1 you will have
several versions, and you will see SUM(total_fault_count) for each version,
and so on). If you want to get the summation by groups for drill-down, then
facets can be used. Facet features cannot be used to get
SUM(total_fault_count) for all the groupings at once; you would have to call
the facet APIs multiple times for that.


On Tue, Jul 28, 2015 at 1:44 PM, Rukshan Premathunga ruks...@wso2.com
wrote:

 Hi all,

 Thanks for the feedbacks.

 We cannot say these queries are bounded. Going forward, these queries tend
 to be optimised by introducing more constraints.
 Also, is there any possibility of searching and getting summation by
 grouping?

 Thanks and Regards.


 On Tue, Jul 28, 2015 at 1:24 PM, Sinthuja Ragendran sinth...@wso2.com
 wrote:

 Hi Gimantha,

 Thanks for the clarification.

 AFAIU the APIM Spark queries will be executed anyhow and summarized into DAS
 tables, and the above mentioned sample query is something that is going to be
 executed against the summarized data dynamically according to the user's input.
 For example, if the user slides the time range between some values in APIM,
 then the gadgets in the APIM dashboard need to be updated with the SUM of
 requests, AVG, etc. by filtering the summarized data further for the given
 time range. I don't think issuing a SparkSQL query dynamically for such
 dynamic changes is a viable solution.

 Thanks,
 Sinthuja.

 On Tue, Jul 28, 2015 at 1:15 PM, Gimantha Bandara giman...@wso2.com
 wrote:

 [Adding Niranda]

 Hi Sinduja/Rukshan,

 Yes, this can be achieved using facets. There are several facet-related
 APIs. One is to get the facet record count. The default behavior is to return
 the number of matching records given the facet array. We can override the
 default behavior by defining a score function, so each record will contribute
 the value of the score function. If we query for the facet count, it will
 simply return the sum of the score function values. This behavior is similar
 to the SUM aggregate. But for aggregates like AVG we need the current number
 of records to calculate the average. We will need to make two REST calls (one
 to get the record count with the default score function 1, and one to get the
 SUM with the score function aggregating the field) since aggregates like AVG
 are not supported.
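The two-call AVG described above can be sketched as follows. The facetCountWithScore helper is a local stand-in for the two facet-count REST calls (it is not a real DAS client method): one call scores every record as 1 (a count), the other scores each record by the field value (a sum), and the client divides the results.

```java
// Hedged sketch: emulating AVG from two facet-count calls combined client-side.
public class AvgFromTwoCalls {

    // Stand-in for a facet-count REST call with a score function:
    // constantScore=true  -> score fn is 1       -> result is the record count
    // constantScore=false -> score fn is the field -> result is the field sum
    static double facetCountWithScore(double[] values, boolean constantScore) {
        double total = 0;
        for (double v : values) {
            total += constantScore ? 1 : v;
        }
        return total;
    }

    public static void main(String[] args) {
        double[] field = {10, 20, 30};
        double count = facetCountWithScore(field, true);  // first REST call
        double sum = facetCountWithScore(field, false);   // second REST call
        System.out.println(sum / count); // 20.0
    }
}
```

The obvious trade-off is two round trips per AVG, which is why the thread leans toward Spark SQL (with UDFs where needed) for this scenario.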

 As Gihan said, I think Spark SQL will be more suitable for this scenario.
 It supports basic aggregates, and if a custom function is required, we can
 write a UDF.

 Thanks,

 On Tue, Jul 28, 2015 at 11:27 AM, Sinthuja Ragendran sinth...@wso2.com
 wrote:

 Hi,

 One of the main features of DAS is being decoupled from the underlying
 datastore; basically, a user should be able to configure an RDBMS, Cassandra,
 or HBase/HDFS datastore, and neither the client applications nor the analysis
 scripts need to change.

 In APIM 1.9.x and before, we stored the summarized data back into an RDBMS
 to be supported with both BAM and DAS. But as we move forward with
 APIM, IMHO we need to optimize the design based on DAS.

 On Tue, Jul 28, 2015 at 11:02 AM, Rukshan Premathunga ruks...@wso2.com
  wrote:

 Hi all,

 We hope to implement a REST API for WSO2 API Manager to expose APIM
 statistics data. For that we evaluated the WSO2 DAS REST API to check
 whether we can use it.
 Here are the use cases of the APIM REST API and the evaluation result of
 the DAS REST API.

 *Motivation*:

 Currently we use DAS/BAM to generate summarised data, which is stored in an
 RDBMS.
 Next, APIM fetches data from the RDBMS and displays it on the dashboard.
 APIM has implemented a client which fetches the statistics data from the
 RDBMS and is used by the UI to show statistics on the dashboard.

 But we have a new requirement to provide these statistics to a
 bill-generating application. Since the application is separate, we are
 facing problems exposing these statistics to the application.
 As a solution we suggest implementing a REST API, which can be used to
 support the above two scenarios (i.e. the current APIM dashboard and the
 billing application).

 Other advantages.


- Implementing stat dashboard using REST API
   - Client side data processing getting reduced
   - Possibilities to expose statistic

Re: [Architecture] [DAS][APIM] DAS REST API evaluation for use with APIM

2015-07-28 Thread Gimantha Bandara
 Chathuranga.
 Software Engineer.
 WSO2, Inc.

 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture




 --
 *Sinthuja Rajendran*
 Associate Technical Lead
 WSO2, Inc.:http://wso2.com

 Blog: http://sinthu-rajan.blogspot.com/
 Mobile: +94774273955





-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [DAS] Changing the name of Message Console

2015-07-12 Thread Gimantha Bandara
+1 for data explorer.
 Isn't spark console more suitable than query analyzer?

On Sun, Jul 12, 2015 at 9:49 AM, Niranda Perera nira...@wso2.com wrote:

 @suho +1

 @iranga I think you are referring to the spark console. do you think we
 should change the name from 'spark console' to 'query analyzer'?

 On Sun, Jul 12, 2015 at 8:28 AM, Sriskandarajah Suhothayan s...@wso2.com
 wrote:

 +1 for data explorer; I believe it's in line with its primary
 use case.

 Suho

 On Sat, Jul 11, 2015 at 10:46 PM, Iranga Muthuthanthri ira...@wso2.com
 wrote:

 +1. Since the category mostly falls under interactive analytics and is more
 related to querying data, I suggest Query Analyzer.

 On Sun, Jul 12, 2015 at 7:59 AM, Niranda Perera nira...@wso2.com
 wrote:

 Hi all,

 DAS currently ships a UI component named 'message console'. It can be
 used to browse data inside the DAS tables.
 IMO the name 'message console' is misleading; a person who's new to
 DAS would not know its exact use just by reading the name.

 I suggest a more self-explanatory name such as, 'data explorer', 'data
 navigator' etc

 WDYT?

 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/

 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture




 --
 Thanks  Regards

 Iranga Muthuthanthri
 (M) -0777-255773
 Team Product Management


 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture




 --

 *S. Suhothayan*
 Technical Lead  Team Lead of WSO2 Complex Event Processor
  *WSO2 Inc. *http://wso2.com
 * http://wso2.com/*
 lean . enterprise . middleware


 *cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/ | twitter:
 http://twitter.com/suhothayan | linked-in: http://lk.linkedin.com/in/suhothayan*

 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture




 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/

 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture




-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] WSO2 BAM 3.0 M3 Released!

2015-03-30 Thread Gimantha Bandara
The WSO2 BAM team is pleased to announce the third milestone release of
WSO2 BAM v3.0. The distribution is available at [1].
Improvements

   - [BAM-1970 https://wso2.org/jira/browse/BAM-1970] - New Message
   Console Improvements

New Features

   - [BAM-1968 https://wso2.org/jira/browse/BAM-1968] - HBase Analytics
   Datasource for BAM 3.0.0
   - [BAM-1969 https://wso2.org/jira/browse/BAM-1969] - Feature Rich
   Dashboard Support

Tasks

   - [BAM-1967 https://wso2.org/jira/browse/BAM-1967] - Create Analytics
   Service API which can be switched between local and remote mode based on
   the configuration.


The documentation for BAM v3.0 can be found at [2]. Your feedback is most
welcome, and any issues can be reported to the project at [3].

[1] https://svn.wso2.org/repos/wso2/people/gokul/bam3/bam3-m3/
[2]
https://docs.wso2.com/display/BAM300/WSO2+Business+Activity+Monitor+Documentation
[3] https://wso2.org/jira/browse/BAM

- WSO2 BAM Team

-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Analytics Facets APIs in AnalyticsDataService

2015-03-19 Thread Gimantha Bandara
Hi all,

Analytics facet APIs provide indexing capabilities for hierarchical
categorization of table entries in the new analytics data service (please
refer to "[Architecture] BAM 3.0 REST APIs for AnalyticsDataService /
Indexing / Search" for more information). Using the facet APIs, a user can
define multiple categories as indices for a table, which can later be used
to search table entries based on categories. These APIs are generic, so the
user can assign a weight to each category when indexing and combine a
mathematical function to calculate weights.

*Facet Counts*

As an example in log analysis, consider the following log-time values
(from 3 different log lines):
2015/mar/12 20:30:23, 2015/jan/16 13:34:76, 2015/jan/11 01:34:76

In the above example the log time can be defined as a hierarchical facet,
year/month/date. Later, if the user wants to get the counts of log entries
by year/month, the API would return

2015/jan  - Count :2
2015/mar  - Count 1

If the user wants to get the total count of log entries by year, the API
would return

2015 - Count :3

If the user wants to get the count of log entries by year/month/date, the API
returns,

2015/jan/11 - Count :1
2015/jan/16 -  Count :1
2015/mar/12 - Count : 1
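The year/month counts above amount to grouping category paths at a chosen depth. A small self-contained sketch of that behaviour (this mimics facet counts outside DAS; it is not the DAS implementation):

```java
// Hedged sketch: counting hierarchical category paths at a given depth,
// reproducing the year/month facet-count example above.
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FacetCountSketch {

    // Count entries grouped by the first `depth` elements of each path.
    static Map<String, Integer> countAtDepth(List<String[]> paths, int depth) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String[] path : paths) {
            String key = String.join("/", Arrays.copyOfRange(path, 0, depth));
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String[]> logTimes = Arrays.asList(
                new String[]{"2015", "mar", "12"},
                new String[]{"2015", "jan", "16"},
                new String[]{"2015", "jan", "11"});
        System.out.println(countAtDepth(logTimes, 2)); // {2015/jan=2, 2015/mar=1}
        System.out.println(countAtDepth(logTimes, 1)); // {2015=3}
    }
}
```

Increasing the depth narrows each bucket, exactly as in the year/month/date output below.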

*Drill-Down capabilities*

Drill-down capabilities are provided by the facet APIs. The user can drill
down through the facet hierarchy of the index and search table entries. The
user can also combine a search query to filter the table entries. For
example, continuing the log example above, the user queries for the total
count of log lines in 2015/jan/11 (he gets 1 as the count) and then wants to
view the other attributes of the log line (TID, component name, log level,
etc.).


*REST APIs for Facets*

Users will be able to use the facet APIs through REST. Users can create
facet indices via the usual Analytics indexing REST APIs and insert
hierarchical category information through the Analytics REST APIs.
Following are the updated Analytics REST APIs.

1. Drill-down through a facets hierarchy

/analytics/drilldown or /analytics/drilldown-count

{
   tableName :
   categories : [{
  name : hierarchy name  e.g. Publish date
  categoryPath : [ ], hierarchy as an array e.g.
[2001, March, 02]
  }],
   language :  lucene or regex
   query  :  lucene query or regular expression
   scoreFunction : Javascript function to define scoring function
   scoreParams : [] Array of docvalue fields used as parameters for
scoring function
}


2. Querying for Ranges (Additional to facets)

/analytics/searchrange or /analytics/rangecount

{
   tableName : sample-table-name,
   ranges : [{
      label :
      from :
      to :
      minInclusive :
      maxInclusive :
   }],
   language :
   query :
}
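A range-count body could be assembled the same way. Again this is only a sketch of the proposed fields; `range_request` and its defaults are hypothetical:

```python
import json

def range_request(table, label, lo, hi,
                  min_inclusive=True, max_inclusive=False,
                  language="lucene", query="*:*"):
    # Body for POST /analytics/searchrange or /analytics/rangecount,
    # with one labeled numeric range bucket.
    return json.dumps({
        "tableName": table,
        "ranges": [{
            "label": label,
            "from": lo,
            "to": hi,
            "minInclusive": min_inclusive,
            "maxInclusive": max_inclusive,
        }],
        "language": language,
        "query": query,
    })

# Bucket records whose indexed numeric field falls in [0, 100).
print(range_request("logs", "fast-responses", 0, 100))
```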


In addition to the existing index types, two more are introduced: FACET and
SCOREPARAM. FACET is used to define a hierarchical facet field, and
SCOREPARAM is used to define a scoring parameter for the score function.

*Adding Facet fields and score fields to a table*

Facet fields and score fields need to be defined using the indexing APIs.

/analytics/tables/table-name/indices

{
  field : STRING,
  facetField : FACET,
  scoreField : SCOREPARAM
}

The user can then add facet and score values using a POST to:

 /analytics/tables/table-name
[
  {
    values : {
      field : value,
      facetField : {
        weight :
        categoryPath : [ ]
      },
      scoreField : numeric-value
    }
  }
]

or /analytics/records

[
  {
    tableName :
    values : {
      field : value,
      facetField : {
        weight :
        categoryPath : [ ]
      },
      scoreField : numeric-value
    }
  }
]
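Putting the two field types together, a record carrying a facet value and a score parameter might be constructed like this. This is a sketch against the payload shape above; `facet_record` and the sample field names (`message`, `responseTime`) are made up for illustration:

```python
import json

def facet_record(table, plain_values, facet_field, category_path,
                 weight=1.0, score_field=None, score_value=None):
    # One entry for POST /analytics/records: ordinary columns plus a
    # facet field ({weight, categoryPath}) and an optional score field.
    values = dict(plain_values)
    values[facet_field] = {"weight": weight, "categoryPath": category_path}
    if score_field is not None:
        values[score_field] = score_value
    return {"tableName": table, "values": values}

record = facet_record("logs", {"message": "server started"},
                      "log-time", ["2015", "jan", "11"],
                      score_field="responseTime", score_value=42)
print(json.dumps([record]))  # the API accepts a list of such records
```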

Feedback and suggestions are appreciated.

-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-02-17 Thread Gimantha Bandara
Hi Harshan,

Thank you for the reply. What if we do not provide an API to check whether
the table exists, and the user just uses the other APIs as usual? If the
table really does not exist, then they get a response saying the table
doesn't exist, with status code 403?
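A client following that suggestion would simply map the status code of its first real call to a boolean. In this sketch the helper is hypothetical, and the 403 value comes from the question above (404 is the conventional choice discussed later in the thread):

```python
def table_exists(status_code):
    # Interpret a response to a data-access call on a table:
    # 200 -> the table exists; 404 (or the 403 floated above) -> it does
    # not; anything else is an unrelated error.
    if status_code == 200:
        return True
    if status_code in (403, 404):
        return False
    raise RuntimeError("unexpected status %d" % status_code)

print(table_exists(200), table_exists(404))
```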

Thanks,

On Tue, Feb 17, 2015 at 10:38 AM, Harshan Liyanage hars...@wso2.com wrote:

 Hi Gimantha,

 I think Frank has explained everything about the usage of *GET
 /analytics/tables/{tableName}*. Anyway, my point is that if we used *GET
 /analytics/tables/{tableName}* to check the existence of the table, it
 would work fine if the table does not exist. But if the table exists, all
 the table data would be retrieved from the database & sent over the
 network instead of a simple Boolean *TRUE* result. This would introduce an
 unnecessary performance drawback in your backend server & unnecessary
 network traffic.

 So I'm +1 on using  below API for that purpose.

 *GET /analytics/table_exists?tableName=table-name*

 Thanks,

 Lakshitha Harshan
 Software Engineer
 Mobile: *+94724423048*
 Email: hars...@wso2.com
 Blog : http://harshanliyanage.blogspot.com/
 *WSO2, Inc. :** wso2.com http://wso2.com/*
 lean.enterprise.middleware.

 On Mon, Feb 16, 2015 at 1:21 PM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi Frank,

 Thank you for your explanation! What we currently do is use *GET
 /analytics/tables/{tableName}* to get the whole table; if the table does
 not exist, the HTTP status will be 404 and the response body will have a
 message saying that the table does not exist.

 On Sun, Feb 15, 2015 at 9:03 PM, Frank Leymann fr...@wso2.com wrote:

 Hi Gimantha,

 it depends on the scenario: if you want to check the existence of a
 resource, it's fine to use a GET on this resource and receive a 404 Not
 Found.

 But the subtlety is that Not Found, according to HTTP, is a statement in
 time: you cannot infer that the resource does not exist; all that 404 says
 is that it cannot be found at this point in time, i.e. it may be found
 later. If your scenario doesn't care about that, you are fine.

 Furthermore, in case the resource in fact is found, the GET on this
 resource will return the complete table. This might not be acceptable if
 you only want an indicator that the table exists. The signal for existence
 shouldn't be the possibly huge table itself.

 Thus, depending on your scenario, you may consider a corresponding
 function. By the way, this is completely compliant with the REST style,
 which foresees such processing-function resources.


 Best regards,
 Frank

 2015-02-14 15:01 GMT+01:00 Gimantha Bandara giman...@wso2.com:

 Hi Manuranga,

 Already *GET /analytics/tables/{tableName}* returns 404 if the
 table doesn't exist. So we will not need a separate API. Thanks for your
 feedback.

 On Sat, Feb 14, 2015 at 12:22 PM, Manuranga Perera m...@wso2.com
 wrote:

 there shouldn't be a separate endpoint for is-exists

 *GET /analytics/tables/{tableName}* - Will return the table if it exists,
 and if not it should return 404





 --
 Gimantha Bandara
 Software Engineer
 WSO2. Inc : http://wso2.com
 Mobile : +94714961919








 --
 Gimantha Bandara
 Software Engineer
 WSO2. Inc : http://wso2.com
 Mobile : +94714961919








-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-02-15 Thread Gimantha Bandara
Hi Frank,

Thank you for your explanation! What we currently do is use *GET
/analytics/tables/{tableName}* to get the whole table; if the table does not
exist, the HTTP status will be 404 and the response body will have a message
saying that the table does not exist.

On Sun, Feb 15, 2015 at 9:03 PM, Frank Leymann fr...@wso2.com wrote:

 Hi Gimantha,

 it depends on the scenario: if you want to check the existence of a
 resource, it's fine to use a GET on this resource and receive a 404 Not
 Found.

 But the subtlety is that Not Found, according to HTTP, is a statement in
 time: you cannot infer that the resource does not exist; all that 404 says
 is that it cannot be found at this point in time, i.e. it may be found
 later. If your scenario doesn't care about that, you are fine.

 Furthermore, in case the resource in fact is found, the GET on this
 resource will return the complete table. This might not be acceptable if
 you only want an indicator that the table exists. The signal for existence
 shouldn't be the possibly huge table itself.

 Thus, depending on your scenario, you may consider a corresponding
 function. By the way, this is completely compliant with the REST style,
 which foresees such processing-function resources.


 Best regards,
 Frank

 2015-02-14 15:01 GMT+01:00 Gimantha Bandara giman...@wso2.com:

 Hi Manuranga,

 Already *GET /analytics/tables/{tableName}* returns 404 if the
 table doesn't exist. So we will not need a separate API. Thanks for your
 feedback.

 On Sat, Feb 14, 2015 at 12:22 PM, Manuranga Perera m...@wso2.com wrote:

 there shouldn't be a separate endpoint for is-exists

 *GET /analytics/tables/{tableName}* - Will return the table if it exists,
 and if not it should return 404





 --
 Gimantha Bandara
 Software Engineer
 WSO2. Inc : http://wso2.com
 Mobile : +94714961919








-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-02-14 Thread Gimantha Bandara
Hi Manuranga,

Already *GET /analytics/tables/{tableName}* returns 404 if the
table doesn't exist. So we will not need a separate API. Thanks for your
feedback.

On Sat, Feb 14, 2015 at 12:22 PM, Manuranga Perera m...@wso2.com wrote:

 there shouldn't be a separate endpoint for is-exists

 *GET /analytics/tables/{tableName}* - Will return the table if it exists,
 and if not it should return 404





-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-02-13 Thread Gimantha Bandara
@Roshan,

These APIs are tightly coupled with the product. If a user wants to migrate
to a newer version, then the user has to use the newer APIs that come with
the new product version. We have started a separate mail thread to discuss
this.

@Harshan

Again, *GET /analytics/tables/is-exists?tableName=table-name* will
conflict with API #5, since each path parameter is optional. So I changed
API #3 to:
*GET /analytics/table_exists?tableName=table-name*

On Fri, Feb 13, 2015 at 11:41 AM, Harshan Liyanage hars...@wso2.com wrote:

 Hi Gimantha,

 I think it's better if you could change API #3 to something more
 meaningful, like *GET /analytics/tables/is-exists?tableName=table-name*.
 With the following format for that API, it looks more like you are
 trying to fetch the table instead of checking for its existence. Anyway,
 that's a minor issue and may not be applicable to your domain.

 3. Check if a table exists
 *GET /analytics/tables?tableName=table-name*
 ; table-name - The name of the table being checked.

 Thanks,

 Lakshitha Harshan
 Software Engineer
 Mobile: *+94724423048*
 Email: hars...@wso2.com
 Blog : http://harshanliyanage.blogspot.com/
 *WSO2, Inc. :** wso2.com http://wso2.com/*
 lean.enterprise.middleware.

 On Fri, Feb 13, 2015 at 9:49 AM, Roshan Wijesena ros...@wso2.com wrote:

 Hi Gimantha,

 Don't we need to maintain a version with an API?

 Regards
 Roshan

 On Fri, Feb 13, 2015 at 3:08 AM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi all,
 I changed the APIs so they follow the convention, and I tested whether
 there are conflicts between some APIs. JAX-RS treats *GET
 /analytics/tables/{tableName}/{from}*, *GET
 /analytics/tables/{tableName}/recordcount* and *GET
 /analytics/tables/{tableName}/indices* as different APIs, so there will
 not be any conflict.

 Changed APIs :

 1. Create a table
 *POST /analytics/tables*
 {
tableName : table-name
 }
  ; table-name - The name of the table to be created.

 2. Delete a table.
 *DELETE /analytics/tables*/*{tableName}*

 ; tableName - The name of the table to be deleted.

 3. Check if a table exists
 *GET /analytics/tables?tableName=table-name*
 ; table-name - The name of the table being checked.

 4. List All the tables
 *GET /analytics/tables*
 ;Response will be a JSON array of table names. e.g. [ table1 ,
 table2 , table3 ]

 5. Get the records from a table.
 *GET /analytics/tables/{tableName}/{from}/{to}/{start}/{count}*
 ; tableName - The name of the table from which the records are retrieved.
 ; from - The starting time to get records from.
 ; to - The ending time to get records to.
 ; start - The paginated index from value
 ; count - The paginated records count to be read
 ; response - takes the format of the request content of No.7

 7. Create records ( can be created in different tables or in the same )
 *POST /analytics/records*
 ; Content - A list of records in json format like in below.
 [
 {
 id: id1, (optional),
 tableName: tableName1,
 timestamp: long-value, (optional)
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
{
 tableName: tableName2,
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
 ]

 8. Delete records
 *DELETE /analytics/tables/{tableName}/{timeFrom}/{timeTo}*
 ; tableName - Name of the table from which the records are deleted.
 ; timeFrom - The starting time to delete records from.
 ; timeTo - The end time to delete records to.

 9. Update a set of records
 *PUT /analytics/records* (PATCH would be more suitable, but JAX-RS does
 not provide the PATCH method out of the box, so we would have to implement
 it)
 ; Content - As same as the POST method for creating records

 10. Get the record count of table
 *GET /analytics/tables/{tableName}/recordcount*
 ; tableName - The name of the table

 11. Create Indices for a table
 *POST /analytics/tables/{tableName}/indices*
 ; tableName - The name of the table of which the indices are set
 ; Content - takes the following format. TYPE is one of INTEGER,
 BOOLEAN, DOUBLE, STRING, FLOAT, LONG
 {
 indexColumnName1 : TYPE1,
 indexColumnName2 : TYPE2
 }
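The index-definition body for API #11 could be validated client-side before posting, for example as below. This is a sketch; the allowed type names are taken from the list above, while the helper and the sample column names are assumptions:

```python
import json

ALLOWED_TYPES = {"INTEGER", "BOOLEAN", "DOUBLE", "STRING", "FLOAT", "LONG"}

def indices_request(**columns):
    # Body for POST /analytics/tables/{tableName}/indices: a map from
    # index column name to one of the supported type names.
    for column, index_type in columns.items():
        if index_type not in ALLOWED_TYPES:
            raise ValueError("unsupported index type %s for column %s"
                             % (index_type, column))
    return json.dumps(columns)

print(indices_request(logLevel="STRING", timestamp="LONG"))
```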

 12. get the indices of a table
 *GET /analytics/tables/{tableName}/indices*
 ; tableName - The name of the table
 ; Response will be of the format of the previous POST request's Content.

 13. Clear the indices of a table
 *DELETE /analytics/tables/{tableName}/indices*
 ; tableName - The name of the table

 14. Search records of a table
 *POST /analytics/search*
 ; Content - takes the following format
 {
 tableName: sampleTableName,
 language: sampleLanguageName,
 query: sampleQuery,
 start: start-location-of-the-result,
 count: maximum-number-of-entries-to-return
 }
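A search body matching API #14 could be built like so. The field names follow the format above; the `search_request` helper and its defaults are illustrative:

```python
import json

def search_request(table, query, language="lucene", start=0, count=100):
    # Body for POST /analytics/search; start/count drive pagination.
    return json.dumps({
        "tableName": table,
        "language": language,
        "query": query,
        "start": start,
        "count": count,
    })

print(search_request("logs", "logLevel:ERROR", start=0, count=10))
```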

 On Wed, Jan 21, 2015 at 8:38 AM, Anjana Fernando anj...@wso2.com
 wrote:

 Hi Gimantha,

 You haven't really replied to Sinthuja's points there. We have to
 decide

Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-01-20 Thread Gimantha Bandara
Hi Harshan,

Thanks for the feedback. The list of IDs of the records to be retrieved can
be too long, so setting a list of IDs as a query param is not convenient.
Even for Search, we have to pass a query, which can be too long. That's why
the POST method is used for Search.

Thanks,

On Tue, Jan 20, 2015 at 12:52 PM, Sinthuja Ragendran sinth...@wso2.com
wrote:

 Hi Gimantha,

 I think following the conventions will be more intuitive to the users, and
 we should have a proper reason not to follow them. A POST request is
 generally sent to create the object, and the back end decides where to
 create the table resource; therefore it should be POST to
 '/analytics/tables' and the message body should contain the table name. If
 you want to use /analytics/{tableName}, then you should use PUT rather
 than POST [1]. But IMO POST is the most suitable operation for this
 context.

 And throughout the URL patterns given below, in order to fetch the records
 from a table, you have used the '/analytics/records/{tableName}' URL
 pattern, where 'records' is in the middle; that is not the correct data
 hierarchy, and again I feel it's not the convention. Basically, tables
 contain records, not the other way around. Therefore, if we use POST to
 '/analytics/tables' to create a table, then you can simply use
 'analytics/{tableName}' for GET/POST on tables; IMHO 'records' is just an
 additional segment which is not needed here.

 [1] http://restcookbook.com/HTTP%20Methods/put-vs-post

 Thanks,
 Sinthuja.

 On Tue, Jan 20, 2015 at 1:29 AM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi Sinduja,

 Thank you for the feedback.

 On Mon, Jan 19, 2015 at 12:04 PM, Sinthuja Ragendran sinth...@wso2.com
 wrote:

 Hi gimantha,

 Please see the comments inline.

 On Sun, Jan 18, 2015 at 11:24 PM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi,
 Currently, I am working on $subject. Basically the methods in
 AnalyticsDataService will be exposed through REST APIs. Please refer to
 Architecture mail thread *[Architecture] BAM 3.0 Data Layer
 Implementation / RDBMS / Distributed Indexing / Search* for more
 Details. Following are the supported REST APIs.

 1. Create a table
 *POST /analytics/{tableName}*
  ; tableName - The name of the table to be created.


 IMHO the above should be POST to '/analytics/*tables*' and the request
 content should have the table name as given below.
 {
  tableName : Test
 }

 Since the POST takes only the table name, it is straightforward to use it
 as a path parameter.


 2. Delete a table
 *DELETE /analytics/{tableName} *
 ; tableName - The name of the table to be deleted.


 3. Check if a table exists
 *GET /analytics/{tableName} *
 ; tableName - The name of the table being checked.

 4. List All the tables
 *GET /analytics/tables*
 ;Response will be an JSON array of table names. e.g. [ table1 ,
 table2 , table3 ]

 5. Get the records from a table.
 *GET /analytics/records/{tableName}/{from}/{to}/{start}/{count} *
 ; tableName - The name of the table from which the records are
 retrieved.
 ; from - The starting time to get records from.
 ; to - The ending time to get records to.
 ; start - The paginated index from value
 ; count - The paginated records count to be read
 ; response - takes the format of the request content of No.7


 Do we need to have 'records' in the URL?  I think it's better to have  
 */analytics/{tableName}/{from}/{to}/{start}/{count}
 *


 "records" is there to avoid conflicts with other contexts. As an example,
 if we remove "records", the URL will take the format
 /analytics/{tableName}, which is already a defined REST API.


 6. Get the records from a table (By IDs)
 *POST /analytics/records/{tableName}*
 ; tableName - The name of the table from which the records are
 retrieved.
 ; Content  - A List of IDs of the records to be retrieved in the
 following format.
 [ id1 , id2 , id3 ]
 ; response - takes the format of the request content of No.7


 Similarly can we have this as * /analytics/{tableName}?*


 7. Create records ( can be created in different tables or in the same )
 *POST /analytics/records*
 ; Content - A list of records in json format like in below.
 [
 {
 id: id1,
 tenantId: -1234,
 tableName: tableName1,
 timestamp: -mm-dd hh:mm:ss,
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
{
 id: id2,
 tenantId: -1234,
 tableName: tableName2,
 timestamp: -mm-dd hh:mm:ss,
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
 ]

 8. Delete records
 *DELETE /analytics/records/{tableName}/{timeFrom}/{timeTo}*
 ; tableName - Name of the table from which the records are deleted.
 ; timeFrom - The starting time to delete records from.
 ; timeTo - The end time to delete records to.


 Again do we need to have 'records' in the middle?  IMHO

Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-01-20 Thread Gimantha Bandara
Hi,
Good point! Initially, the search was implemented in such a way that it
returned a list of IDs of records that match the query. Now the search
method has been changed so it returns the records themselves. So +1 for
removing API #6.

@Sinduja, @Harshan
   Thanks for the reference links and for the feedback.

On Tue, Jan 20, 2015 at 6:52 PM, Harshan Liyanage hars...@wso2.com wrote:

 Hi Gimantha,

 Could you please explain the use-case for the API  #6? IMO there should
 not be any operation to fetch a list of records by ids. Instead there could
 be an API which sends a list of records as the response for a query.

 For API #14 you can still use the GET method with query strings. I think
 you have included start & count parameters to control the pagination. For
 that you can use the HTTP Range header [1] or Link headers as mentioned in
 [2].

 [1]. http://otac0n.com/blog/2012/11/21/range-header-i-choose-you.html
 [2]. https://developer.github.com/v3/#pagination
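For reference, the Range-header alternative mentioned in [1] might look like this on the client side. The `items` range unit is a convention from that article, not something the Analytics API defines:

```python
def pagination_headers(start, count):
    # Express start/count pagination as an HTTP Range header instead of
    # body parameters; the server would answer 206 Partial Content.
    end = start + count - 1  # Range end positions are inclusive
    return {"Range": "items=%d-%d" % (start, end)}

print(pagination_headers(0, 100))
```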

 Regards,

 Lakshitha Harshan
 Software Engineer
 Mobile: *+94724423048*
 Email: hars...@wso2.com
 Blog : http://harshanliyanage.blogspot.com/
 *WSO2, Inc. :** wso2.com http://wso2.com/*
 lean.enterprise.middleware.

 On Tue, Jan 20, 2015 at 6:34 PM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi Harshan,

 Thanks for the feedback. The list of IDs of the records to be retrieved
 can be too long, so setting a list of IDs as a query param is not
 convenient. Even for Search, we have to pass a query, which can be too
 long. That's why the POST method is used for Search.

 Thanks,

 On Tue, Jan 20, 2015 at 12:52 PM, Sinthuja Ragendran sinth...@wso2.com
 wrote:

 Hi Gimantha,

 I think following the conventions will be more intuitive to the users,
 and we should have a proper reason not to follow them. A POST request is
 generally sent to create the object, and the back end decides where to
 create the table resource; therefore it should be POST to
 '/analytics/tables' and the message body should contain the table name. If
 you want to use /analytics/{tableName}, then you should use PUT rather
 than POST [1]. But IMO POST is the most suitable operation for this
 context.

 And throughout the URL patterns given below, in order to fetch the records
 from a table, you have used the '/analytics/records/{tableName}' URL
 pattern, where 'records' is in the middle; that is not the correct data
 hierarchy, and again I feel it's not the convention. Basically, tables
 contain records, not the other way around. Therefore, if we use POST to
 '/analytics/tables' to create a table, then you can simply use
 'analytics/{tableName}' for GET/POST on tables; IMHO 'records' is just an
 additional segment which is not needed here.

 [1] http://restcookbook.com/HTTP%20Methods/put-vs-post

 Thanks,
 Sinthuja.

 On Tue, Jan 20, 2015 at 1:29 AM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi Sinduja,

 Thank you for the feedback.

 On Mon, Jan 19, 2015 at 12:04 PM, Sinthuja Ragendran sinth...@wso2.com
  wrote:

 Hi gimantha,

 Please see the comments inline.

 On Sun, Jan 18, 2015 at 11:24 PM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi,
 Currently, I am working on $subject. Basically the methods in
 AnalyticsDataService will be exposed through REST APIs. Please refer to
 Architecture mail thread *[Architecture] BAM 3.0 Data Layer
 Implementation / RDBMS / Distributed Indexing / Search* for more
 Details. Following are the supported REST APIs.

 1. Create a table
 *POST /analytics/{tableName}*
  ; tableName - The name of the table to be created.


 IMHO the above should be POST to '/analytics/*tables*' and the
 request content should have the table name as given below.
 {
  tableName : Test
 }

 Since the POST takes only the table name, it is straightforward to use
 it as a path parameter.


 2. Delete a table
 *DELETE /analytics/{tableName} *
 ; tableName - The name of the table to be deleted.


 3. Check if a table exists
 *GET /analytics/{tableName} *
 ; tableName - The name of the table being checked.

 4. List All the tables
 *GET /analytics/tables*
 ;Response will be an JSON array of table names. e.g. [ table1 ,
 table2 , table3 ]

 5. Get the records from a table.
 *GET /analytics/records/{tableName}/{from}/{to}/{start}/{count} *
 ; tableName - The name of the table from which the records are
 retrieved.
 ; from - The starting time to get records from.
 ; to - The ending time to get records to.
 ; start - The paginated index from value
 ; count - The paginated records count to be read
 ; response - takes the format of the request content of No.7


 Do we need to have 'records' in the URL?  I think it's better to have  
 */analytics/{tableName}/{from}/{to}/{start}/{count}
 *


 "records" is there to avoid conflicts with other contexts. As an
 example, if we remove "records", the URL will take the format
 /analytics/{tableName}, which is already a defined REST API.


 6. Get the records from a table (By IDs)
 *POST /analytics/records/{tableName

Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-01-19 Thread Gimantha Bandara
Hi Sinduja,

Thank you for the feedback.

On Mon, Jan 19, 2015 at 12:04 PM, Sinthuja Ragendran sinth...@wso2.com
wrote:

 Hi gimantha,

 Please see the comments inline.

 On Sun, Jan 18, 2015 at 11:24 PM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi,
 Currently, I am working on $subject. Basically the methods in
 AnalyticsDataService will be exposed through REST APIs. Please refer to
 Architecture mail thread *[Architecture] BAM 3.0 Data Layer
 Implementation / RDBMS / Distributed Indexing / Search* for more
 Details. Following are the supported REST APIs.

 1. Create a table
 *POST /analytics/{tableName}*
  ; tableName - The name of the table to be created.


 IMHO the above should be POST to '/analytics/*tables*' and the request
 content should have the table name as given below.
 {
  tableName : Test
 }

Since the POST takes only the table name, it is straightforward to use it
as a path parameter.


 2. Delete a table
 *DELETE /analytics/{tableName} *
 ; tableName - The name of the table to be deleted.


 3. Check if a table exists
 *GET /analytics/{tableName} *
 ; tableName - The name of the table being checked.

 4. List All the tables
 *GET /analytics/tables*
 ;Response will be an JSON array of table names. e.g. [ table1 ,
 table2 , table3 ]

 5. Get the records from a table.
 *GET /analytics/records/{tableName}/{from}/{to}/{start}/{count} *
 ; tableName - The name of the table from which the records are retrieved.
 ; from - The starting time to get records from.
 ; to - The ending time to get records to.
 ; start - The paginated index from value
 ; count - The paginated records count to be read
 ; response - takes the format of the request content of No.7


 Do we need to have 'records' in the URL?  I think it's better to have  
 */analytics/{tableName}/{from}/{to}/{start}/{count}
 *


"records" is there to avoid conflicts with other contexts. As an example,
if we remove "records", the URL will take the format
/analytics/{tableName}, which is already a defined REST API.


 6. Get the records from a table (By IDs)
 *POST /analytics/records/{tableName}*
 ; tableName - The name of the table from which the records are retrieved.
 ; Content  - A List of IDs of the records to be retrieved in the
 following format.
 [ id1 , id2 , id3 ]
 ; response - takes the format of the request content of No.7


 Similarly can we have this as * /analytics/{tableName}?*


 7. Create records ( can be created in different tables or in the same )
 *POST /analytics/records*
 ; Content - A list of records in json format like in below.
 [
 {
 id: id1,
 tenantId: -1234,
 tableName: tableName1,
 timestamp: -mm-dd hh:mm:ss,
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
{
 id: id2,
 tenantId: -1234,
 tableName: tableName2,
 timestamp: -mm-dd hh:mm:ss,
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
 ]

 8. Delete records
 *DELETE /analytics/records/{tableName}/{timeFrom}/{timeTo}*
 ; tableName - Name of the table from which the records are deleted.
 ; timeFrom - The starting time to delete records from.
 ; timeTo - The end time to delete records to.


 Again do we need to have 'records' in the middle?  IMHO
 /analytics/{tableName}/{timeFrom}/{timeTo} is better.

 9. Update records
 *PUT /analytics/records*
 ; Content - As same as the POST method for creating records

 10. Get the record count of table
 *GET /analytics/count/{tableName}*
 ; tableName - The name of the table

 11. Create Indices for a table
 *POST /analytics/indices/{tableName}*
 ; tableName - The name of the table of which the indices are set
 ; Content - takes the following format. TYPE is one of INTEGER,
 BOOLEAN, DOUBLE, STRING, FLOAT, LONG
 {
 indexColumnName1 : TYPE1,
 indexColumnName2 : TYPE2
 }


 12. get the indices of a table
 *GET /analytics/indices/{tableName}*
 ; tableName - The name of the table
 ; Response will be of the format of the previous POST request's Content.

 13. Clear the indices of a table
 *DELETE /analytics/indices/{tableName}*
 ; tableName - The name of the table

 14. Search records of a table
 *POST /analytics/search*
 ; Content - takes the following format
 {
 tableName: sampleTableName,
 language: sampleLanguageName,
 query: sampleQuery,
 start: start-location-of-the-result,
 count: maximum-number-of-entries-to-return
 }


 IMHO this should be a GET request.


Here the problem is that the search method in AnalyticsDataService takes a
few parameters. If we are to implement it as a GET, then we will have to
put all the parameters in the URL itself.



 Thanks,
 Sinthuja.


 If a method does not have a specific response mentioned above, the
 response will take the following format.

 {
 status : statusValue,
 message : sampleMessage
 }

 Suggestions and feedbacks are appreciated.

 Thanks

[Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-01-18 Thread Gimantha Bandara
Hi,
Currently, I am working on $subject. Basically the methods in
AnalyticsDataService will be exposed through REST APIs. Please refer to
Architecture mail thread *[Architecture] BAM 3.0 Data Layer Implementation
/ RDBMS / Distributed Indexing / Search* for more Details. Following are
the supported REST APIs.

1. Create a table
*POST /analytics/{tableName}*
 ; tableName - The name of the table to be created.

2. Delete a table
*DELETE /analytics/{tableName} *
; tableName - The name of the table to be deleted.

3. Check if a table exists
*GET /analytics/{tableName} *
; tableName - The name of the table being checked.

4. List All the tables
*GET /analytics/tables*
;Response will be a JSON array of table names. e.g. [ table1 , table2
, table3 ]

5. Get the records from a table.
*GET /analytics/records/{tableName}/{from}/{to}/{start}/{count} *
; tableName - The name of the table from which the records are retrieved.
; from - The starting time to get records from.
; to - The ending time to get records to.
; start - The paginated index from value
; count - The paginated records count to be read
; response - takes the format of the request content of No.7

6. Get the records from a table (By IDs)
*POST /analytics/records/{tableName}*
; tableName - The name of the table from which the records are retrieved.
; Content  - A List of IDs of the records to be retrieved in the following
format.
[ id1 , id2 , id3 ]
; response - takes the format of the request content of No.7

7. Create records ( can be created in different tables or in the same )
*POST /analytics/records*
; Content - A list of records in json format like in below.
[
{
id: id1,
tenantId: -1234,
tableName: tableName1,
timestamp: yyyy-mm-dd hh:mm:ss,
values:
{
columnName1: value1,
columnName2: value2
}
},
   {
id: id2,
tenantId: -1234,
tableName: tableName2,
timestamp: yyyy-mm-dd hh:mm:ss,
values:
{
columnName1: value1,
columnName2: value2
}
},
]

8. Delete records
*DELETE /analytics/records/{tableName}/{timeFrom}/{timeTo}*
; tableName - Name of the table from which the records are deleted.
; timeFrom - The starting time to delete records from.
; timeTo - The end time to delete records to.

9. Update records
*PUT /analytics/records*
; Content - As same as the POST method for creating records

10. Get the record count of table
*GET /analytics/count/{tableName}*
; tableName - The name of the table

11. Create Indices for a table
*POST /analytics/indices/{tableName}*
; tableName - The name of the table of which the indices are set
; Content - takes the following format. TYPE is one of INTEGER,
BOOLEAN, DOUBLE, STRING, FLOAT, LONG
{
indexColumnName1 : TYPE1,
indexColumnName2 : TYPE2
}

12. get the indices of a table
*GET /analytics/indices/{tableName}*
; tableName - The name of the table
; Response will be of the format of the previous POST request's Content.

13. Clear the indices of a table
*DELETE /analytics/indices/{tableName}*
; tableName - The name of the table

14. Search records of a table
*POST /analytics/search*
; Content - takes the following format
{
tableName: sampleTableName,
language: sampleLanguageName,
query: sampleQuery,
start: start-location-of-the-result,
count: maximum-number-of-entries-to-return
}

If a method does not have a specific response mentioned above, the response
will take the following format.

{
"status": "statusValue",
"message": "sampleMessage"
}
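A client could unpack that generic response as sketched below. The sample body is made up for illustration; only the `status`/`message` field names come from the format above.

```python
import json

def parse_generic_response(body):
    """Extract the status and message from the generic response format."""
    doc = json.loads(body)
    return doc["status"], doc["message"]

# Hypothetical example response body.
status, message = parse_generic_response(
    '{"status": "success", "message": "table created"}')
```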

Suggestions and feedback are appreciated.

Thanks,

-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] BAM 3.0 REST APIs for AnalyticsDataService / Indexing / Search

2015-01-18 Thread Gimantha Bandara
Please note that the request content for No. 7 will take the following
format.

[
{
"id": "id1",
"tableName": "tableName1",
"timestamp": "yyyy-mm-dd hh:mm:ss",
"values":
{
"columnName1": "value1",
"columnName2": "value2"
}
},
{
"id": "id2",
"tableName": "tableName2",
"timestamp": "yyyy-mm-dd hh:mm:ss",
"values":
{
"columnName1": "value1",
"columnName2": "value2"
}
}
]
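The corrected No. 7 request content (note: no tenantId) could be built as in this hypothetical sketch. The IDs, table names, timestamps, and column values are made up, and the `"yyyy-mm-dd hh:mm:ss"` timestamp string is an assumption based on the placeholder in the format.

```python
import json

def make_record(record_id, table_name, timestamp, values):
    """One entry of the record list sent to POST/PUT /analytics/records."""
    return {
        "id": record_id,
        "tableName": table_name,
        "timestamp": timestamp,  # assumed "yyyy-mm-dd hh:mm:ss" wire format
        "values": values,
    }

# A batch spanning two tables, as the API allows.
batch = json.dumps([
    make_record("id1", "tableName1", "2015-01-18 10:00:00", {"columnName1": "value1"}),
    make_record("id2", "tableName2", "2015-01-18 10:00:01", {"columnName2": "value2"}),
])
```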

On Sun, Jan 18, 2015 at 11:24 PM, Gimantha Bandara giman...@wso2.com
wrote:

 Hi,
 Currently, I am working on $subject. Basically the methods in
 AnalyticsDataService will be exposed through REST APIs. Please refer to
 Architecture mail thread *[Architecture] BAM 3.0 Data Layer
 Implementation / RDBMS / Distributed Indexing / Search* for more
 details. Following are the supported REST APIs.

 1. Create a table
 *POST /analytics/{tableName}*
  ; tableName - The name of the table to be created.

 2. Delete a table
 *DELETE /analytics/{tableName} *
 ; tableName - The name of the table to be deleted.

 3. Check if a table exists
 *GET /analytics/{tableName} *
 ; tableName - The name of the table being checked.

 4. List All the tables
 *GET /analytics/tables*
 ; Response will be a JSON array of table names, e.g. [ "table1", "table2",
 "table3" ]

 5. Get the records from a table.
 *GET /analytics/records/{tableName}/{from}/{to}/{start}/{count} *
 ; tableName - The name of the table from which the records are retrieved.
 ; from - The starting time to get records from.
 ; to - The ending time to get records to.
 ; start - The starting index of the paginated result set
 ; count - The maximum number of records to return per page
 ; response - takes the format of the request content of No.7

 6. Get the records from a table (By IDs)
 *POST /analytics/records/{tableName}*
 ; tableName - The name of the table from which the records are retrieved.
 ; Content  - A List of IDs of the records to be retrieved in the following
 format.
 [ id1 , id2 , id3 ]
 ; response - takes the format of the request content of No.7

 7. Create records ( can be created in different tables or in the same )
 *POST /analytics/records*
 ; Content - A list of records in json format like in below.
 [
 {
 id: id1,
 tenantId: -1234,
 tableName: tableName1,
 timestamp: yyyy-mm-dd hh:mm:ss,
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
{
 id: id2,
 tenantId: -1234,
 tableName: tableName2,
 timestamp: yyyy-mm-dd hh:mm:ss,
 values:
 {
 columnName1: value1,
 columnName2: value2
 }
 },
 ]

 8. Delete records
 *DELETE /analytics/records/{tableName}/{timeFrom}/{timeTo}*
 ; tableName - Name of the table from which the records are deleted.
 ; timeFrom - The starting time to delete records from.
 ; timeTo - The end time to delete records to.

 9. Update records
 *PUT /analytics/records*
 ; Content - Same as the POST method for creating records

 10. Get the record count of table
 *GET /analytics/count/{tableName}*
 ; tableName - The name of the table

 11. Create Indices for a table
 *POST /analytics/indices/{tableName}*
 ; tableName - The name of the table of which the indices are set
 ; Content - takes the following format. TYPE is one of INTEGER,
 BOOLEAN, DOUBLE, STRING, FLOAT, LONG
 {
 indexColumnName1 : TYPE1,
 indexColumnName2 : TYPE2
 }

 12. Get the indices of a table
 *GET /analytics/indices/{tableName}*
 ; tableName - The name of the table
 ; Response will be of the format of the previous POST request's Content.

 13. Clear the indices of a table
 *DELETE /analytics/indices/{tableName}*
 ; tableName - The name of the table

 14. Search records of a table
 *POST /analytics/search*
 ; Content - takes the following format
 {
 tableName: sampleTableName,
 language: sampleLanguageName,
 query: sampleQuery,
 start: start-location-of-the-result,
 count: maximum-number-of-entries-to-return
 }

 If a method does not have a specific response mentioned above, the
 response will take the following format.

 {
 status : statusValue,
 message : sampleMessage
 }

 Suggestions and feedback are appreciated.

 Thanks,

 --
 Gimantha Bandara
 Software Engineer
 WSO2. Inc : http://wso2.com
 Mobile : +94714961919




-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] BAM Log monitoring for Cloud

2014-09-26 Thread Gimantha Bandara
Hi all,

Thank you for your suggestions for improvement. I'll update you on the
progress.

Thanks,

On Thu, Sep 25, 2014 at 4:38 PM, Thomas Wieger developer.wie...@gmail.com
wrote:



 On Thursday, 25 September 2014, Manjula Rathnayake wrote:

 Hi Gimantha,

 If we can correlate log events based on a timestamp range across all
 services (AS, BPS, AF, etc.), that would be really useful when identifying issues.


  I would recommend that you take a look at the Google Dapper paper
 http://research.google.com/pubs/pub36356.html regarding distributed
 tracing. I think this would solve all the correlation issues nicely. For an
 implementation, you should have a look at Twitter's Zipkin
 http://twitter.github.io/zipkin/ or Brave
 https://github.com/kristofa/brave.

 regards,

 Thomas

 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture




-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] BAM Log monitoring for Cloud

2014-09-24 Thread Gimantha Bandara
Hi All,


I am currently working on the front end, which was developed with Kibana as
the reference. The UI currently supports the following tasks.

   1. Setting the refresh rate ( Refresh the logging graph and the log
   table / Log view )*
   2. Setting the time range (a custom range, or presets such as 'Last
   5 min', 'Last 10 mins', etc.) *
   3. Search box for searching (queries can be regex or Lucene)
   4. Filters (for searching)
   5. Log graph ( hits per time)
   6. Filters for log table/view
   7. Log table view (in progress)
   8. Micro panel which is similar to Kibana micro panel (in progress)

The graph is created using the jqplot[1] library, so it supports all the
features jqplot offers.
The other UIs are based on jQuery/jQuery UI[2].
The micro panel will be developed using Bootstrap[3].

Note that the theme used for the UI can be changed and the log table is
still in progress.
Currently the UI is integrated with ElasticSearch to view real log data.

Here are some screenshots of the current UIs.
 Update-24.09.2014
https://docs.google.com/a/wso2.com/folderview?id=0B7luxEF9AEBxSHc3aEI3YUxYRVkusp=drive_web

[1] http://www.jqplot.com/
[2] https://jquery.org/projects/
[3] http://getbootstrap.com/javascript/

-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] BAM Log monitoring for Cloud

2014-09-24 Thread Gimantha Bandara
Hi Dakshika,

There is no specific reason. It seems like both jqplot and Flot have the
same customization capabilities.
It will not be hard to change the graph libraries, if it is needed.

Thanks,

On Wed, Sep 24, 2014 at 3:28 PM, Dakshika Jayathilaka daksh...@wso2.com
wrote:

 Is there any specific reason for using jqplot?

 AFAIK platform wide, we are using flotchart lib[1] due to advance
 customization capabilities.

 1. http://www.flotcharts.org/

 *Dakshika Jayathilaka*
 Software Engineer
 WSO2, Inc.
 lean.enterprise.middleware
 0771100911

 On Wed, Sep 24, 2014 at 2:48 PM, Gimantha Bandara giman...@wso2.com
 wrote:

 Hi All,


 Currently I am working on the Front end. These were developed keeping
 Kibana as the reference. Currently the UI supports the following tasks.

1. Setting the refresh rate ( Refresh the logging graph and the log
table / Log view )*
2. Setting the Time range ( custom time range or in the format of
''Last 5 min, Last 10 mins...etc) *
3. Searchbox for searching ( Queries will be regex or Lucene )
4. Filters (For searching)
5. Log graph ( hits per time)
6. Filters for log table/view
7. Log table view (in progress)
8. Micro panel which is similar to Kibana micro panel (in progress)

 The graph is created using the jqplot[1] library, so it supports all
 the features jqplot offers.
 Other UIs are based on JQuery/JQueryUI[2]
 The Micro Panel will be developed using Bootstrap[3].

 Note that the theme used for the UI can be changed and the log table is
 still in progress.
 Currently the UI is integrated with ElasticSearch to view real log data.

 Here are some screenshots of the current UIs.
  Update-24.09.2014
 https://docs.google.com/a/wso2.com/folderview?id=0B7luxEF9AEBxSHc3aEI3YUxYRVkusp=drive_web

 [1] http://www.jqplot.com/
 [2] https://jquery.org/projects/
 [3] http://getbootstrap.com/javascript/

 --
 Gimantha Bandara
 Software Engineer
 WSO2. Inc : http://wso2.com
 Mobile : +94714961919

 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture





-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] [Dev] Getting statistics from a JMX profile visualized in BAM dashboards

2014-05-31 Thread Gimantha Bandara
Hi,

For my training project, it was planned to create a toolbox and a data
agent for JBoss AS. Since BAM already has a JMX agent and JBoss exposes its
stats through a JMX server, there is no point in creating a separate toolbox
and data agent specifically for JBoss. After a discussion with
Jaminda, Anjana and Sagara, we planned to extend the functionality of the
existing JMX agent. The existing JMX toolbox can be used to collect statistics
on the OS, memory and threads, and it is also possible to add other MBeans
exposed by the remote JMX server.

If I want to collect statistics from a specific JMX server, I can create a
JMX profile and select the MBeans and attributes, and then save that JMX
profile. How could these profiles be used to visualize JMX statistics on
BAM dashboards?

-- 
Gimantha Bandara
Software Engineer
WSO2. Inc : http://wso2.com
Mobile : +94714961919
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture