Re: [Architecture] Range Partition Queries in Siddhi

2018-02-23 Thread Srinath Perera
+1

reasons

   1. The mathematical definition of a partition covers all elements in a
   set (https://en.wikipedia.org/wiki/Partition_of_a_set). IMO it is a bad
   idea to break such a well-defined concept.
   2. Often an uncovered range is the result of a mistake, and if we silently
   ignore it, it is very hard to debug (see the sketch below).
   3. If a user really wants to ignore a range, they can do it as
   post-processing.
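
To make this concrete, here is a rough sketch (a hypothetical helper, not the
actual Siddhi parser code) of the kind of coverage check the parser could run
over the declared ranges at parse time:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical coverage check: given the declared ranges of a partition-with-range
// definition, verify that together they cover the whole numeric domain and fail
// fast at parse time if they leave a gap.
final class RangePartitionValidator {

    static final class Range {
        final double from;   // inclusive lower bound
        final double to;     // exclusive upper bound
        Range(double from, double to) { this.from = from; this.to = to; }
    }

    static void validateFullCoverage(List<Range> ranges) {
        List<Range> sorted = new ArrayList<>(ranges);
        sorted.sort(Comparator.comparingDouble(r -> r.from));
        double coveredUpTo = Double.NEGATIVE_INFINITY;
        for (Range r : sorted) {
            if (r.from > coveredUpTo) {
                throw new IllegalArgumentException(
                        "Partition ranges leave a gap between " + coveredUpTo + " and " + r.from);
            }
            coveredUpTo = Math.max(coveredUpTo, r.to);
        }
        if (coveredUpTo < Double.POSITIVE_INFINITY) {
            throw new IllegalArgumentException(
                    "Partition ranges do not cover values of " + coveredUpTo + " and above");
        }
    }
}

For the query below, 'small' covers (-inf, 100) and 'large' covers [150, +inf),
so such a check would report the 100-149 gap instead of silently dropping events.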


On Fri, Feb 23, 2018 at 1:49 PM, Miyuru Dayarathna <miyu...@wso2.com> wrote:

> Hi,
>
> We had several offline discussions on the results returned by the
> following partition-by-range Siddhi query. This query simply drops events
> in the range 100-149 because those events do not fall into either the
> small or the large partition. Ideally we should throw an error when parsing
> this Siddhi query, since it does not include a partition for the (price>=100
> and price<150) case. However, Siddhi currently does not throw such an error.
> One could argue that the user may simply not be interested in the values
> between 100-149 and hence we should ignore that range. But it could also
> lead to unintentional errors, because the user might actually have forgotten
> to address the range 100-149. In that situation, throwing an error forces
> the user to address the range 100-149 properly. Therefore, we should throw
> an error if the user forgets to cover the complete range of values in a
> partition-by-range Siddhi query. WDYT?
>
> @app:name('incrementalPersistenceTest10')
> define stream cseEventStreamOne (symbol string, price float,volume int);
> partition with (price < 100 as 'small' or price >= 150 as 'large' of cseEventStreamOne)
> begin
>     @info(name = 'query1')
>     from cseEventStreamOne#window.length(4)
>     select symbol, sum(price) as price
>     group by symbol
>     insert into OutStockStream;
> end
>
>
> --
> Thanks,
> Miyuru Dayarathna
> Senior Technical Lead
> Mobile: +94713527783 <+94%2071%20352%207783>
> Blog: http://miyurublog.blogspot.com
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Using Java Agent API for Latency and other Measurements

2017-11-10 Thread Srinath Perera
Sanjeewa, the JVM includes an Instrumentation API, and I believe it is the
same API New Relic uses.
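
As a rough illustration of that API (not WSO2 code; the agent jar's manifest
would also need a Premain-Class entry), a latency agent registers a
ClassFileTransformer and would rewrite method bytecode there, e.g. with ASM
or Byte Buddy:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Skeleton of a java.lang.instrument agent; started with -javaagent:latency-agent.jar
public final class LatencyAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className, Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain, byte[] classfileBuffer) {
                // A real latency agent would rewrite the bytecode here to record method
                // entry/exit times; returning null leaves the class unchanged.
                if (className != null && className.startsWith("org/wso2/")) {
                    System.out.println("Candidate for instrumentation: " + className);
                }
                return null;
            }
        });
    }
}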

Isuru, could you write an intern proposal and add it to the doc?

--Srinath

On Fri, Oct 6, 2017 at 9:21 PM, Sanjeewa Malalgoda <sanje...@wso2.com>
wrote:

> Hi,
> Are we going to use java tools API to pull JVM information or some other
> implementation?
> Or are we using new relic java agent API? I would like to know some more
> details.
>
> Thanks,
> sanjeewa.
>
> On Fri, Oct 6, 2017 at 1:29 PM, Isuru Perera <isu...@wso2.com> wrote:
>
>> Yes, I think we can also collect metrics from a Java Agent.
>>
>> I'll explore on this more and update this thread.
>>
>> Thanks!
>>
>> On Thu, Oct 5, 2017 at 7:20 AM, Srinath Perera <srin...@wso2.com> wrote:
>>
>>> Hi Isuru,
>>>
>>> Your idea of trying to use Java Agent (JFR) APIs for tracing instead of
>>> manually adding tracing code can be a good one IMO. We need to figure out
>>> a way to find the key path from traces.
>>>
>>> Let's explore this more.
>>>
>>> --Srinath
>>>
>>> --
>>> 
>>> Srinath Perera, Ph.D.
>>>http://people.apache.org/~hemapani/
>>>http://srinathsview.blogspot.com/
>>>
>>
>>
>>
>> --
>> Isuru Perera
>> Technical Lead | WSO2, Inc. | http://wso2.com/
>> Lean . Enterprise . Middleware
>>
>> about.me/chrishantha
>> Contact: +IsuruPereraWSO2 <https://www.google.com/+IsuruPereraWSO2/about>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779 <+94%2071%20306%208779>
>
> <http://sanjeewamalalgoda.blogspot.com/>blog :http://sanjeewamalalgoda.
> blogspot.com/ <http://sanjeewamalalgoda.blogspot.com/>
>
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Using Java Agent API for Latency and other Measurements

2017-10-04 Thread Srinath Perera
Hi Isuru,

Your idea of trying to use Java Agent (JFR) APIs for tracing instead of
manually adding tracing code can be a good one IMO. We need to figure out
a way to find the key path from traces.

Let's explore this more.

--Srinath

-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [IoT] Improvements to device grouping to allow shared users (non admin ) to control the devices

2017-05-09 Thread Srinath Perera
adding Prabath.

Don't we have this level of permission checks through the identity components?

If we have to implement this, keeping the model to allow actions only will
simplify it.
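
As a rough illustration, an allow-only, any-path evaluation could look like
the following (hypothetical types, not the carbon-device-mgt API):

import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Hypothetical allow-only model: a user may act on a device if at least one of the
// user's roles is allowed on at least one group the device belongs to; anything not
// explicitly allowed is implicitly denied.
final class DeviceAccessEvaluator {

    boolean isAllowed(Set<String> userRoles, Set<String> deviceGroups,
                      Map<String, Set<String>> groupToAllowedRoles) {
        for (String group : deviceGroups) {
            Set<String> allowedRoles =
                    groupToAllowedRoles.getOrDefault(group, Collections.emptySet());
            for (String role : userRoles) {
                if (allowedRoles.contains(role)) {
                    return true;    // one allowing path (e.g. R2 -> G1 for device D1) is enough
                }
            }
        }
        return false;               // no allow found on any path -> denied
    }
}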

--Srinath

On Sat, May 6, 2017 at 12:45 AM, Ayyoob Hamza <ayy...@wso2.com> wrote:

> @Sumedha,
> Yes, it does in the context of the API. we can use the same permissions in
> the feature. The issue is that the permission in the API does not get
> propagated to the device context.
>
> @Chathura
>
>> What does it mean if a role R1 has access to device group G1, but doesn't
>> have permission to any feature of devices in G1? One option is to allow
>> such roles to only get information+status of devices.
>>
>> +1, we could allow the user to view the basic device info and properties.
>
>
>> I think we have to maintain following mappings to support this permission
>>> model (most of them are currently in the DB):
>>>
>>> device -> device group
>>> feature -> permission (is this coming from the device type xml file?)
>>>
>> permission -> role
>>> user -> role
>>> device group -> role
>>>
>> Yes, everything except feature -> permission is currently stored in the
> DB. The changes I proposed will rely on the standard device type plugin
> interfaces. The implementation of that interface is currently based on
> files/Java code, but we can make this information stored in the DB. We can
> do this with the changes that we are planning to do on the DAO layer to add
> device types through the API.
>
>
>>
>>> I think the permission evaluation model is to allow an action, if it is
>>> allowed by at least one path. i.e. if device D1 belongs to groups G1 and
>>> G2, and user U1 belongs to roles R1 and R2, if R2 is allowed to access G1,
>>> U1 is allowed to access D1 (although R1 is not allowed to access G2).
>>>
>> Yes, this is how the evaluation is done in [1], but it is not being used.
>
>
>> We also have to decide whether we are only supporting allow action or we
>>> want support deny action as well (e.g. role R1 is denied from accessing
>>> device group G2). If we support that, we have to enforce a precedence among
>>> allow and deny actions. IMO not supporting deny action is ok as it will
>>> complicate the permission evaluation.
>>>
>> In the current grouping implementation we only support the allow action,
> but this implicitly covers the deny action (e.g. role R1 is simply not
> assigned to group G2). In some use cases it might be more practical to have
> only a deny action rather than an allow action, but that would complicate
> the model, as you have mentioned.
>
> [1] https://github.com/wso2/carbon-device-mgt/blob/master/
> components/device-mgt/org.wso2.carbon.device.mgt.core/
> src/main/java/org/wso2/carbon/device/mgt/core/authorization/
> DeviceAccessAuthorizationServiceImpl.java#L197
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Device Connectivity Graph for IoT Server & related concerns

2017-05-09 Thread Srinath Perera
+1 for auditing table
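
For reference, a rough sketch of the kind of central audit record being
proposed below (hypothetical names, not an existing WSO2 schema):

import java.time.Instant;

// Every system activity gets a logging code, an actor and a timestamp, so records
// can be queried centrally and purged based on data growth.
final class AuditRecord {
    final String activityCode;   // e.g. "PLATFORM_CONFIG_UPDATED"
    final String performedBy;    // user or component that triggered the activity
    final String details;        // free-form payload, e.g. old/new monitoring frequency
    final Instant timestamp;

    AuditRecord(String activityCode, String performedBy, String details, Instant timestamp) {
        this.activityCode = activityCode;
        this.performedBy = performedBy;
        this.details = details;
        this.timestamp = timestamp;
    }
}

interface AuditLog {
    void record(AuditRecord record);       // INSERT into the central audit table
    void purgeOlderThan(Instant cutoff);   // periodic cleanup to control growth
}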

On Mon, May 8, 2017 at 2:45 PM, Ayyoob Hamza <ayy...@wso2.com> wrote:

> Hi Ruwan,
>
> +1 for this, Having the communication history through a common stream will
> help us to build an analytics solution for health status, anomaly detection
> .. etc.
>
> On Mon, May 8, 2017 at 12:35 PM, Ruwan Yatawara <ruw...@wso2.com> wrote:
>
>> Hi Everyone,
>>
>> I am working on $subject as part of the effort to provide drill-down
>> analytics for devices.
>>
>> The resulting graph would look something like the following (please
>> disregard the portion reading connected-unterminated), built with the help
>> of [1]. It would indicate whether the device in question was able to get
>> back to the IoT Server in a timely manner at the expected monitoring
>> frequency specified.
>> [image: Inline image 1]
>> *Source: [2]*
>>
> We can't infer whether the device is disconnected from the server using
> the raw history data points, right? We might need to do some form of
> time-series analysis on the device-to-server communication (this will
> vary depending on the communication frequency of each device type) and
> then predict the device status. The communication history data will help us
> build a solution for the health status.
>
>
>> Whilst attempting this I noticed that we do not have an auditing
>> mechanism in place to record important activities, as yet. If you take this
>> flow, for example, the monitoring frequency is something configurable and
>> will change from time to time. We need to know at which point the
>> transition was made.
>>
>> Therefore I propose, that we come up with a central audit table to record
>> system activities, like updates to platform configurations, in a central
>> table. Each activity can have a logging code, and we can purge these
>> records from time to time, based on data growth.
>>
> +1. We do this only for operations, but we should have it for all the
> internal communication.
>
>>
>>
> I also propose that we take out the option for users to enable/disable
>> data publishing from the agent side, and make it implicit. The agent by
>> default makes a call to the server to send device information; instead of
>> making the device make another call to DAS to provide location information,
>> we can derive the location information at the server side from the device
>> information payload and push it to DAS.
>>
> I think this is how it has been implemented for the EMM use cases; correct me
> if I am wrong. We publish from the IoT Core to DAS through Thrift.
>
> *Ayyoob Hamza*
> *Senior Software Engineer*
> WSO2 Inc.; http://wso2.com
> email: ayy...@wso2.com cell: +94 77 1681010 <%2B94%2077%207779495>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM][C5] - Handling Broker request failure during Gateway event publishing

2017-05-09 Thread Srinath Perera
As Fazlan said, if we first publish and only then do the API processing, it
removes the need to roll back. If the API processing fails after publishing,
we can publish another event saying the processing failed.
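
A rough sketch of that publish-first flow (hypothetical types, not the actual
APIM 3.0.0 code):

import java.time.Instant;

// Persist only after the gateway event is published, and emit a compensating
// event if persistence later fails.
final class ApiCreateFlow {

    interface Broker { void publish(String topic, String action, String apiId, Instant ts); }
    interface ApiStore { void save(String apiId); }

    private final Broker broker;
    private final ApiStore store;

    ApiCreateFlow(Broker broker, ApiStore store) {
        this.broker = broker;
        this.store = store;
    }

    void createApi(String apiId) {
        Instant ts = Instant.now();
        broker.publish("api-events", "API_CREATE", apiId, ts);   // fails fast if the MB is down
        try {
            store.save(apiId);                                    // DB entry only after publish succeeds
        } catch (RuntimeException e) {
            // Gateways that saw API_CREATE can ignore it once they see the failure event.
            broker.publish("api-events", "API_CREATE_FAILED", apiId, ts);
            throw e;                                              // surface the error to the publisher UI
        }
    }
}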

--Srinath





On Tue, May 9, 2017 at 11:06 AM, Fazlan Nazeem <fazl...@wso2.com> wrote:
>
>
>
> On Mon, May 8, 2017 at 7:33 PM, Lakmal Warusawithana <lak...@wso2.com>
wrote:
>>
>> Hi Thilini,
>>
>> We had an offline discussion. Please see the following scenarios and flows.
Generally we need to add a timestamp to the event. The GW needs to validate its
action with the API Core using the timestamp of the event. This is valid for all
events with the relevant action. We believe these flows will solve the issue.
>>
>> Scenario 1: Success MB connection & Success DB entry
>>
>> Publisher UI:
>>   Create API -> API Core
>>
>> API Core:
>>   API Core -> MB: Publish to topic API + Event + Timestamp  -- Success
>>   API Core -> DB                                            -- Success
>>   API Core -> API Publisher: API Create Success
>>
>> API GW:
>>   MB -> GW: received API + Event + Timestamp
>>   GW -> API Core: Service call to Core with API + Event + Timestamp
>>   If matching with timestamp, retrieve the API
>>
>> Scenario 2: Success MB connection & Failed DB entry
>>
>> Publisher UI:
>>   Create API -> API Core
>>
>> API Core:
>>   API Core -> MB: Publish to topic API + Event + Timestamp  -- Success
>>   API Core -> DB                                            -- Failed
>>   API Core -> API Publisher: Don't allow save: ERROR
>>
>> API GW:
>>   MB -> GW: received API + Event + Timestamp
>>   GW -> API Core: Request API with API + Event + Timestamp
>>   If not matching with timestamp, ignore the event. No action
>>
>> Scenario 3: Failed MB connection
>>
>> Publisher UI:
>>   Create API -> API Core
>>
>> API Core:
>>   API Core -> MB: Publish to topic API + Event + Timestamp  -- Failed
>>   API Core -> API Publisher: Don't allow save: ERROR (Failed)
>>
>>
>
>
> So in summary, what we are doing here is persisting in the API Core DB only
if publishing to the MB is successful. First publish and then persist?
>
> If so, +1
>>
>>
>>
>>
>> On Mon, May 8, 2017 at 2:38 PM, Thilini Shanika <thili...@wso2.com>
wrote:
>>>
>>> Hi All,
>>>
>>> As per the APIM 3.0.0 architecture, the events such as APIM create,
update, delete, subscription create etc are notified to gateways through
JMS Topic in the broker. Thus, we need to smoothly handle the scenarios
like broker not available and APIM to Broker connection(network) failure,
since the flow cannot be completed without notifying the gateway (A
blocking call). Ideally, if the API action cannot be completed due to
broker connection failure, the users should be notified about the failure
and the action should be rolled back.
>>>
>>> But, we are facing some difficulties to handle topic publishing
failures and rollback API action(API create, API state change, API update,
API delete, subscription create, subscription block) since the API action
is getting persisted in APIM db layer prior to publishing to Gateway.
>>>
>>> For example, if an API create request is initiated from API core,
first, the API will be persisted in db layer. Then the API create event
will be published to Topic and the registered gateways will be notified.
But if the broker publishing step fails, the gateways will not be
notified on the newly created API so that the API won't be published to
gateway. This might lead the API to go to an inconsistent/partially created
state (API is successfully created in db, but not pushed to gateway).
>>>
>>>
>>> Currently, we have not implemented any mechanism to:
>>>
>>>    - Roll back the action, or
>>>    - Persist the inconsistent state as a flag in the API so that the user
>>>      is aware of the inconsistent state
>>>
>>> What would be the best way to handle broker failures? Any suggestions?
>>>
>>> Thanks
>>> --
>>> Thilini Shanika
>>> Senior Software Engineer
>>> WSO2, Inc.; http://wso2.com
>>

[Architecture] A survey of secure middleware for the Internet of Things

2017-03-22 Thread Srinath Perera
Paul's paper

https://peerj.com/preprints/1241/

--Srinath

-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] tenant specific MQTT receivers in DAS , not listening to topics once tenant get unloaded

2017-03-20 Thread Srinath Perera
I agree with Sinthuja. Having a receiver per tenant will be too heavy and
complicated, and having a super-tenant receiver that receives and handles the
data will simplify the design.
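
As a rough illustration of the super-tenant receiver idea described below,
resolving the tenant from the arrived topic could look like this (a
hypothetical helper, not the DAS receiver code):

// The receiver subscribes with a template such as "carbon/{tenantDomain}/events";
// when an event arrives, the tenant is resolved from the concrete topic before
// starting the tenant flow and inserting the event into that tenant's space.
final class TenantTopicResolver {

    static String resolveTenantDomain(String topicTemplate, String arrivedTopic) {
        String placeholder = "{tenantDomain}";
        String prefix = topicTemplate.substring(0, topicTemplate.indexOf(placeholder));
        String remainder = arrivedTopic.substring(prefix.length());
        int nextSlash = remainder.indexOf('/');
        return nextSlash < 0 ? remainder : remainder.substring(0, nextSlash);
    }
}

// e.g. resolveTenantDomain("carbon/{tenantDomain}/events", "carbon/wso2.com/events")
// returns "wso2.com".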

--Srinath

On Mon, Mar 20, 2017 at 11:02 AM, Sinthuja Ragendran <sinth...@wso2.com>
wrote:

> Hi,
>
> As the receiver configurations are deployable artefacts, those will only be
> active when the tenant is loaded. One approach is to have all tenants
> loaded indefinitely, but I think this will have a high memory cost. We
> therefore internally discussed the approach below to handle this problem.
>
> Instead of having multiple MQTT receiver configurations per tenant to
> handle this, implement a specialised/privileged MQTT event receiver which
> can handle multiple subscriptions on behalf of tenants and is only
> deployable in super-tenant mode. In that case, this event receiver will
> have a topic URI with a {tenantDomain} placeholder, which is used to
> subscribe to the tenant-specific topics. Then, based on the topic the
> event arrived on, the tenant flow will be started and the event will be
> inserted into the specific tenant's space. This way, only the tenants which
> are actively sending events will be loaded; not all tenants are
> required to be loaded.
>
> Please share your thoughts on this. Also, AFAIR we had a similar
> requirement for Task execution. @Anjana, how are we handling that?
>
> Thanks,
> Sinthuja.
>
> On Mon, Mar 20, 2017 at 10:50 AM, Jasintha Dasanayake <jasin...@wso2.com>
> wrote:
>
>> HI All
>>
>> When DAS is working in tenant mode and a particular tenant has MQTT
>> receivers, those cannot be activated once the tenant gets unloaded. For
>> example, if I restart DAS, those tenant-specific MQTT receivers are not
>> loaded unless we explicitly load that particular tenant. IMO, the expected
>> behavior would be that those receivers are loaded and subscribed to their
>> topics without loading the tenants explicitly.
>>
>> Is there any known mechanism to address this particular problem?
>>
>> Thanks and Regards
>> /jasintha
>>
>> --
>>
>> *Jasintha Dasanayake**Associate Technical Lead*
>>
>> *WSO2 Inc. | http://wso2.com <http://wso2.com/>lean . enterprise .
>> middleware*
>>
>>
>> *mobile :- 0711-368-118 <071%20136%208118>*
>>
>
>
>
> --
> *Sinthuja Rajendran*
> Technical Lead
> WSO2, Inc.:http://wso2.com
>
> Blog: http://sinthu-rajan.blogspot.com/
> Mobile: +94774273955 <+94%2077%20427%203955>
>
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Applying Machine Learning in Security - A Survey

2017-03-08 Thread Srinath Perera
Found from
https://www.oreilly.com/ideas/building-machine-learning-solutions-that-can-withstand-adversarial-attacks

Looks very interesting.

https://arxiv.org/abs/1611.03186

--Srinath



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Opentracing

2017-01-31 Thread Srinath Perera
They are trying to build an open standard (or so they say).
It seems to come from Zipkin.
Having one would solve a lot of problems.

   - http://opentracing.io/
   -
   
https://medium.com/opentracing/distributed-tracing-in-10-minutes-51b378ee40f1#.5rfk4tfwa
   -
   
https://medium.com/opentracing/towards-turnkey-distributed-tracing-5f4297d1736#.xiy7fet0j

Anjana, could you have a look? If it makes sense, maybe we can support it.

-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Authentication and Authorization for Rest APIs in Carbon Products

2016-11-11 Thread Srinath Perera
horization requirements based on end
>>> user. Basically we trust the client that it has authenticated the user
>>> successfully and IS will only do the authorization for the user.
>>>
>>>
>>> Another two authenticators in the pipeline are the "System User" and "Signed
>>> JWT" authenticators. The "System User" authenticator means having a couple of
>>> system-wide usernames and passwords that are configured in a text file and
>>> secured, which clients can authenticate with.
>>>
>>> For authorization we have provided another OSGi service API, which
>>> also takes in a defined object model and returns the authorization result.
>>> The authorization rules can be configured in identity.xml as follows.
>>> 
>>> >> http-method="all">
>>> /permission/admin/login
>>> 
>>>>> http-method="put,post">
>>> /permission/admin/test
>>> 
>>> 
>>>
>>> Like with authentication clients of this authorization API can be a CXF
>>> filter, CXF interceptor, servlet filter or tomcat valve.
>>>
>>> In addition to the above there are two more broad authorization
>>> requirements which are are yet to implement.
>>>
>>> 1. Resource access across tenants -
>>>
>>> We called it "SaaS" mode in previous releases. It was available for
>>> certain admin SOAP APIs in IS. However this has come as a generic
>>> requirement for all admin APIs. However we faced some difficulties in
>>> implementing this for SOAP services because we don't have a standard model
>>> to represent the tenant and user which the resource belongs to.
>>>
>>> For Rest APIs this is possible. We can specify tenant domain as part of
>>> the URL. E.g. api/identity/user/t/wso2.com where "wso2.com" is the
>>> tenant.
>>>
>>> 2. "Self access" and "Delegated access" for the same resource
>>>
>>> Users can access/update their own resources without any authorization by
>>> just doing authentication only. This is known as "Self access".
>>> One user in a tenant should be able to do actions on another user's
>>> behalf. This is known as "Delegated access". This also was supported for
>>> certain admin SOAP APIs in previous IS. For this we specify a distinct
>>> permission for users to access/update resources of other users.
>>>
>>> To implement this requirement we need to specify two parameterized
>>> permissions in the authorization module. One for "self access" and one for
>>> "delegated access". To identify whether the request is a "self access" one
>>> or a "delegated access" one, we need to specify the user in the Rest URL.
>>> E.g. api/identity/user/t/wso2.com/hr/john where "wso2.com" is the
>>> tenant, "hr" is the userstore domain and "john" is the username.
>>>
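As a rough illustration of that URL convention (a hypothetical helper, not the
identity-carbon-auth-rest code):

import java.util.Arrays;

// Parses api/identity/user/t/{tenantDomain}/{userstoreDomain}/{username}.
final class TenantedUserPath {

    final String tenantDomain;
    final String userstoreDomain;
    final String username;

    private TenantedUserPath(String tenantDomain, String userstoreDomain, String username) {
        this.tenantDomain = tenantDomain;
        this.userstoreDomain = userstoreDomain;
        this.username = username;
    }

    static TenantedUserPath parse(String path) {
        // e.g. "api/identity/user/t/wso2.com/hr/john"
        String[] parts = path.split("/");
        int t = Arrays.asList(parts).indexOf("t");
        if (t < 0 || parts.length < t + 4) {
            throw new IllegalArgumentException("Not a tenanted user path: " + path);
        }
        return new TenantedUserPath(parts[t + 1], parts[t + 2], parts[t + 3]);
    }

    // "Self access" if the caller acts on their own resource; otherwise the
    // authorization layer would check the distinct "delegated access" permission.
    boolean isSelfAccess(String authenticatedUsername) {
        return username.equalsIgnoreCase(authenticatedUsername);
    }
}
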
>>> If you are going to secure Rest APIs in your product, please check this
>>> implementation and see if it satisfies your requirement and give us
>>> feedback where it can be improved.
>>>
>>> [1] https://github.com/wso2-extensions/identity-carbon-auth-rest
>>>
>>> --
>>> Thanks & Regards,
>>>
>>> *Johann Dilantha Nallathamby*
>>> Technical Lead & Product Lead of WSO2 Identity Server
>>> Governance Technologies Team
>>> WSO2, Inc.
>>> lean.enterprise.middleware
>>>
>>> Mobile - *+9476950*
>>> Blog - *http://nallaa.wordpress.com <http://nallaa.wordpress.com>*
>>>
>>
>>
>>
>> --
>> Nuwan Dias
>>
>> Software Architect - WSO2, Inc. http://wso2.com
>> email : nuw...@wso2.com
>> Phone : +94 777 775 729
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Storing configs in database for C5

2016-10-12 Thread Srinath Perera
On Wed, Oct 12, 2016 at 11:53 AM, Uvindra Dias Jayasinha <
>>>>>> uvin...@wso2.com> wrote:
>>>>>>
>>>>>>> Was wondering about $subject
>>>>>>>
>>>>>>> Traditionally we have stored our product configs, be it carbon.xml,
>>>>>>> api-manager.xml, identity.xml, etc., at file level. Some configs, such
>>>>>>> as "port offset", are inherently bound to the server startup, so it
>>>>>>> makes sense for them to be at file level, since they come into effect
>>>>>>> during startup. But certain runtime configs actually get engaged only
>>>>>>> when a given feature is used, and having those configs at file level
>>>>>>> requires a restart for the changes to take effect. In C4, API Manager
>>>>>>> avoided restarts for certain config changes, like adding mediation
>>>>>>> extensions, by storing them in the registry.
>>>>>>>
>>>>>>> For C5, a reusable implementation can exist at each node which
>>>>>>> periodically reads the table (say once a minute) and updates the config
>>>>>>> values in memory. Products communicate with this config library to get
>>>>>>> the value of a given config, so they will read an updated value within a
>>>>>>> short time. If we were to store at least certain configs at DB level,
>>>>>>> there are several advantages.
>>>>>>>
>>>>>>> 1. Eliminates the need for a restart for changes to take effect. I
>>>>>>> realize in C5 a restart is relatively cheap, so this might not be a big
>>>>>>> deal, but you still need someone to initiate the restart after the
>>>>>>> config change.
>>>>>>>
>>>>>>> 2. Since the config DB table has a known structure, a UI can easily be
>>>>>>> developed to do CRUD operations for config changes and be used by all
>>>>>>> products. This is a lot more user-friendly than asking users to change
>>>>>>> files.
>>>>>>>
>>>>>>> 3. We can provide a REST API to allow config changes to be done on
>>>>>>> the DB table alternatively.
>>>>>>>
>>>>>>> 4. Simplify DevOps by eliminating complicated Puppet config
>>>>>>> templates that need to be constantly maintained with new releases.
>>>>>>>
>>>>>>> 5. Since configs are in a central DB, it is easy to manage them since
>>>>>>> all nodes will read from the same table.
>>>>>>>
>>>>>>> 6. Configs can be backed up by simply backing up the table
>>>>>>>
>>>>>>>
>>>>>>> Doing this makes sense for certain use cases of API Manager, and I'm sure
>>>>>>> there may be similar benefits for other products as well. It may not make
>>>>>>> sense for all configs, but at least for some that govern feature
>>>>>>> functionality it's great to have. WDYT?
>>>>>>>
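A rough sketch of the config library described above (hypothetical, not a C5
API):

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Each node periodically re-reads the config table (say once a minute) and serves
// the latest values from memory, so changes take effect without a restart.
final class DbConfigCache {

    interface ConfigDao { Map<String, String> readAll(); }   // e.g. SELECT key, value FROM config

    private final ConfigDao dao;
    private volatile Map<String, String> snapshot = Collections.emptyMap();

    DbConfigCache(ConfigDao dao, ScheduledExecutorService scheduler) {
        this.dao = dao;
        scheduler.scheduleAtFixedRate(() -> snapshot = dao.readAll(), 0, 1, TimeUnit.MINUTES);
    }

    String get(String key, String defaultValue) {
        return snapshot.getOrDefault(key, defaultValue);     // eventually consistent across nodes
    }
}
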
>>>>>>> --
>>>>>>> Regards,
>>>>>>> Uvindra
>>>>>>>
>>>>>>> Mobile: 33962
>>>>>>>
>>>>>>> ___
>>>>>>> Architecture mailing list
>>>>>>> Architecture@wso2.org
>>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Sajith Kariyawasam
>>>>>> *Associate Tech Lead*
>>>>>> *WSO2 Inc.; http://wso2.com <http://wso2.com/>*
>>>>>> *Committer and PMC member, Apache Stratos *
>>>>>> *AMIE (SL)*
>>>>>> *Mobile: 0772269575 <0772269575>*
>>>>>>
>>>>>> ___
>>>>>> Architecture mailing list
>>>>>> Architecture@wso2.org
>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Uvindra
>>>>>
>>>>> Mobile: 33962
>>>>>
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Lakmali Baminiwatta
>>>> Associate Technical Lead
>>>> WSO2, Inc.: http://wso2.com
>>>> lean.enterprise.middleware
>>>> mobile:  +94 71 2335936
>>>> blog : lakmali.com
>>>>
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> Nuwan Dias
>>>
>>> Software Architect - WSO2, Inc. http://wso2.com
>>> email : nuw...@wso2.com
>>> Phone : +94 777 775 729
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Lakmali Baminiwatta
>> Associate Technical Lead
>> WSO2, Inc.: http://wso2.com
>> lean.enterprise.middleware
>> mobile:  +94 71 2335936
>> blog : lakmali.com
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Few thoughts about IoT Analytics Pre Cooked Scenarios

2016-10-03 Thread Srinath Perera
Hi All,

Wanted to add some thoughts to my earlier blog

https://iwringer.wordpress.com/2015/10/15/thinking-deeply-about-iot-analytics/

Most IoT platforms place a CEP engine at the center and a batch engine
(which Forrester calls a Data Historian), which we have.

However, as we discussed, there are things such as the Geo Dashboard that we
can provide on top of CEP/DAS that make the user's life easier. Here I do not
mean building a UI experience, but rather documenting in detail in the user
guide (and linking from the sites) how to do them, and implementing any gaps.

Following are a few things that I thought of.


   1. Geo Dashboard (this is mostly done, I think)
   2. Anomaly Detection (Seshi's work; most techniques have been tried out)
   3. Predictive Maintenance (Roshan (intern) has done a full use case
   using a NASA data set which has given very good results, and I believe it
   can be generalized easily)
   4. Spatio-temporal analytics
      - Forecasts - this is what we did for the TFL forecasts
      - Adding temporal support to the Geo Dashboard so views can be replayed
   5. Out-of-order processing (regardless of what you do, events from
   sensors will arrive out of order, and you must process them based on event
   time, not arrival time; see the sketch below). Miyuru has a reordering
   operator already, and there is an intern project for more complex
   algorithms.
   6. Ability to redefine streams (this is a TODO; simple cases are supported
   via the Siddhi language)
   7. Support for current Context (we can build on Gopinath's paper, but it
   will need more work)
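
As a rough illustration of item 5, a k-slack style reorder buffer (illustrative
only, not Siddhi's actual reorder extension):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Buffer events and release them in event-time order once they are older than
// the latest event time seen minus the allowed lateness.
final class KSlackReorderer {

    static final class Event {
        final long eventTimeMs;
        final String payload;
        Event(long eventTimeMs, String payload) { this.eventTimeMs = eventTimeMs; this.payload = payload; }
    }

    private final long allowedLatenessMs;
    private long maxSeenEventTime = Long.MIN_VALUE;
    private final PriorityQueue<Event> buffer =
            new PriorityQueue<>(Comparator.comparingLong((Event e) -> e.eventTimeMs));

    KSlackReorderer(long allowedLatenessMs) { this.allowedLatenessMs = allowedLatenessMs; }

    // Accept one incoming event; returns the events that are now safe to emit,
    // ordered by event time rather than arrival time.
    List<Event> accept(Event incoming) {
        maxSeenEventTime = Math.max(maxSeenEventTime, incoming.eventTimeMs);
        buffer.add(incoming);
        List<Event> ready = new ArrayList<>();
        while (!buffer.isEmpty()
                && buffer.peek().eventTimeMs <= maxSeenEventTime - allowedLatenessMs) {
            ready.add(buffer.poll());
        }
        return ready;
    }
}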

I believe users will want each of the above, and we should document
concretely how to do them and fill any gaps.

Please comment. Happy to explain, defend as always

--Srinath




-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] How can we improve our profiles story?

2016-09-23 Thread Srinath Perera
I think this happens with the ESB NIO transport and the servlet transport for
webapps. (Nuwan, are there other examples?)

On Fri, Sep 23, 2016 at 9:42 AM, Kishanthan Thangarajah <kishant...@wso2.com
> wrote:

> The current issue is that all bundles and artifacts (conf files, webapps)
> are common to the server and shared among all the profiles. We don't have
> a way to delete or modify them when starting up a profile.
>
> One other option is that we could pack everything (profile-specific
> artifacts) in the base distribution and provide a build script (Ant) which
> creates a profile-specific runtime.
>
> We will check for the other alternatives along with this PoC and see.
>
> On Thu, Sep 22, 2016 at 12:27 PM, Afkham Azeez <az...@wso2.com> wrote:
>
>> We proposed an idea to build a pack based on a profile. That will contain
>> only the essential stuff. So rather than starting up a runtime and then
>> loading a profile, you build a pack that contains the bare minimum stuff
>> required. Perhaps we can have a descriptor which describes what non-OSGi
>> stuff are required for a profile and we can combine that with the OSGi
>> bundles.info to figure out exactly what is needed. Can someone in the
>> kernel team do a quick PoC?
>>
>> On Thu, Sep 22, 2016 at 11:26 AM, Srinath Perera <srin...@wso2.com>
>> wrote:
>>
>>> Sameera, are these things we can fix?
>>>
>>> --Srinath
>>>
>>> On Thu, Sep 22, 2016 at 11:23 AM, Nuwan Dias <nuw...@wso2.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> This is to raise some concerns over the current server profiles.
>>>> Although we are able to control the bundles which are loaded to the runtime
>>>> based on the -Dprofile parameter, we still lack the ability of removing
>>>> files and modifying configuration files when the server starts on a
>>>> profile. And this is forcing us to start unnecessary bundles at startup.
>>>> Let me explain...
>>>>
>>>> API Manager has both webapps and a gateway in its distribution. The
>>>> synapse bundles are only required in the Gateway profiles. However since
>>>> the axis2.xml file of API Manager defines the http transport senders and
>>>> receivers based on the Synapse passthrough senders and receivers, the axis2
>>>> engine will try to load the synapse classes on startup. Ideally if we were
>>>> able to modify the axis2.xml on the Publisher, Store and Key Manager
>>>> profiles and replace the passthrough senders and receivers with our
>>>> standard http senders and receivers, we could avoid loading the synapse
>>>> bundles on the Publisher, Store and Key Manager.
>>>>
>>>> The same problem occurs for registry indexers and handlers. Since the
>>>> registry indexers and handlers are configured on the registry.xml, even
>>>> though these are only required in the publisher and store profiles, these
>>>> bundles will be activated and running even on the Gateway, Key Manager and
>>>> Traffic Manager. So unless we modify the registry.xml on those nodes
>>>> manually, we can't stop those bundles from running.
>>>>
>>>> Another problem we're facing is the inability to remove webapps. Since
>>>> all webapps in the repository/deployment/server/webapps and
>>>> repository/deployment/server/jaggeryapps are deployed into the
>>>> runtime, unless we remove these webapps manually there is no other way to
>>>> stop them from being deployed in unrelated profiles.
>>>>
>>>> I heard there is a discussion to bind a profile to a container. Which
>>>> would solve these problems. However it still won't help the "non-container"
>>>> deployments. Are there ways to overcome the above mentioned limitations and
>>>> enhance the efficiency of our profiles?
>>>>
>>>> Thanks,
>>>> NuwanD.
>>>>
>>>> --
>>>> Nuwan Dias
>>>>
>>>> Software Architect - WSO2, Inc. http://wso2.com
>>>> email : nuw...@wso2.com
>>>> Phone : +94 777 775 729
>>>>
>>>
>>>
>>>
>>> --
>>> 
>>> Srinath Perera, Ph.D.
>>>http://people.apache.org/~hemapani/
>>>http://srinathsview.blogspot.com/
>>>
>>
>>
>>
>> --
>> *Afkham Azeez*
>> Director of Architecture; WSO2, Inc.; http://wso2.com
>> Member; Apache Software Foundation; http:/

Re: [Architecture] Testing DAS 3.1.0 Performance on EC2 with HBase

2016-09-22 Thread Srinath Perera
Please update the product docs (if not already done).

Also we should write an article.

--Srinath

On Tue, Sep 13, 2016 at 2:08 PM, Gokul Balakrishnan <go...@wso2.com> wrote:

> Hi all,
>
> The objective of this mail is to summarise the results of the recently
> conducted performance test round for DAS 3.1.0.
>
> These tests were  intended to measure the throughput of the batch and
> interactive analytics capabilities of DAS under different conditions;
> namely data persistence, Spark analytics job execution and indexing. For
> this purpose, we've used DAS 3.1.0 RC3 instances backed by an Apache HBase
> cluster running on HDFS as the data store, tuned for writes.
>
> This test round was conducted on Amazon EC2 nodes, in the following
> configuration:
>
> 3 DAS nodes (variable roles: publisher, receiver, analyzer and indexer):
> c4.2xlarge
> 1 HBase master + Hadoop Namenode: c3.2xlarge
> 9 HBase Regionservers + Hadoop Datanodes: c3.2xlarge
>
> *1. Persisting 1 billion events from the Smart Home DAS sample*
>
> This test was designed to test the data layer during sustained event
> publication. During testing, the TPS was around the 150K mark, and the
> HBase cluster's memstore flush (which suspends all writes) and minor
> compaction operations brought it down somewhat in bursts. Overall, we were
> able to achieve a mean of 96K TPS, but a steady rate of around 100-150K TPS
> as is achievable, as opposed to the current no-flow-control situation.
>
> The published data took around 950GB on the Hadoop filesystem, taking
> HDFS-level replication into account.
>
>
> ​
>
> Events      1,000,000,000
> Time (s)    10391.768
> Mean TPS    96230.01591
> ​
> *2. Analyzing 1 billion events through Spark*
>
> Spark queries from the Smart Home DAS sample were executed against the
> published data, and the analyzer node count was kept 2 and 3 respectively
> for 2 separate tests. We'd given 6 processor cores and 12GB dedicated
> memory for the Spark JVM during this test, and were able to get a
> throughput of over 1M TPS on Spark for 2 analyzers and about 1.3M TPS for 3
> analyzers.
>
> DAS read operations from the HBase cluster also leverage HBase data
> locality, which would have made the read process more efficient compared to
> random reads.
>
> The mean throughput readings from 3 tests at each case with a query
> involving aggregate functions and GROUP BY are as follows:
>
> INSERT OVERWRITE TABLE cityUsage SELECT metro_area, avg(power_reading) AS
> avg_usage,
> min(power_reading) AS min_usage, max(power_reading) AS max_usage FROM
> smartHomeData GROUP BY metro_area ;
>
>
>              2 Analyzer Nodes   3 Analyzer Nodes
> Records      1,000,000,000      1,000,000,000
> Time (s)     958.802            741.152
> Mean TPS     1042968.204        1349250.896
> *3. Persisting the entire Wikipedia corpus*
>
> This test involved publishing the entirety of the Wikipedia dataset, where
> a single event comprises of one Wiki article (16.8M articles in total).
> Events vary greatly in size, with the mean being ~3.5KB; hence, the
> throughput also varies greatly as expected. Here, we were able to see a
> mean throughput of around 9K TPS:
>
>
>
> Events 16753779
> Time (s) 1862.901
> Mean TPS 8993.381291
> ​​
> *4. Indexing the full Wikipedia dataset*
>
> In this test, the data from the Wikipedia dataset was indexed, whereby the
> articles would support full text search through Lucene. The index worker
> counts of 2 and 4 were tested, and 2 dedicated indexer nodes were used in
> the test to run the indexing jobs independently of each other.
>
> The TPS v time graph of the first indexer node with 4 dedicated index
> worker threads is as below:
>
>
> ​
> The overall results from both indexer nodes can be summarised as below:
>
> Records      16753779
>
> Node         2 worker threads   4 worker threads
> Indexer 1    2198.66 TPS        2268.62 TPS
> Indexer 2    4230.75 TPS        3048.91 TPS
>
>
> *5. Analyzing the Wikipedia dataset*
>
> Similar to the Smart Home dataset, Spark queries were run against the
> published Wikipedia dataset, using analyzer clusters of 2 and 3 nodes
> respectively. The results of one of these tests are as follows:
>
> INSERT INTO TABLE wikiContributorSummary SELECT contributor_username,
> COUNT(*) as page_count FROM wiki GROUP BY contributor_username;
>
> Records      16753779
>
>              2 Analyzer Nodes   4 Analyzer Nodes
> Time (s)     236.107            181.419
> TPS          70958.41716        92348.53571
>
>
> The full findings of this test may be found in the attached Spreadsheet.
>
> Best regards,
> ​
>  Testing DAS 3.1.0 Performance on a 10-node HBas...
> <https://docs.google.com/a/wso2.com/spreadsheets/d/1Ng7pTR0MpSg3Asn02idBIZaq8AhdNd24HySKp37G

Re: [Architecture] How can we improve our profiles story?

2016-09-21 Thread Srinath Perera
Sameera, are these things we can fix?

--Srinath

On Thu, Sep 22, 2016 at 11:23 AM, Nuwan Dias <nuw...@wso2.com> wrote:

> Hi,
>
> This is to raise some concerns over the current server profiles. Although
> we are able to control the bundles which are loaded to the runtime based on
> the -Dprofile parameter, we still lack the ability of removing files and
> modifying configuration files when the server starts on a profile. And this
> is forcing us to start unnecessary bundles at startup. Let me explain...
>
> API Manager has both webapps and a gateway in its distribution. The
> synapse bundles are only required in the Gateway profiles. However since
> the axis2.xml file of API Manager defines the http transport senders and
> receivers based on the Synapse passthrough senders and receivers, the axis2
> engine will try to load the synapse classes on startup. Ideally if we were
> able to modify the axis2.xml on the Publisher, Store and Key Manager
> profiles and replace the passthrough senders and receivers with our
> standard http senders and receivers, we could avoid loading the synapse
> bundles on the Publisher, Store and Key Manager.
>
> The same problem occurs for registry indexers and handlers. Since the
> registry indexers and handlers are configured on the registry.xml, even
> though these are only required in the publisher and store profiles, these
> bundles will be activated and running even on the Gateway, Key Manager and
> Traffic Manager. So unless we modify the registry.xml on those nodes
> manually, we can't stop those bundles from running.
>
> Another problem we're facing is the inability to remove webapps. Since all
> webapps in the repository/deployment/server/webapps and
> repository/deployment/server/jaggeryapps are deployed into the runtime,
> unless we remove these webapps manually there is no other way to stop them
> from being deployed in unrelated profiles.
>
> I heard there is a discussion to bind a profile to a container. Which
> would solve these problems. However it still won't help the "non-container"
> deployments. Are there ways to overcome the above mentioned limitations and
> enhance the efficiency of our profiles?
>
> Thanks,
> NuwanD.
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Progress] Use Machine Learning for Predictive Maintenance

2016-09-19 Thread Srinath Perera
Suho, Sumedha, this is a highly relevant IoT scenario, and the use case would
work on most time-series equipment data with fault details.

We need to figure out how and where to use it in the IoT story.

--Srinath



On Mon, Sep 19, 2016 at 6:05 PM, Supun Sethunga <sup...@wso2.com> wrote:

> Hi,
>
> What was the response variable and classes, in the classification scenario?
>
> Regards,
> Supun
>
> On Mon, Sep 19, 2016 at 2:19 PM, Roshan Alwis <rosh...@wso2.com> wrote:
>
>> Hi all,
>>
>> I am an intern who is currently working on the project "Use machine
>> learning for predictive maintenance", which is a use case of IOT. I have
>> been working with the "Turbofan Engine Degradation Simulation Data Set"
>> published by NASA as my core data set. The goal of this experiment is to
>> predict the remaining useful lifetime of turbofan engines based on their
>> sensor readings. I have built both classification and regression models for
>> this scenario.
>>
>> Link to the data set.
>> https://ti.arc.nasa.gov/tech/dash/pcoe/prognostic-data-repos
>> itory/#turbofan.
>>
>> The algorithms that I have used and their test results are,
>> *Random Forest Classification* [Classification]
>> Result : https://docs.google.com/a/wso2.com/document/d/1EiPbEA5cq4823
>> qTB5TAq80ItywzQwPdo8dn9iSQig4s/edit?usp=sharing
>>
>> *Random Forest Regression* [Regression]
>> Result : https://docs.google.com/a/wso2.com/document/d/1HKW84QyJyAwK6
>> WchRpGwmXVwDRdTEIAmdD1Bz5hVxjs/edit?usp=sharing
>>
>> *H2O Deep Learning* [Regression]
>> Result : https://docs.google.com/a/wso2.com/document/d/1SncvDlVsxsSfU
>> Ij11rAwWQWWU6uWW41pPSJ92iHuV-c/edit?usp=sharing
>>
>> *These results are only valid for following data sets,*
>> Training   : train_FD001.txt
>> Testing: test_FD001.txt
>> Validating : RUL_FD001.txt
>>
>> If you have any suggestions for improvements, please let me know. For
>> further information, contact through the email.
>>
>> Best Regards,
>> --
>> Roshan Alwis
>> *Trainee Software Engineer*
>> *WSO2*
>> m: 0715894672
>> a: No 3/56, Aluthgama Rd, Elpitiya.
>> e: rosh...@wso2.com
>> <https://web.facebook.com/alwisroshan>   <https://twitter.com/rm_alwis>
>> <https://lk.linkedin.com/in/roshanalwis>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Supun Sethunga*
> Senior Software Engineer
> WSO2, Inc.
> http://wso2.com/
> lean | enterprise | middleware
> Mobile : +94 716546324
> Blog: http://supunsetunga.blogspot.com
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Training ML Models with Python on DAS/Analytics Tables

2016-09-14 Thread Srinath Perera
Thanks Supun! I believe this would work.

I assume the same will work with plain Python scripts (without notebooks) as well.

--Srinath



On Wed, Sep 14, 2016 at 6:47 PM, Supun Sethunga <sup...@wso2.com> wrote:

> Hi Srinath/Nirmal
>
> I managed to get the $subject working. Here I connected iPython/Jupyter
> Notebook to pyspark, and pyspark submits the job to the remote Spark
> cluster (created by DAS). One of the advantages of using the Notebook is
> that a user can load the data in DAS tables as a Spark dataframe and work
> on it interactively.
>
> But it also has the following limitations:
>
>- The client side needs a Spark distribution (to use pyspark).
>- Have to limit the cores allocated to the Spark app used by DAS
>(CarbonAnalytics), so that the Spark app created by pyspark can run in
>parallel.
>- Have to set the spark-classpath at the client side with the jars
>used by DAS, so that once the job is submitted, the Spark executor knows
>where to look for the classes.
>
>
> *Training Models:*
>
> As we discussed offline, for large datasets we can directly use the
> algorithms in Spark's mllib and ml. This is very straightforward, as the
> data we get from DAS is a Spark dataframe, and hence we can train models on
> top of the dataframe (or convert it to an RDD).
> For small and medium datasets, we can convert the Spark dataframe to a
> pandas dataframe using df.toPandas(), which will load all the data into
> memory, and then train sklearn algorithms on top of that.
>
> A sample Python script can be found at [1].
>
> [1] https://github.com/SupunS/play-ground/blob/master/
> pyspark/PySpark-Sample.ipynb
>
> --
> *Supun Sethunga*
> Senior Software Engineer
> WSO2, Inc.
> http://wso2.com/
> lean | enterprise | middleware
> Mobile : +94 716546324
> Blog: http://supunsetunga.blogspot.com
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Multidimensional Space Search with Lucene 6 - Possible Scenarios and the API

2016-09-13 Thread Srinath Perera
>> Since this is specifically designed for Locations search it has more
>> useful queries than General Multidimensional Points.
>>
>> *Possible Queries for the API*
>>
>>- Search for the K-nearest points from a given location. (return the
>>given number of points)
>>- Search for the Points within a given radius from a given point.
>>- Sort them by the distance from a given location.
>>- Points inside a polygon.(Polygons are geometric shapes on the
>>surface of the planet. Example: map of a country)
>>- Get the number of points inside a polygon.*
>>- Get the number of points in each bucket where buckets are specified
>>as polygons.
>>- Get the number of points in each bucket where buckets are specified
>>by the distance from a given location.
>>
>> * Composite polygons are possible.
>> *Scenarios*
>>
>> *Airport Scenario *
>> If we index the set of airports in the world as GeoPoints. Following
>> queries are possible examples. (Here is the test code I implemented as
>> an example.)
>> <https://github.com/janakact/test_lucene/blob/master/src/test/java/TestMultiDimensionalQueries.java>
>>
>>- Find closest set of airports to a given town.
>>- Find the set of airports within a given radius from a particular
>>town.
>>- Find the set of airports inside a country. (Country can be given as
>>a polygon)
>>- Find the set of airports within a given range of Latitudes and
>>Longitudes. It is a Latitude, Longitude box query. (For a examples:
>>Airports closer to the equatorial)
>>- Find the set of airports closer to a given path. (Path can be
>>something like a road. Find the airports which are less than 50km away 
>> from
>>a given highway)
>>- Count the airports in each country by giving country maps as
>>polygons.
>>
>> *Indexing airplane paths*
>>
>>- It is possible to query for paths which goes through an interesting
>>area.
>>
>> Above example covers most of the functionalities that Lucene Space search
>> provides.
>> Here are some other examples,
>>
>>    - Number of television users a satellite can cover.(by indexing
>>receivers' locations)
>>- To find the number of stationary telescopes that can be used to
>>observe a solar eclipse. (by indexing telescope locations. Area the solar
>>eclipse is visible, can be represented as a polygon
>>http://eclipse.gsfc.nasa.gov/SEplot/SEplot2001/SE2016Sep01A.GIF
>><http://eclipse.gsfc.nasa.gov/SEplot/SEplot2001/SE2016Sep01A.GIF>)
>>
>> So, that's it.
>> Thank you.
>>
>> Regards,
>> Janaka Chathuranga
>>
>> --
>> Janaka Chathuranga
>> *Software Engineering Intern*
>> Mobile : *+94 (**071) 3315 725*
>> jana...@wso2.com
>>
>> <https://wso2.com/signature>
>>
>>
>
>
> --
> *Anjana Fernando*
> Associate Director / Architect
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-09-12 Thread Srinath Perera
Anusha, we should try AdaBoost as Geesara mentioned (when we are done with
what we are doing).

--Srinath

On Sun, Sep 11, 2016 at 10:52 AM, Anusha Jayasundara <anus...@wso2.com>
wrote:

> Hi Sumedha,
>
> I just detect the face. I went through a few articles about face
> recognition, and I have some sample code as well, but it is not very
> accurate.
>
> Thanks,
>
>
> On Fri, Sep 9, 2016 at 11:26 AM, Sumedha Rubasinghe <sume...@wso2.com>
> wrote:
>
>> On Fri, Sep 9, 2016 at 11:24 AM, Anusha Jayasundara <anus...@wso2.com>
>> wrote:
>>
>>> Hi Geesara,
>>>
>>> I used the Haar full-body cascade and the HoG pedestrian detection cascade.
>>> For the Haar full-body cascade they have mentioned that upper-body, lower-body
>>> and full-body detection are all included in the cascade. Even so, I once tried
>>> to use a separate upper-body detection cascade together with the full-body
>>> detection cascade, but when it was implemented the system took a long time to
>>> process even a simple video with two people.
>>> I'll upload my code to a GitHub repo.
>>> I still haven't worked with real-time CCTV videos, but I was able to build
>>> a real-time face detection system using the webcam of my laptop, and it had
>>> processing issues as the machine couldn't handle it.
>>>
>>
>> Anusha,
>> Did you just detect the face or associated that with a name as well?
>>
>>
>>
>>> We thought of doing the video processing outside of CEP and sending the
>>> processed data into CEP (i.e. human count, time_stamp, frame rate,
>>> etc.). For now I send that data into CEP as a JSON POST request.
>>>
>>>
>>> Thank You,
>>>
>>>
>>>
>>>
>>> On Wed, Sep 7, 2016 at 11:57 PM, Geesara Prathap <gees...@wso2.com>
>>> wrote:
>>>
>>>> Hi Anusha,
>>>>
>>>> A few suggestions to improve your implementation.
>>>> Haar and HoG are used to get visual descriptors which can be used to
>>>> describe an image. Both of them then use a boosting classifier such as
>>>> AdaBoost to tune up performance. When you are using a Haar-like feature
>>>> extraction method you need to use more than one model in order to make the
>>>> final decision. Let's say you are using a full-body classifier for human
>>>> detection; this classifier alone can't detect the upper body properly.
>>>> When Haar-like feature extraction is used you may have to use more than one
>>>> classifier, and the final decision will be taken by aggregation or composition
>>>> of those results. The next important thing is pre-processing. It may be
>>>> composed of color balancing, gamma correction, changing the color space and
>>>> other factors which are unique to the environment you are trying
>>>> out. The processing model is also important, since this has to be done in
>>>> real time. If you can explain your algorithm we will be able to provide some
>>>> guidance to improve it and get a better result.
>>>>
>>>> Since the main intention of this project is to facilitate support for
>>>> image processing in the WSO2 platform, I am just curious to know how you
>>>> process the video stream in real time with the help of CEP. Since you are
>>>> using CCTV feeds, which might be using RTSP or RTMP, where do you process
>>>> the incoming video stream? Are you going to develop RTSP or RTMP input
>>>> adapters so as to get the input stream into CEP?
>>>>
>>>> Thanks,
>>>> Geesara
>>>>
>>>> On Wed, Aug 31, 2016 at 8:16 PM, Anusha Jayasundara <anus...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> The Progress of the video processing project is described in the
>>>>> attached pdf.
>>>>>
>>>>> On Wed, Aug 31, 2016 at 11:39 AM, Srinath Perera <srin...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Anusha has the people counting from video working through CEP and
>>>>>> has a dashboard. (Anusha, can you send an update with screenshots?) We
>>>>>> will also set up a meeting.
>>>>>>
>>>>>> Also, it seems new cameras automatically do human detection etc. and add
>>>>>> object codes to videos, and if we can extract them, we can do some
>>>>>> analysis

Re: [Architecture] Multidimensional Space Search with Lucene 6 - Possible Scenarios and the API

2016-09-12 Thread Srinath Perera
ah loot=look

On Tue, Sep 13, 2016 at 10:37 AM, Srinath Perera <srin...@wso2.com> wrote:

> Hi Suho,
>
> Please loot this, and some of the scenarios should be handy for IoT
>
> --Srinath
>
> On Mon, Sep 12, 2016 at 9:52 AM, Janaka Thilakarathna <jana...@wso2.com>
> wrote:
>
>> Hi all,
>>
>> I am currently working on the project Multidimensional Space Search with
>> Lucene 6 for DAS. Here is a list of possible functionalities (and some
>> example scenarios) that we can provide using Lucene 6.1 's multi
>> dimensional space search.
>> (This is a brief description on collected details, to see more info and
>> the test codes, please visit my git repo
>> <https://github.com/janakact/test_lucene>.)
>>
>> Multidimensional space points in Lucene 6 can be categorized into two
>> types of space points depending on their query types and distribution of
>> points in the space.
>>
>>1. General Multidimensional space points.
>>2. Locations on the planet surface.
>>
>>
>> *1. General Multidimensional Space Points*
>> This is the generic type of multi dimensional space points. Those are
>> actually vector spaces. Space has a dimension K. In a K-dimensional space a
>> point is represented by a K number of numeric values.
>>
>> For an example 3D point will be represented by 3 float/double values.
>> (because distance is measured as a floating point number.)
>>
>> *Possible Queries for API*
>>
>>- Search for points with exact given values.
>>- Search for points which has one of the value from a given set of
>>values.
>>- Search for points within a given range.
>>- Get the number of points which has exact point.
>>- Get the number of points within a given range. (Ranges are
>>multidimensional ranges. In 3D, they are boxes.)
>>- Divide points into range-buckets and get the count in each buckets.
>>(Range bucket is a range which has a label in it)
>>
>> *Scenarios*
>>
>> Since this is a more general space definition, it can have many
>> applications. The dimensions can be used to represent any numeric
>> fields: locations in a 3D space if we use 3-dimensional points, or both
>> space and time if we use 4 dimensions (X, Y, Z and time are all numeric
>> fields; double can be used).
>>
>> *Independent Parameters*
>>
>> It can be used to represent completely independent parameters. Suppose
>> there is a set of employees. A multidimensional space can be used to
>> represent their different parameters.
>>
>> *Example:* Age, salary, height, average number of leaves per month.
>> These are 4 numeric fields which are completely independent. They can be
>> represented as a 4-dimensional space, and each person will be represented
>> as a point in this space. Then the user can use Lucene to query these
>> people.
>>
>>    - How many people are aged 25-30 years, have a height of
>>    160cm to 180cm, a salary of 50,000 to 75,000, and take 1-5 leaves per
>>    month on average?
>>    - Or the user can divide people into different buckets and count them
>>    depending on the ranges for each parameter.
>>
>> (Of course this can be done by indexing those parameters separately and
>> querying using the 'AND' keyword, but indexing them together as a
>> multidimensional space will make searching more efficient.)
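
To make that concrete, here is a minimal sketch (not from the original mail) of how such a 4-dimensional employee point could be indexed and range-queried with Lucene 6's DoublePoint API; the field name "profile", the class name and the sample values are made up for illustration:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.DoublePoint;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.RAMDirectory;

public class EmployeePointExample {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        Document doc = new Document();
        // One 4-dimensional point per employee: age, salary, height (cm), avg leaves/month.
        doc.add(new DoublePoint("profile", 27, 60000, 172, 3));
        doc.add(new StoredField("name", "employee-1"));
        writer.addDocument(doc);
        writer.close();

        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));
        // Multidimensional range query: age 25-30, salary 50k-75k, height 160-180, leaves 1-5.
        Query q = DoublePoint.newRangeQuery("profile",
                new double[]{25, 50000, 160, 1},
                new double[]{30, 75000, 180, 5});
        System.out.println("matches: " + searcher.count(q));
    }
}

Bucketed counts can be obtained by issuing one such range query per bucket and counting each.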
>>
>> *2. Locations on the planet surface. (Latitude, Longitude)*
>> Here points represent locations on the planet's surface. This is a
>> more specific type of search provided by Lucene to index and search
>> geographical locations.
>>
>> These points are created using only the latitude and longitude values of
>> locations.
>> **Please note that altitude is not yet supported by Lucene.*
>>
>> Since this is specifically designed for location search, it has more
>> useful queries than general multidimensional points.
>>
>> *Possible Queries for the API*
>>
>>    - Search for the K nearest points from a given location (returns the
>>    given number of points).
>>    - Search for the points within a given radius from a given point.
>>    - Sort them by the distance from a given location.
>>    - Points inside a polygon. (Polygons are geometric shapes on the
>>    surface of the planet. Example: map of a country.)
>>    - Get the number of points inside a polygon.*
>>    - Get the number of points in each bucket, where buckets are specified
>>

Re: [Architecture] Multidimensional Space Search with Lucene 6 - Possible Scenarios and the API

2016-09-12 Thread Srinath Perera
ies.java>
>
>- Find closest set of airports to a given town.
>- Find the set of airports within a given radius from a particular
>town.
>- Find the set of airports inside a country. (Country can be given as
>a polygon)
>- Find the set of airports within a given range of Latitudes and
>Longitudes. It is a Latitude, Longitude box query. (For a examples:
>Airports closer to the equatorial)
>- Find the set of airports closer to a given path. (Path can be
>something like a road. Find the airports which are less than 50km away from
>a given highway)
>- Count the airports in each country by giving country maps as
>polygons.
>
> *Indexing airplane paths*
>
>- It is possible to query for paths which goes through an interesting
>area.
>
> Above example covers most of the functionalities that Lucene Space search
> provides.
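
As a rough illustration of the airport queries above, a sketch follows; only the LatLonPoint calls are Lucene 6 API, while the field name, coordinates and helper class are invented:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.LatLonPoint;
import org.apache.lucene.document.StoredField;
import org.apache.lucene.search.Query;

public class AirportQueries {
    // Index side: one document per airport, keyed by its latitude/longitude.
    static Document airportDoc(String code, double lat, double lon) {
        Document doc = new Document();
        doc.add(new LatLonPoint("location", lat, lon));
        doc.add(new StoredField("code", code));
        return doc;
    }

    // Query side: airports within 50 km of a town, and airports close to the equator.
    static Query within50KmOf(double lat, double lon) {
        return LatLonPoint.newDistanceQuery("location", lat, lon, 50_000); // radius in metres
    }

    static Query nearEquator() {
        return LatLonPoint.newBoxQuery("location", -5, 5, -180, 180); // minLat, maxLat, minLon, maxLon
    }
}

Polygon queries (LatLonPoint.newPolygonQuery) and per-country counts follow the same pattern.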
> Here are some other examples,
>
>- Number of television users a satellite can cover.(by indexing
>receivers' locations)
>- To find the number of stationary telescopes that can be used to
>observe a solar eclipse. (by indexing telescope locations. Area the solar
>eclipse is visible, can be represented as a polygon
>http://eclipse.gsfc.nasa.gov/SEplot/SEplot2001/SE2016Sep01A.GIF
><http://eclipse.gsfc.nasa.gov/SEplot/SEplot2001/SE2016Sep01A.GIF>)
>
> So, that's it.
> Thank you.
>
> Regards,
> Janaka Chathuranga
>
> --
> Janaka Chathuranga
> *Software Engineering Intern*
> Mobile : *+94 (**071) 3315 725*
> jana...@wso2.com
>
> <https://wso2.com/signature>
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Avoiding Sticky Sessions in APIM Distributed Setup?

2016-09-06 Thread Srinath Perera
Hi Shammi,

The reason we need sticky sessions is that otherwise we need very fast session
replication across nodes, which is a very hard and unscalable problem (another
instance of "reliable totally ordered multicast").

Otherwise, when a client hits a node and comes back, it might be sent to a
different node, and the session would not be valid.

If we are to remove sticky sessions, we have to solve the above problem.

One solution may be to make the session time-limited and force the client
to recreate the session from time to time (e.g. once every 10 minutes). Then the
load gets redistributed.

--Srinath

On Wed, Sep 7, 2016 at 7:05 AM, Sumedha Rubasinghe <sume...@wso2.com> wrote:

> Resending my reply as it got blocked.
>
> On Wed, Sep 7, 2016 at 2:34 AM, Shammi Jayasinghe <sha...@wso2.com> wrote:
>
>> Hi,
>>
>> WSO2 always asks to enable sticky sessions in a distributed setup which
>> has multiple key manager nodes. According to my understanding, We needs to
>> have this since there are multiple calls going back and forth between
>> Gateway and the Key manager node when it comes to do key generation and
>> other token related stuff.
>>
>> Having sticky sessions always enabled introduces
>> another problem when balancing the load, as in the following example.
>>
>> Eg: We have 5 GW servers in a cluster. There are 1000 users using this
>> system. We declare 1 GW server can handle 1000 TPS as max. So, the total
>> Max Capacity of the system, We say as 5000 TPS.
>>
>> At a particular time, there are 5 users generating traffic at around 800 TPS
>> each, and the other 995 users generate only 1 TPS. So, in total it is under the
>> max capacity:
>>
>> 800 x 5 + 995 = 4995
>>
>> But with sticky sessions, if these 5 top users made their initial
>> sessions with a single GW node, the load on that server will be 4000 TPS,
>> which exhausts its resources.
>>
>> So, having sticky sessions introduces a problem with load balancing.
>>
>
>
>>
>>
> Shammi,
> I don't see a generic way of dealing with this unless we use a custom load
> balancer logic.
>
> But for this to happen (in a RR setup with 5 G/Ws), the top 5 users need to
> initiate their calls at a very specific point, don't they?
> i.e
> 1st top user connects and get sticked to GW1
> 4 other users connect to GW 2,3,4,5 respectively
> 2nd top user connects and RR directs him to GW1
>
> Is this a common occurrence at customer deployments?
>
> The solution I see is:
> once the system is in equilibrium (after warm-up), G/Ws talk to each
> other and balance the load. But this requires talking back to the LB to inform
> it of the new routing logic after balancing.
>
>
>
>
>>
>> Is there any possibility to avoid this sticky session requirement by
>> introducing a way to do token related operations in a single call from GW
>> to KM ?
>>
>>
>> Thanks
>> shammi
>>
>> --
>> Best Regards,
>>
>> *  Shammi Jayasinghe*
>>
>>
>> *Technical Lead*
>> *WSO2, Inc.*
>> *+1-812-391-7730 <%2B1-812-391-7730>*
>> *+1-812-327-3505 <%2B1-812-327-3505>*
>>
>> *http://shammijayasinghe.blogspot.com
>> <http://shammijayasinghe.blogspot.com>*
>>
>>
>
>
> --
> /sumedha
> m: +94 773017743
> b :  bit.ly/sumedha
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DB event listener for ESB

2016-08-31 Thread Srinath Perera
+1. We can say that we only monitor one table and recommend setting up
triggers if they need to detect a lot of conditions.
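
For concreteness, the polling side of that single monitor-table approach could look roughly like the sketch below; the CHANGE_LOG table, its columns and the class name are invented for illustration, and a real inbound endpoint would hand the rows to a mediation sequence instead of printing them:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MonitorTablePoller {

    private long lastSeenId = 0;

    public void poll(Connection conn) throws SQLException {
        // Triggers on the business tables would insert one row per change into CHANGE_LOG.
        String sql = "SELECT ID, TABLE_NAME, OPERATION FROM CHANGE_LOG WHERE ID > ? ORDER BY ID";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, lastSeenId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastSeenId = rs.getLong("ID");
                    // Here the inbound endpoint would hand the change to a mediation sequence.
                    System.out.println(rs.getString("TABLE_NAME") + " changed: " + rs.getString("OPERATION"));
                }
            }
        }
    }
}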

On Wed, Aug 31, 2016 at 3:12 AM, Chamila De Alwis <chami...@wso2.com> wrote:

> One hybrid solution would be to have db triggers adding records to a
> single "monitor" table in which a polling inbound endpoint can periodically
> check for changes [1]. Based on the new records, the consequent
> sequence can decide which actions to execute.
>
> [1] - http://stackoverflow.com/questions/6153330/can-a-sql-
> trigger-call-a-web-service
>
> Regards,
> Chamila de Alwis
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer | WSO2
> Blog: https://medium.com/@chamilad
>
>
>
> On Tue, Aug 30, 2016 at 4:52 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Malaka,
>>
>> If it is done using triggers, it can be done without us doing anything. I
>> assume a trigger can hit a URL in the ESB that triggers processing.
>>
>> Adding a DB listener as an inbound endpoint is OK.
>>
>> I suggest we only do DB listener.
>>
>> --Srinath
>>
>> On Tue, Aug 30, 2016 at 3:13 PM, Malaka Silva <mal...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> There are requirements to do additional operations when there are
>>> changes done to organization data.
>>>
>>> One way to do this is to create triggers at database level. However
>>> there are limitations on actions users can perform using triggers.
>>>
>>> So if we implement custom inbound endpoint we can cover most of the use
>>> cases.
>>> [image: Inline image 1]
>>>
>>>
>>>
>>> There are several ways to do that. But we already know using JDBC is
>>> impossible at the moment. One way to achieve this is implementing a polling
>>> inbound to monitor the changes in the database object (such as a table in
>>> the database). If any change occurred, that inbound can invoke a sequence.
>>> But this is not a good practice. What if your database has more than ten
>>> tables? Then users have to create ten threads, one for each table, and that
>>> would be a great mess in terms of performance.
>>>
>>> There are also vendor specific solutions provided. [1] [2]
>>>
>>> JPA also provides this capability [3]. However, with this, users need to
>>> create entities for their environment, and using those with ESB is complex.
>>>
>>> Using Hibernate we can do the same and maintain the configuration in XML.
>>>
>>> Thoughts about this inbound are welcome?
>>>
>>> [1] http://stackoverflow.com/questions/12618915/how-to-imple
>>> ment-a-db-listener-in-java
>>> [2] http://www.ibm.com/support/knowledgecenter/SSSHYH_5.1.1/
>>> com.ibm.netcoolimpact.doc5.1.1/solution/imsg_db_listeners_da
>>> tabase_listener_overview_c.html
>>> [3] https://docs.jboss.org/hibernate/entitymanager/3.6/reference
>>> /en/html/listeners.html​
>>> [4] https://dunithd.wordpress.com/2009/10/27/create-database-tri
>>> ggers-like-features-using-hibernate-events/
>>>
>>>
>>> Best Regards,
>>>
>>> Malaka Silva
>>> Senior Technical Lead
>>> M: +94 777 219 791
>>> Tel : 94 11 214 5345
>>> Fax :94 11 2145300
>>> Skype : malaka.sampath.silva
>>> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
>>> Blog : http://mrmalakasilva.blogspot.com/
>>>
>>> WSO2, Inc.
>>> lean . enterprise . middleware
>>> https://wso2.com/signature
>>> http://www.wso2.com/about/team/malaka-silva/
>>> <http://wso2.com/about/team/malaka-silva/>
>>> https://store.wso2.com/store/
>>>
>>> Don't make Trees rare, we should keep them with care
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-08-31 Thread Srinath Perera
Anusha has the people counting from video working through CEP and has a
dashboard. (Anusha, can you send an update with screenshots?) We will also
set up a meeting.
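
For anyone following along, one common way to turn frames into a per-frame people count that can then be published to CEP is OpenCV's stock HOG pedestrian detector. The sketch below is illustrative only, not Anusha's implementation; the stream URL, class name and the way the count is published are placeholders, and it assumes the OpenCV 3.x Java bindings:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfRect;
import org.opencv.objdetect.HOGDescriptor;
import org.opencv.videoio.VideoCapture;

public class PeopleCountPublisher {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);      // load the OpenCV native library

        VideoCapture capture = new VideoCapture("rtsp://camera-host/stream"); // placeholder URL
        HOGDescriptor hog = new HOGDescriptor();
        hog.setSVMDetector(HOGDescriptor.getDefaultPeopleDetector()); // stock pedestrian detector

        Mat frame = new Mat();
        while (capture.read(frame)) {
            MatOfRect people = new MatOfRect();
            MatOfDouble weights = new MatOfDouble();
            hog.detectMultiScale(frame, people, weights);
            int count = people.toArray().length;
            // In the actual project this count would be published as an event to CEP;
            // printing is just for the sketch.
            System.out.println("people in frame: " + count);
        }
        capture.release();
    }
}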

Also, it seems new cameras automatically do human detection etc. and add object
codes to videos, and if we can extract them, we can do some analysis
without heavy processing as well. We will explore this too.

Also, Facebook open-sourced their object detection code, called FaceMask:
https://code.facebook.com/posts/561187904071636. Another one to look at.

--Srinath



On Mon, Aug 15, 2016 at 4:14 PM, Sanjiva Weerawarana <sanj...@wso2.com>
wrote:

> Looks good!
>
> In terms of test data we can take the video cameras in the LK Palm Grove
> lobby as an input source to play around with people analysis. For vehicles
> we can plop a camera pointing to Duplication Road and get plenty of data
> :-).
>
> I guess we should do some small experiments to see how things work.
>
> Sanjiva.
>
> On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Attached document list some of the initial ideas about the topic. Anusha
>> is exploring some of the ideas as an intern project.
>>
>> Please comment and help ( specially if you have worked on this area or
>> has tried out things)
>>
>>
>> Thanks
>> Srinath
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>
>
>
> --
> Sanjiva Weerawarana, Ph.D.
> Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
> email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
> x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
> blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
> Lean . Enterprise . Middleware
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DB event listener for ESB

2016-08-30 Thread Srinath Perera
Hi Malaka,

If it is done using triggers, it can be done without us doing anything. I
assume a trigger can hit a URL in the ESB that triggers processing.

Adding a DB listener as an inbound endpoint is OK.

I suggest we only do DB listener.

--Srinath

On Tue, Aug 30, 2016 at 3:13 PM, Malaka Silva <mal...@wso2.com> wrote:

> Hi,
>
> There are requirements to do additional operations when there are
> changes done to organization data.
>
> One way to do this is to create triggers at database level. However there
> are limitations on actions users can perform using triggers.
>
> So if we implement custom inbound endpoint we can cover most of the use
> cases.
> [image: Inline image 1]
>
>
>
> There are several ways to do that. But we already know using JDBC is
> impossible at the moment. One way to achieve this is implementing a polling
> inbound to monitor the changes in the database object (such as a table in
> the database). If any change occurred, that inbound can invoke a sequence.
> But this is not a good practice. What if your database has more than ten
> tables? Then users have to create ten threads, one for each table, and that
> would be a great mess in terms of performance.
>
> There are also vendor specific solutions provided. [1] [2]
>
> JPA also provides this capability [3]. However, with this, users need to
> create entities for their environment, and using those with ESB is complex.
>
> Using Hibernate we can do the same and maintain the configuration in XML.
>
> Thoughts about this inbound are welcome?
>
> [1] http://stackoverflow.com/questions/12618915/how-to-imple
> ment-a-db-listener-in-java
> [2] http://www.ibm.com/support/knowledgecenter/SSSHYH_5.1.1/
> com.ibm.netcoolimpact.doc5.1.1/solution/imsg_db_listeners_d
> atabase_listener_overview_c.html
> [3] https://docs.jboss.org/hibernate/entitymanager/3.6/reference
> /en/html/listeners.html​
> [4] https://dunithd.wordpress.com/2009/10/27/create-database-tri
> ggers-like-features-using-hibernate-events/
>
>
> Best Regards,
>
> Malaka Silva
> Senior Technical Lead
> M: +94 777 219 791
> Tel : 94 11 214 5345
> Fax :94 11 2145300
> Skype : malaka.sampath.silva
> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
> Blog : http://mrmalakasilva.blogspot.com/
>
> WSO2, Inc.
> lean . enterprise . middleware
> https://wso2.com/signature
> http://www.wso2.com/about/team/malaka-silva/
> <http://wso2.com/about/team/malaka-silva/>
> https://store.wso2.com/store/
>
> Don't make Trees rare, we should keep them with care
>
> _______
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] Problem with integrating Event Stream OSGi Service

2016-08-23 Thread Srinath Perera
+1. Actually, keeping the event stream definitions in the client as well leads
to a lot of complications when server and client versions do not match. We
had this problem with BAM and decided to keep them only in the server. (I
can find and send the mailing thread if needed.)

If a client needs them, it should get them from the server.

--Srinath

On Wed, Aug 24, 2016 at 10:05 AM, Nuwan Dias <nuw...@wso2.com> wrote:

> Hi,
>
> I spoke with Srinath regarding this too. It would be ideal to have a
> global config which can override the per-stream configs so that we won't
> have to maintain the DAS configs in multiple places. In all cases we've
> come across so far we've only seen people use a single DAS (or cluster) and
> publish all streams to that DAS cluster. So at least as of now I don't see
> much value in having per stream config files.
>
> It would also be good if we can omit the necessity of having to define the
> streams on the client side (because they're already defined on the server).
> I see two main problems of having to define them on the clients.
>
> 1. When we have multiple event generators (a gateway cluster), the streams
> will have to be defined on all nodes. Do we use dep-sync to sync up the
> configs on the clients? Even if we do, I think its better to not depend on
> dep-sync for the new stuff we're developing because we're moving away from
> dep-sync soon.
>
> 2. Problems in upgrading to newer product versions. When upgrading the
> product version, if a schema change has occurred, currently we only have to
> worry about changing the schema on the server side. With the above
> limitation, we now have to worry about changing the clients too.
>
> Thanks,
> NuwanD.
>
> On Tue, Aug 23, 2016 at 10:16 PM, Malintha Amarasinghe <malint...@wso2.com
> > wrote:
>
>> Hi All,
>>
>> Currently APIM is using an internal APIM-specific configuration that resides
>> in api-manager.xml, which includes the DAS server URL, username, password, etc.
>> That configuration is used to instantiate an org.wso2.carbon.databridge.
>> agent.DataPublisher object, which is then used to publish events
>> directly to streams in DAS.
>>
>> As it is an APIM-specific configuration, it is not reusable by other
>> non-APIM features. Also, org.wso2.carbon.databridge.agent.DataPublisher is
>> intended to be used in non-Carbon environments; for Carbon environments,
>> the recommended way is to use the Event Stream OSGi service.
>>
>> So we were trying to re-structure APIM data publishing code to use new
>> Event Stream OSGi service explained in [1].
>>
>> 1. Create all the streams defined in DAS in APIM.
>> 2. Create an *event publisher per stream* which takes data from the
>> stream and publishes it to DAS.
>> 3. In each event publisher we need to configure the DAS-specific
>> configuration.
>>
>> Ex:
>> <eventPublisher name="..." xmlns="http://wso2.org/carbon/eventpublisher">
>>   <from streamName="..." version="..."/>
>>   <mapping customMapping="disable" type="wso2event"/>
>>   <to eventAdapterType="wso2event">
>>     <property name="username">admin</property>
>>     <property name="protocol">thrift</property>
>>     <property name="publishingMode">non-blocking</property>
>>     <property name="publishTimeout">0</property>
>>     <property name="receiverURL">tcp://localhost:7611</property>
>>     <property name="password" encrypted="true">...</property>
>>   </to>
>> </eventPublisher>
>>
>> But we are facing a small problem here; in APIM there are many event
>> streams being used. If there's a change to the DAS configuration, we
>> need to change it in all the event publishers, which makes it difficult to
>> maintain. As per the offline chat with the DAS team, this is a current
>> limitation.
>>
>> Are we going to move forward with the existing implementation?
>>
>> Thanks,
>> Malintha
>>
>> [1] [Dev] Common configuration for publishing events from carbon servers
>> to DAS/CEP
>> --
>> Malintha Amarasinghe
>> Software Engineer
>> *WSO2, Inc. - lean | enterprise | middleware*
>> http://wso2.com/
>>
>> Mobile : +94 712383306
>>
>
>
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC: Video and Image Processing Support in WSO2 Platform

2016-08-10 Thread Srinath Perera
Anusha can you send notes about what we did so far to this thread.

On Wed, Aug 10, 2016 at 3:02 PM, Srinath Perera <srin...@wso2.com> wrote:

> Attached document list some of the initial ideas about the topic. Anusha
> is exploring some of the ideas as an intern project.
>
> Please comment and help ( specially if you have worked on this area or has
> tried out things)
>
>
> Thanks
> Srinath
>
> --
> ====
> Srinath Perera, Ph.D.
>http://people.apache.org/~hemapani/
>http://srinathsview.blogspot.com/
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Arch] Adding CEP and ML samples to DAS distribution in a consistent way

2016-08-10 Thread Srinath Perera
ng it for this release ?
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Suho
>>>>>>>>
>>>>>>>> On Wed, Aug 3, 2016 at 6:31 PM, Ramith Jayasinghe <ram...@wso2.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I think we need to ship samples with product. otherwise, The first
>>>>>>>>> 5-minite experience of users will be negatively affected.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> ___
>>>>>>>>> Architecture mailing list
>>>>>>>>> Architecture@wso2.org
>>>>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> *S. Suhothayan*
>>>>>>>> Associate Director / Architect & Team Lead of WSO2 Complex Event
>>>>>>>> Processor
>>>>>>>> *WSO2 Inc. *http://wso2.com
>>>>>>>> * <http://wso2.com/>*
>>>>>>>> lean . enterprise . middleware
>>>>>>>>
>>>>>>>>
>>>>>>>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>>>>>>>> http://suhothayan.blogspot.com/ 
>>>>>>>> <http://suhothayan.blogspot.com/>twitter:
>>>>>>>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | 
>>>>>>>> linked-in:
>>>>>>>> http://lk.linkedin.com/in/suhothayan 
>>>>>>>> <http://lk.linkedin.com/in/suhothayan>*
>>>>>>>>
>>>>>>>> ___
>>>>>>>> Architecture mailing list
>>>>>>>> Architecture@wso2.org
>>>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Niranda Perera*
>>>>>>> Software Engineer, WSO2 Inc.
>>>>>>> Mobile: +94-71-554-8430
>>>>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>>>>> https://pythagoreanscript.wordpress.com/
>>>>>>>
>>>>>>> ___
>>>>>>> Architecture mailing list
>>>>>>> Architecture@wso2.org
>>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Sinthuja Rajendran*
>>>>>> Technical Lead
>>>>>> WSO2, Inc.:http://wso2.com
>>>>>>
>>>>>> Blog: http://sinthu-rajan.blogspot.com/
>>>>>> Mobile: +94774273955
>>>>>>
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Architecture mailing list
>>>>>> Architecture@wso2.org
>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> W.G. Gihan Anuruddha
>>>>> Senior Software Engineer | WSO2, Inc.
>>>>> M: +94772272595
>>>>>
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> *S. Suhothayan*
>>>> Associate Director / Architect & Team Lead of WSO2 Complex Event
>>>> Processor
>>>> *WSO2 Inc. *http://wso2.com
>>>> * <http://wso2.com/>*
>>>> lean . enterprise . middleware
>>>>
>>>>
>>>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>>>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>>>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>>>> http://lk.linkedin.com/in/suhothayan 
>>>> <http://lk.linkedin.com/in/suhothayan>*
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> *Niranda Perera*
>>> Software Engineer, WSO2 Inc.
>>> Mobile: +94-71-554-8430
>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>> https://pythagoreanscript.wordpress.com/
>>>
>>
>>
>>
>> --
>> *Dilini Muthumala*
>> Senior Software Engineer,
>> WSO2 Inc.
>>
>> *E-mail :* dil...@wso2.com
>> *Mobile: *+94 713-400-029
>>
>
>
>
> --
> *Dilini Muthumala*
> Senior Software Engineer,
> WSO2 Inc.
>
> *E-mail :* dil...@wso2.com
> *Mobile: *+94 713-400-029
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Smart Grid Project : ML Predictions

2016-07-25 Thread Srinath Perera
adding architecture@

What tool did you use to train the regression? Is it WSO2 ML? Can you
share details about the process?

--Srinath

On Mon, Jul 25, 2016 at 2:04 PM, Sanjaya De Silva <sanja...@wso2.com> wrote:

> Hi all,
>
> Following are the files that I used to train and test.
>
> On Mon, Jul 25, 2016 at 1:59 PM, Nihla Akram <ni...@wso2.com> wrote:
>
>> Hello All,
>>
>>
>> The following are a few attachments which were used to train and test the ML
>> values obtained for the clearing price. Please note that the predicted values
>> weren't accurate.
>> *trainerData.csv* is the file used to train the model.
>> *resultData.csv* is the result file produced for predictions on*
>> testData.csv* file.
>>
>>
>> The configurations of the Model were as follows.
>> Algorithm : Linear Regression
>> Response variable : clearingprice
>> Train data fraction : 0.7
>>
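
For comparison, the same experiment sketched directly on Spark's DataFrame-based MLlib API (WSO2 ML runs on Spark underneath, but this is not its internal code; the feature column names are placeholders, since the CSV schema is not shown here):

import org.apache.spark.ml.evaluation.RegressionEvaluator;
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.ml.regression.LinearRegression;
import org.apache.spark.ml.regression.LinearRegressionModel;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ClearingPriceRegression {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("clearing-price").master("local[*]").getOrCreate();

        Dataset<Row> data = spark.read()
                .option("header", "true").option("inferSchema", "true")
                .csv("trainerData.csv");

        // "demand" and "hour" are placeholder feature columns.
        VectorAssembler assembler = new VectorAssembler()
                .setInputCols(new String[]{"demand", "hour"})
                .setOutputCol("features");
        Dataset<Row>[] splits = assembler.transform(data).randomSplit(new double[]{0.7, 0.3}, 42L);

        LinearRegressionModel model = new LinearRegression()
                .setLabelCol("clearingprice")
                .setFeaturesCol("features")
                .fit(splits[0]);

        double rmse = new RegressionEvaluator()
                .setLabelCol("clearingprice")
                .setPredictionCol("prediction")
                .setMetricName("rmse")
                .evaluate(model.transform(splits[1]));
        System.out.println("RMSE on held-out 30%: " + rmse);
        spark.stop();
    }
}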
>>
>>
>> Thanks,
>> Nihla
>>
>>
>> --
>> *Nihla Akram*
>> Software Engineering Intern
>>
>> +94 72 667 9482 <%2B94%2072%6679482>
>>
>>
>
>
> --
> Thank you
> Best Regards
>
> Sanjaya De Silva
> Trainee Software Engineer
> WSO2
> +94774181056
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Caching Real time analytics data

2016-07-08 Thread Srinath Perera
t;>>>> --
>>>>>>>>>>> Sachith Withana
>>>>>>>>>>> Software Engineer; WSO2 Inc.; http://wso2.com
>>>>>>>>>>> E-mail: sachith AT wso2.com
>>>>>>>>>>> M: +94715518127
>>>>>>>>>>> Linked-In: <http://goog_416592669>
>>>>>>>>>>> https://lk.linkedin.com/in/sachithwithana
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> *Sinthuja Rajendran*
>>>>>>>>>> Technical Lead
>>>>>>>>>> WSO2, Inc.:http://wso2.com
>>>>>>>>>>
>>>>>>>>>> Blog: http://sinthu-rajan.blogspot.com/
>>>>>>>>>> Mobile: +94774273955
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Sachith Withana
>>>>>>>>> Software Engineer; WSO2 Inc.; http://wso2.com
>>>>>>>>> E-mail: sachith AT wso2.com
>>>>>>>>> M: +94715518127
>>>>>>>>> Linked-In: <http://goog_416592669>
>>>>>>>>> https://lk.linkedin.com/in/sachithwithana
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> *S. Suhothayan*
>>>>>>>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>>>>>>>> *WSO2 Inc. *http://wso2.com
>>>>>>>> * <http://wso2.com/>*
>>>>>>>> lean . enterprise . middleware
>>>>>>>>
>>>>>>>>
>>>>>>>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>>>>>>>> http://suhothayan.blogspot.com/ 
>>>>>>>> <http://suhothayan.blogspot.com/>twitter:
>>>>>>>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | 
>>>>>>>> linked-in:
>>>>>>>> http://lk.linkedin.com/in/suhothayan 
>>>>>>>> <http://lk.linkedin.com/in/suhothayan>*
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> *Tharik Kanaka*
>>>>>>>
>>>>>>> WSO2, Inc |#20, Palm Grove, Colombo 03, Sri Lanka
>>>>>>>
>>>>>>> Email: tha...@wso2.com | Web: www.wso2.com
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *V. Mohanadarshan*
>>>>>> *Associate Tech Lead,*
>>>>>> *Data Technologies Team,*
>>>>>> *WSO2, Inc. http://wso2.com <http://wso2.com> *
>>>>>> *lean.enterprise.middleware.*
>>>>>>
>>>>>> email: mo...@wso2.com
>>>>>> phone:(+94) 771117673
>>>>>>
>>>>>> ___
>>>>>> Architecture mailing list
>>>>>> Architecture@wso2.org
>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Software Engineer
>>>>> WSO2 Inc.; http://wso2.com
>>>>> <http://www.google.com/url?q=http%3A%2F%2Fwso2.com=D=1=AFQjCNEZvyc0uMD1HhBaEGCBxs6e9fBObg>
>>>>> lean.enterprise.middleware
>>>>>
>>>>> mobile: *+94728671315 <%2B94728671315>*
>>>>>
>>>>>
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> *Tharik Kanaka*
>>>>
>>>> WSO2, Inc |#20, Palm Grove, Colombo 03, Sri Lanka
>>>>
>>>> Email: tha...@wso2.com | Web: www.wso2.com
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> Software Engineer
>>> WSO2 Inc.; http://wso2.com
>>> <http://www.google.com/url?q=http%3A%2F%2Fwso2.com=D=1=AFQjCNEZvyc0uMD1HhBaEGCBxs6e9fBObg>
>>> lean.enterprise.middleware
>>>
>>> mobile: *+94728671315 <%2B94728671315>*
>>>
>>>
>>
>>
>> --
>>
>> *Tharik Kanaka*
>>
>> WSO2, Inc |#20, Palm Grove, Colombo 03, Sri Lanka
>>
>> Email: tha...@wso2.com | Web: www.wso2.com
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] How do we get DAS server location?

2016-06-29 Thread Srinath Perera
Resending as it hits a filter rule.

Gokul, please give an update on this?

--Srinath
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Monitor Logged In Users/Sessions

2016-06-27 Thread Srinath Perera
Please make sure we do not do the work twice. If what this does is something
IS analytics can use later, then OK. But if one is redundant in the end,
that is a waste.

--Srinath

On Tue, Jun 28, 2016 at 10:40 AM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

>
>
> On Tue, Jun 28, 2016 at 10:23 AM, Mohanadarshan Vivekanandalingam <
> mo...@wso2.com> wrote:
>
>> Hi Srinath,
>>
>> The above-mentioned scenarios are not handled in the initial version of
>> session-related analytics (and they are not in our future list either). The
>> scenarios that we are covering are mentioned in [1]. We are looking at
>> overall session analytics and not focusing on user-specific sessions or
>> session-specific users.
>>
>> But, as per Gayan's description, it seems they are looking to do some
>> operations rather than just monitoring those sessions. Anyway, we can
>> incorporate these session analytics in the next version if it makes sense.
>>
>> [1] Analytics IS - Session Analytics View
>>
>> Thanks,
>> Mohan
>>
>>
>> On Tue, Jun 28, 2016 at 8:26 AM, Srinath Perera <srin...@wso2.com> wrote:
>>
>>> Is this done as part of IS analytics?
>>>
>>> --Srinath
>>>
>>> On Mon, Jun 27, 2016 at 3:53 PM, Gayan Gunawardana <ga...@wso2.com>
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> This feature will provide capability to admin users to monitor other
>>>> logged in users sessions and kill those sessions if necessary. Currently
>>>> logged in users sessions persist inside IDN_AUTH_SESSION_STORE table and
>>>> there is no mapping to authenticated user. We are planning to introduce new
>>>> table to maintain mapping between user and session id.
>>>>
>>>> CREATE TABLE IDN_USER_SESSION_DATA (
>>>>
>>>>   SESSION_ID VARCHAR (100) DEFAULT NULL,
>>>>
>>>>   USER_NAME VARCHAR(100) DEFAULT NULL,
>>>>
>>>>   USER_DOMAIN_NAME VARCHAR(100) DEFAULT NULL,
>>>>
>>>>   TENANT_NAME VARCHAR(100) DEFAULT NULL,
>>>>
>>>>   FOREIGN KEY (SESSION_ID) REFERENCES
>>>> IDN_AUTH_SESSION_STORE(SESSION_ID) ON DELETE CASCADE,
>>>>
>>>>   PRIMARY KEY (SESSION_ID, USER_NAME)
>>>>
>>>> );
>>>>
>>>> According to the latest implementation of session data persistence, we can
>>>> consider a particular SESSION ID active if the last record (sorted by time in
>>>> descending order) for that SESSION ID is a "STORE" operation. If there are
>>>> two store operations on the IDN_AUTH_SESSION_STORE table, there is a problem of
>>>> duplicating data in IDN_USER_SESSION_DATA. We need to find a way to handle
>>>> this situation.
>>>>
>>>> 1. Listing active session list for given user
>>>>
>>>> In the back end, distinct sessions will be identified using the SESSION_ID,
>>>> but in the front end we cannot display the SESSION_ID. In the front end, unique
>>>> sessions will be displayed according to the User-Agent.
>>>>
>>>> 2. Listing users who have active sessions
>>>>
>>>> Listing users who have at least one active session will be challenging,
>>>> since the IDN_AUTH_SESSION_STORE table is HUGE in a production system and
>>>> doing a JOIN operation with it would be costly.
>>>>
>>>> The username in the USER_SESSION_DATA is picked from the authenticated
>>>> user attribute available in the session context. This attribute is set
>>>> after all processing is done in the SequenceHandler; hence the authenticated
>>>> user here is actually a subject identifier, rather than a real username.
>>>>
>>>> Hence the username in the USER_SESSION_DATA table can be one of
>>>> following,
>>>> i. A Local User : who use the actual username as the subject identifier
>>>> ii. A Local User : who use a claim as the subject identifier
>>>> iii. A Federated User : who use federated subject or a claim
>>>>
>>>> Only in first scenario, it can find proper user store domain from the
>>>> username. In the third scenario we can store federated IDP name for
>>>> USER_DOMAIN_NAME.
>>>>
>>>> Handling\Storing usernames is a common thing we need to decide (in
>>>> OAuth IDN_OAUTH2_ACCESS_TOKEN we have the same problem), so we should
>>>> figure out a general schema for IDN_USER_S

Re: [Architecture] Monitor Logged In Users/Sessions

2016-06-27 Thread Srinath Perera
Is this done as part of IS analytics?

--Srinath

On Mon, Jun 27, 2016 at 3:53 PM, Gayan Gunawardana <ga...@wso2.com> wrote:

> Hi All,
>
> This feature will provide capability to admin users to monitor other
> logged in users sessions and kill those sessions if necessary. Currently
> logged in users sessions persist inside IDN_AUTH_SESSION_STORE table and
> there is no mapping to authenticated user. We are planning to introduce new
> table to maintain mapping between user and session id.
>
> CREATE TABLE IDN_USER_SESSION_DATA (
>
>   SESSION_ID VARCHAR (100) DEFAULT NULL,
>
>   USER_NAME VARCHAR(100) DEFAULT NULL,
>
>   USER_DOMAIN_NAME VARCHAR(100) DEFAULT NULL,
>
>   TENANT_NAME VARCHAR(100) DEFAULT NULL,
>
>   FOREIGN KEY (SESSION_ID) REFERENCES
> IDN_AUTH_SESSION_STORE(SESSION_ID) ON DELETE CASCADE,
>
>   PRIMARY KEY (SESSION_ID, USER_NAME)
>
> );
>
> According to the latest implementation of session data persistence, we can
> consider a particular SESSION ID active if the last record (sorted by time in
> descending order) for that SESSION ID is a "STORE" operation. If there are
> two store operations on the IDN_AUTH_SESSION_STORE table, there is a problem of
> duplicating data in IDN_USER_SESSION_DATA. We need to find a way to handle
> this situation.
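
As an illustration of that "last record per session is STORE" rule, an active-sessions-for-a-user lookup could be expressed roughly as below; the OPERATION and TIME_CREATED column names of IDN_AUTH_SESSION_STORE are assumptions here, and the class is purely for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ActiveSessionLookup {

    // A session is "active" if its newest IDN_AUTH_SESSION_STORE record is a STORE operation.
    private static final String SQL =
            "SELECT u.SESSION_ID FROM IDN_USER_SESSION_DATA u " +
            "JOIN IDN_AUTH_SESSION_STORE s ON s.SESSION_ID = u.SESSION_ID " +
            "WHERE u.USER_NAME = ? AND s.OPERATION = 'STORE' " +
            "AND s.TIME_CREATED = (SELECT MAX(i.TIME_CREATED) FROM IDN_AUTH_SESSION_STORE i " +
            "                      WHERE i.SESSION_ID = s.SESSION_ID)";

    public List<String> activeSessions(Connection conn, String userName) throws SQLException {
        List<String> sessions = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            ps.setString(1, userName);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    sessions.add(rs.getString("SESSION_ID"));
                }
            }
        }
        return sessions;
    }
}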
>
> 1. Listing active session list for given user
>
> In the back end, distinct sessions will be identified using the SESSION_ID,
> but in the front end we cannot display the SESSION_ID. In the front end, unique
> sessions will be displayed according to the User-Agent.
>
> 2. Listing users who have active sessions
>
> Listing users who have at least one active session will be challenging,
> since the IDN_AUTH_SESSION_STORE table is HUGE in a production system and
> doing a JOIN operation with it would be costly.
>
> The username in the USER_SESSION_DATA is picked from the authenticated
> user attribute available in the session context. This attribute is set
> after all processing is done in the SequenceHandler; hence the authenticated
> user here is actually a subject identifier, rather than a real username.
>
> Hence the username in the USER_SESSION_DATA table can be one of following,
> i. A Local User : who use the actual username as the subject identifier
> ii. A Local User : who use a claim as the subject identifier
> iii. A Federated User : who use federated subject or a claim
>
> Only in first scenario, it can find proper user store domain from the
> username. In the third scenario we can store federated IDP name for
> USER_DOMAIN_NAME.
>
> Handling\Storing usernames is a common thing we need to decide (in OAuth
> IDN_OAUTH2_ACCESS_TOKEN we have the same problem), so we should figure out
> a general schema for IDN_USER_SESSION_DATA that can be used for all types
> of users.
>
> Thanks,
> Gayan
> --
> Gayan Gunawardana
> Software Engineer; WSO2 Inc.; http://wso2.com/
> Email: ga...@wso2.com
> Mobile: +94 (71) 8020933
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Incremental Processing Support in DAS

2016-06-15 Thread Srinath Perera
t;>> values of time windows.
>>>>
>>>> With the newly introduced changes, the incremental window start time is set
>>>> based on the local time zone to avoid the aforementioned inconsistencies. The
>>>> syntax is as follows.
>>>>
>>>> CREATE TEMPORARY TABLE T1 USING CarbonAnalytics options (tableName
>>>> "test",
>>>> incrementalProcessing "T1, *WindowUnit*, WindowOffset")*;*
>>>>
>>>>
>>>> The WindowUnit concept is similar to the previous concept of TimeWindow,
>>>> but here the corresponding window time unit needs to be provided instead of
>>>> a time in millis. The units supported are SECOND, MINUTE, HOUR, DAY, MONTH,
>>>> YEAR.
>>>>
>>>> For example, let's say the server runs in IST time and the last processed event
>>>> time is 146538669 (2016/04/20 05:48:58 PM IST). If the WindowUnit is
>>>> set to DAY and the WindowOffset is set to 0 in the incremental table, the next
>>>> script execution will read data starting from 146109060 (2016/04/20
>>>> 12:00:00 AM IST).
>>>>
>>>> [1] [Analytics] Using UTC across all temporal analytics functions
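
For readers wanting to reproduce the arithmetic, here is a small illustrative sketch of how a DAY window start in the local time zone could be derived from the last processed timestamp (this is not the DAS implementation, just the floor-to-window idea described above):

import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;

public class IncrementalWindow {

    // Floor the last processed timestamp (millis) to the start of its DAY in the local
    // time zone, then move back by the configured WindowOffset in days.
    static long dayWindowStart(long lastProcessedMillis, int windowOffsetDays) {
        ZonedDateTime last = Instant.ofEpochMilli(lastProcessedMillis).atZone(ZoneId.systemDefault());
        ZonedDateTime windowStart = last.truncatedTo(ChronoUnit.DAYS).minusDays(windowOffsetDays);
        return windowStart.toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        // With offset 0, an event processed at 05:48:58 PM IST maps back to 12:00:00 AM IST
        // of the same day, as in the example above.
        System.out.println(dayWindowStart(System.currentTimeMillis(), 0));
    }
}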
>>>>
>>>>
>>>>
>>>> On Fri, Mar 25, 2016 at 10:59 AM, Gihan Anuruddha <gi...@wso2.com>
>>>> wrote:
>>>>
>>>>> Actually currently in ESB-analytics also we are using a similar
>>>>> mechanism. In there we keep 5 tables for each time unites(second, minute,
>>>>> hour, day, month) and we have windowOffset in correspondent to it time
>>>>> unite like in minute-60, hour- 3600 etc. We are only storing min, max, sum
>>>>> and count values only. So at a given time if we need the average, we 
>>>>> divide
>>>>> the sum with count and get that.
>>>>>
>>>>> On Fri, Mar 25, 2016 at 8:47 AM, Inosh Goonewardena <in...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 24, 2016 at 9:56 PM, Sachith Withana <sach...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Inosh,
>>>>>>>
>>>>>>> We wouldn't have to do that IMO.
>>>>>>>
>>>>>>> We can persist the total aggregate value upto currentTimeWindow -
>>>>>>> WindowOffset, along with the previous time window aggregation meta data 
>>>>>>> as
>>>>>>> well ( count, sum in the average aggregation case).
>>>>>>>
>>>>>>
>>>>>> Yep. That will do.
>>>>>>
>>>>>>>
>>>>>>> The previous total wouldn't be calculated again, it's the last two
>>>>>>> time windows ( including the current one) that we need to recalculate 
>>>>>>> and
>>>>>>> add it to the previous total.
>>>>>>>
>>>>>>> It works almost the same way as the current incremental processing
>>>>>>> table, but keeping more meta_data on the aggregation related values.
>>>>>>>
>>>>>>> @Anjana WDYT?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Sachith
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Mar 25, 2016 at 7:07 AM, Inosh Goonewardena <in...@wso2.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Sachith,
>>>>>>>>
>>>>>>>>
>>>>>>>> *WindowOffset*:
>>>>>>>>>
>>>>>>>>> Events might arrive late that would belong to a previous processed
>>>>>>>>> time window. To account to that, we have added an optional parameter 
>>>>>>>>> that
>>>>>>>>> would allow to
>>>>>>>>> process immediately previous time windows as well ( acts like a
>>>>>>>>> buffer).
>>>>>>>>> ex: If this is set to 1, apart from the to-be-processed data, data
>>>>>>>>> related to the previously processed time window will also be taken for
>>>>>>>>> processing.
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> This means, 

Re: [Architecture] Incremental Processing Support in DAS

2016-06-14 Thread Srinath Perera
ndow. To account to that, we have added an optional parameter 
>>>>>>> that
>>>>>>> would allow to
>>>>>>> process immediately previous time windows as well ( acts like a
>>>>>>> buffer).
>>>>>>> ex: If this is set to 1, apart from the to-be-processed data, data
>>>>>>> related to the previously processed time window will also be taken for
>>>>>>> processing.
>>>>>>>
>>>>>>
>>>>>>
>>>>>> This means, if window offset is set, already processed data will be
>>>>>> processed again. How does the aggregate functions works in this case?
>>>>>> Actually, what is the plan on supporting aggregate functions? Let's take
>>>>>> average function as an example. Are we going to persist sum and count
>>>>>> values per time windows and re-calculate whole average based on values of
>>>>>> all time windows? Is so, I would guess we can update previously processed
>>>>>> time windows sum and count values.
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 24, 2016 at 3:50 AM, Sachith Withana <sach...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Srinath,
>>>>>>>
>>>>>>> Please find the comments inline.
>>>>>>>
>>>>>>> On Thu, Mar 24, 2016 at 11:39 AM, Srinath Perera <srin...@wso2.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Sachith, Anjana,
>>>>>>>>
>>>>>>>> +1 for the backend model.
>>>>>>>>
>>>>>>>> Are we handling the case when last run was done, 25 minutes of data
>>>>>>>> is processed. Basically, next run has to re-run last hour and update 
>>>>>>>> the
>>>>>>>> value.
>>>>>>>>
>>>>>>>
>>>>>>> Yes. It will recalculate for that hour and will update the value.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> When does one hour counting starts? is it from the moment server
>>>>>>>> starts? That will be probabilistic when you restart. I think we need to
>>>>>>>> either start with know place ( midnight) or let user configure it.
>>>>>>>>
>>>>>>>
>>>>>>> In the first run all the data available are processed.
>>>>>>> After that it calculates the floor of last processed events'
>>>>>>> timestamp and gets the floor value (timestamp - timestamp%3600), that 
>>>>>>> would
>>>>>>> be used as the start of the time windows.
>>>>>>>
>>>>>>>>
>>>>>>>> I am bit concern about the syntax though. This only works for very
>>>>>>>> specific type of queries ( that includes aggregate and a group by). 
>>>>>>>> What
>>>>>>>> happen if user do this with a different query? Can we give clear error
>>>>>>>> message?
>>>>>>>>
>>>>>>>
>>>>>>> Currently the error messages are very generic. We will have to work
>>>>>>> on it to improve those messages.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Sachith
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> --Srinath
>>>>>>>>
>>>>>>>> On Mon, Mar 21, 2016 at 5:15 PM, Sachith Withana <sach...@wso2.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi all,
>>>>>>>>>
>>>>>>>>> We are adding incremental processing capability to Spark in DAS.
>>>>>>>>> As the first stage, we added time slicing to Spark execution.
>>>>>>>>>
>>>>>>>>> Here's a quick introduction into that.
>>>>>>>>>
>>>>>>>>> *Execution*:
>>>>>>>>>
>>>>>>>>> In the first run of the script, it will process all the data in
>>>>>>>>> the giv

Re: [Architecture] Securing communication between Hazelcast and WSO2 servers

2016-06-14 Thread Srinath Perera
I think it is only for the enterprise version.

The right deployment architecture is to secure it through a firewall.
Basically, only designated nodes should be able to connect to the Hazelcast
cluster.

--srinath



On Mon, Jun 13, 2016 at 10:36 PM, Hasitha Hiranya  wrote:

> Hi,
>
> When a cluster is setup out of WSO2 servers Hazelcast is configured to
>
> 1. Make cluster calls across nodes (push notifications)
> 2. Use as a distributed cache among the nodes
>
> Is there any possibility for a third-party Hazelcast client to come and
> consume this data or push irrelevant notifications to the nodes? Also, are
> we storing sensitive data in the HZ cache?
>
> Hazelcast communication happens inside the MZ, but some companies
> would still like to make it secure.
>
> SSL can be configured at the HZ client level as per [1], but I am not sure if it
> is only for the enterprise edition.
>
> [1]. http://docs.hazelcast.org/docs/3.5/manual/html/ssl.html
>
> Thanks
>
> --
> *Hasitha Abeykoon*
> Senior Software Engineer; WSO2, Inc.; http://wso2.com
> *cell:* *+94 719363063*
> *blog: **abeykoon.blogspot.com* 
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [DAS] Expose some debug level information through MBeans

2016-06-14 Thread Srinath Perera
Gihan and everyone

If you use metrics, you get other benefits such as some default charts, the
ability to send them to DAS, log-level control, configuration consistency
across the platform, etc.

If there is something missing in metrics, let's fix it. But we MUST not
create parallel things.

--Srinath


On Mon, Jun 13, 2016 at 5:32 PM, Gihan Anuruddha <gi...@wso2.com> wrote:

> IMO we can achieve high performance using AtomicLong. We will try to do a
> load test and see whether this causes a drastic performance degradation. If
> it does, we will try your suggestion.
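
If the AtomicLong does show contention in the load test, java.util.concurrent.atomic.LongAdder is the usual low-contention alternative for write-heavy counters. A minimal sketch of the kind of counter MBean being discussed (names and wiring are illustrative, not the actual DAS classes):

import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.LongAdder;
import javax.management.ObjectName;

// LongAdder keeps per-thread cells internally, so writers on the event-receiving
// path rarely contend; reads sum the cells on demand.
interface EventCounterMXBean {
    long getCurrentCount();
}

public class EventCounter implements EventCounterMXBean {

    private final LongAdder count = new LongAdder();

    public void increment() {            // called for every received event (or event chunk)
        count.increment();
    }

    @Override
    public long getCurrentCount() {      // read via JMX; a slight lag behind writers is acceptable
        return count.sum();
    }

    public void register() throws Exception {
        ManagementFactory.getPlatformMBeanServer()
                .registerMBean(this, new ObjectName("org.wso2.carbon.analytics:type=EventCounter"));
    }
}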
>
> On Mon, Jun 13, 2016 at 12:34 PM, Maninda Edirisooriya <mani...@wso2.com>
> wrote:
>
>>
>> On Thu, Jun 9, 2016 at 5:00 PM, Gihan Anuruddha <gi...@wso2.com> wrote:
>>
>>> Hi All,
>>>
>>> We have added a couple of MBeans for DAS to expose some debug-level
>>> information. These MBeans will be listed under the org.wso2.carbon.analytics
>>> subdomain.
>>>
>>> EVENT_COUNTER [int getCurrentCount()] - This will count all the events
>>> received by DAS regardless of stream or tenant. We want to add this per
>>> tenant per stream, but in order to do that we need to add a Map to the
>>> event persistence path, and that might add extra delay to the event-saving
>>> critical path. Because of that, we thought of adding only a counter
>>> (AtomicLong).
>>>
>>
>> Will that atomic counter affect the concurrent performance of event
>> receiving, as all the threads have to wait to increment it for each event /
>> event chunk? Maybe we can maintain a thread-local variable in each thread
>> and increment the atomic variable from time to time, which may anyway affect
>> the accuracy of the MBean.
>>
>>>
>>> RECEIVER_REMAINING_QUEUE_BUFFER_SIZE_IN_BYTES
>>> ​[
>>> long getRemainingBufferCapacityInBytes(int tenantId),
>>> int getCurrentQueueSize(int tenantId)]​
>>> ​ - ​You can use this Mbean to get remaining buffer size in disruptor and
>>> current queue size.
>>>
>>> LAST_PROCESSED_TIMESTAMP[long getLastProcessedTimestamp(int tenantId,
>>> String id, boolean primary)]
>>> ​ - This will return last saved incremental processed timestamp for
>>> given spark table.
>>>
>>> ANALYTICS_SCRIPT_LAST_EXECUTION_START_TIME[long
>>> getScriptLastExecutionStartTime(int tenantId, String scriptName)]
>>> ​ - This will return latest execution start time of the given Spark
>>> script.
>>>
>>> ​Please let me know any other information that you think better expose
>>> through Mbeans.
>>>
>>> Regards,
>>> Gihan
>>>
>>> --
>>> W.G. Gihan Anuruddha
>>> Senior Software Engineer | WSO2, Inc.
>>> M: +94772272595
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>
>
> --
> W.G. Gihan Anuruddha
> Senior Software Engineer | WSO2, Inc.
> M: +94772272595
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] What should be the default MySQL engine to be used in DAS?

2016-05-26 Thread Srinath Perera
Please also update the Docs to reflect this.

--Srinath

On Thu, May 26, 2016 at 12:29 PM, Inosh Goonewardena <in...@wso2.com> wrote:

> Hi Srinath,
>
> On Thu, May 26, 2016 at 12:09 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Inosh,
>>
>> Good catch!! I am +1. Can we do this just by configs or do we need a
>> patch? If so can we patch before we release?
>>
>
> We can do this by configuration change.
>
>
>>
>> Anjana, cannot we use HDFS for EVENT_STORE and used MySQL only for
>> processed data store? ( long term)
>>
>
> We can. This is the best approach to use without affecting receiver
> performance while running spark jobs in parallel.
>
>
>>
>> --Srinath
>>
>> On Wed, May 25, 2016 at 8:10 PM, Inosh Goonewardena <in...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> At the moment DAS support both MyISAM and InnoDB, but configured to use
>>> MyISAM by default.
>>>
>>> There are several differences between MyISAM and InnoDB, but what is
>>> most relevant with regard to DAS is the difference in concurrency.
>>> Basically, MyISAM uses table-level locking and InnoDB uses row-level
>>> locking. So, with MyISAM, if we are running Spark queries while publishing
>>> data to DAS, in higher TPS it can lead to issues due to the inability of
>>> obtaining the table lock by DAL layer to insert data to the table while
>>> Spark reading from the same table.
>>>
>>> However, on the other hand, with InnoDB write speed is considerably slow
>>> (because it is designed to support transactions), so it will affect the
>>> receiver performance.
>>>
>>> One option we have in DAS is, we can use two DBs to keep incoming
>>> records and processed records, i.e., EVENT_STORE and PROCESSED_DATA_STORE.
>>>
>>> For ESB Analytics, we can configure to use MyISAM for EVENT_STORE and
>>> InnoDB for PROCESSED_DATA_STORE. It is because in ESB analytics,
>>> summarizing up to minute level is done by real time analytics and Spark
>>> queries will read and process data using minutely (and higher) tables which
>>> we can keep in PROCESSED_DATA_STORE. Since raw table(which data receiver
>>> writes data) is not being used by Spark queries, the receiver performance
>>> will not be affected.
>>>
>>> However, in most cases, Spark queries may be written to read data directly
>>> from raw tables. As mentioned above, with MyISAM this could lead to
>>> performance issues if data publishing and spark analytics happens in
>>> parallel. So considering that I think we should change the default
>>> configuration to use InnoDB. WDYT?
>>>
>>> --
>>> Thanks & Regards,
>>>
>>> Inosh Goonewardena
>>> Associate Technical Lead- WSO2 Inc.
>>> Mobile: +94779966317
>>>
>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> Thanks & Regards,
>
> Inosh Goonewardena
> Associate Technical Lead- WSO2 Inc.
> Mobile: +94779966317
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] What should be the default MySQL engine to be used in DAS?

2016-05-26 Thread Srinath Perera
Hi Inosh,

Good catch!! I am +1. Can we do this just by configs or do we need a patch?
If so can we patch before we release?

Anjana, cannot we use HDFS for EVENT_STORE and used MySQL only for
processed data store? ( long term)

--Srinath

On Wed, May 25, 2016 at 8:10 PM, Inosh Goonewardena  wrote:

> Hi,
>
> At the moment DAS support both MyISAM and InnoDB, but configured to use
> MyISAM by default.
>
> There are several differences between MyISAM and InnoDB, but what is most
> relevant with regard to DAS is the difference in concurrency. Basically,
> MyISAM uses table-level locking and InnoDB uses row-level locking. So, with
> MyISAM, if we are running Spark queries while publishing data to DAS, in
> higher TPS it can lead to issues due to the inability of obtaining the
> table lock by DAL layer to insert data to the table while Spark reading
> from the same table.
>
> However, on the other hand, with InnoDB write speed is considerably slow
> (because it is designed to support transactions), so it will affect the
> receiver performance.
>
> One option we have in DAS is, we can use two DBs to keep incoming
> records and processed records, i.e., EVENT_STORE and PROCESSED_DATA_STORE.
>
> For ESB Analytics, we can configure to use MyISAM for EVENT_STORE and
> InnoDB for PROCESSED_DATA_STORE. It is because in ESB analytics,
> summarizing up to minute level is done by real time analytics and Spark
> queries will read and process data using minutely (and higher) tables which
> we can keep in PROCESSED_DATA_STORE. Since raw table(which data receiver
> writes data) is not being used by Spark queries, the receiver performance
> will not be affected.
>
> However, in most cases, Spark queries may be written to read data directly
> from raw tables. As mentioned above, with MyISAM this could lead to
> performance issues if data publishing and spark analytics happens in
> parallel. So considering that I think we should change the default
> configuration to use InnoDB. WDYT?
>
> --
> Thanks & Regards,
>
> Inosh Goonewardena
> Associate Technical Lead- WSO2 Inc.
> Mobile: +94779966317
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Comments about IoTS Docs

2016-05-17 Thread Srinath Perera
Agreed. I think the challenge is making it clear that the power of a full-blown
analytics platform is behind the IoT product. I think from the doc and the
site, it is not apparent.

Also, in a future version, the IoT tooling can carry the analytics tooling
integrated.

--Srinath

On Tue, May 17, 2016 at 3:47 PM, Sumedha Rubasinghe <sume...@wso2.com>
wrote:

> Srinath,
> We are already doing this.
>
> For device type analytics, you can include those queries as part of device
> plugin. We deploy them as CAR files into DAS.
> For example
> https://docs.wso2.com/display/IoTS100/Device+Manufacturer#DeviceManufacturer-WritingbatchanalyticsforConnectedCup
>
> For anything beyond (full blown scenarios where device type view is too
> low level), you can directly write it.
>
> On Tue, May 17, 2016 at 12:50 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Thanks Sumedha.
>>
>> Suho, are we going to support users writing CEP and Spark queries on top of
>> the data IoTS captures? (Or is that for future releases?) IMO that will add a
>> lot of value to the story.
>>
>> --Srinath
>>
>> On Tue, May 17, 2016 at 10:59 AM, Sumedha Rubasinghe <sume...@wso2.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, May 17, 2016 at 10:45 AM, Srinath Perera <srin...@wso2.com>
>>> wrote:
>>>
>>>>
>>>>1. When you land on the doc, it is not clear where you should go
>>>>
>>>>  Planning to fix this Srinath.
>>>
>>>>
>>>>1. There is a quick start guide and "Getting started with IoTS"
>>>>server
>>>>2. in #2 "Start the Virtual Fire-Alarm", it does not tell you have
>>>>to go to device section to find "virtual fire alarm"
>>>>3. When I said srinathFirealarm, it does not like the name but did
>>>>not tell me why
>>>>
>>>> I used the same name (srinathFirealarm) and it worked. Did you try on a
>>> latest nightly build?
>>>
>>>>
>>>>1. Download instructions and "Example: Navigate to the device agent
>>>>file that is in the /device_agents/virtual_firealarm 
>>>> directory."
>>>>does not match
>>>>
>>>> This is just giving an example directory to which you may have
>>> extracted the device agent.
>>>
>>>
>>>>1. When I click on buzzer, why should I pick a protocol and state?
>>>>Cannot we pick protocol automatically ( based on how device connected) 
>>>> and
>>>>state via a drop down box?
>>>>
>>>>
>>> We will fix this to be a drop down with default selected. This is
>>> because we have included both MQTT and XMPP support in this example.
>>>
>>>
>>>>
>>>>1. [image: Inline image 1]
>>>>2. At the end of the "Device owner" tutorial, can we point to how to write
>>>>a new device? Also can we have a mobile app that will make your phone a
>>>>device?
>>>>
>>>> Android Sense is this one.
>>> It's basically using mobile phone as a gateway with bunch of sensors
>>> connected.
>>>
>>>
>>>
>>>
>>>> BTW, demo is nice!!
>>>>
>>>> --Srinath
>>>>
>>>> --
>>>> 
>>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>>> Site: http://home.apache.org/~hemapani/
>>>> Photos: http://www.flickr.com/photos/hemapani/
>>>> Phone: 0772360902
>>>>
>>>
>>>
>>>
>>> --
>>> /sumedha
>>> m: +94 773017743
>>> b :  bit.ly/sumedha
>>>
>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> /sumedha
> m: +94 773017743
> b :  bit.ly/sumedha
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Comments about IoTS Docs

2016-05-17 Thread Srinath Perera
Thanks Sumedha.

Suho, are we going to support users to write queries CEP, Spark on top of
this data IoTS captured? ( or is it for future releases?) IMO that will add
lot of values to the story.

--Srinath

On Tue, May 17, 2016 at 10:59 AM, Sumedha Rubasinghe <sume...@wso2.com>
wrote:

>
>
> On Tue, May 17, 2016 at 10:45 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>>
>>1. When you land on the doc, it is not clear where you should go
>>
>>  Planning to fix this Srinath.
>
>>
>>1. There is a quick start guide and "Getting started with IoTS" server
>>2. in #2 "Start the Virtual Fire-Alarm", it does not tell you have to
>>go to device section to find "virtual fire alarm"
>>3. When I said srinathFirealarm, it does not like the name but did
>>not tell me why
>>
>> I used the same name (srinathFirealarm) and it worked. Did you try on a
> latest nightly build?
>
>>
>>1. Download instructions and "Example: Navigate to the device agent
>>file that is in the /device_agents/virtual_firealarm 
>> directory."
>>does not match
>>
>> This is just giving an example directory to which you may have extracted
> the device agent.
>
>
>>1. When I click on buzzer, why should I pick a protocol and state?
>>Cannot we pick protocol automatically ( based on how device connected) and
>>state via a drop down box?
>>
>>
> We will fix this to be a drop down with default selected. This is because
> we have included both MQTT and XMPP support in this example.
>
>
>>
>>1. [image: Inline image 1]
>>2. At the end of the "Device owner" tutorial, can we point to how to write a
>>new device? Also can we have a mobile app that will make your phone a
>>device?
>>
>> Android Sense is this one.
> It's basically using mobile phone as a gateway with bunch of sensors
> connected.
>
>
>
>
>> BTW, demo is nice!!
>>
>> --Srinath
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> /sumedha
> m: +94 773017743
> b :  bit.ly/sumedha
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Introducing Secure-Vault support to C5

2016-05-17 Thread Srinath Perera
in Carbon.yml configuration. First we add a unique
>>>> alias as the value to the Password as below [3]. Second we add that unique
>>>> alias with its plain text password to *secrets*.properties file[4].
>>>>
>>>> CipherTool will encrypt the plain-text password and replace the
>>>> plain-text password with the encrypted value. (In c4 we have added
>>>> plain-text passwords within square brackets. If not they are identified as
>>>> encrypted values).
>>>>
>>>> When loading the carbon.yml (or any other custom configuration file),
>>>> we read the secured values using secure-vault service. This secure vault
>>>> service will either return the password from the *secrets*.properties
>>>> file if the secret is not encrypted, OR return the encrypted value.
>>>>
>>>> [3]
>>>> 
>>>>
>>>>id: carbon-kernel   #Value to uniquely identify a server
>>>>name: WSO2 Carbon Kernel#Server Name
>>>>version: 5.1.0-SNAPSHOT  #Server Version
>>>>tenant: default  #Tenant Name
>>>>
>>>># Keystore used by this server
>>>>Security:
>>>>   Keystore
>>>>  Password: Carbon.Security.Keystore.Password
>>>>
>>>>  
>>>> ###
>>>>
>>>> [4]  Carbon.Security.Keystore.Password=[wso2carbon]
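
For illustration, a minimal sketch of the resolution logic described above. This is
NOT the actual Carbon 5 SecureVault API; the class and method names are hypothetical,
and only the "bracketed value means plain text" convention is taken from this thread.

// Values wrapped in [ ] in secrets.properties are treated as plain text;
// anything else is assumed to be ciphertext and is decrypted with the keystore.
import java.io.FileInputStream;
import java.util.Properties;

public class SecretResolverSketch {

    private final Properties secrets = new Properties();

    public SecretResolverSketch(String secretsFile) throws Exception {
        try (FileInputStream in = new FileInputStream(secretsFile)) {
            secrets.load(in);
        }
    }

    /** Resolves an alias such as "Carbon.Security.Keystore.Password". */
    public char[] resolve(String alias) {
        String value = secrets.getProperty(alias);
        if (value == null) {
            throw new IllegalArgumentException("No secret found for alias: " + alias);
        }
        if (value.startsWith("[") && value.endsWith("]")) {
            // Plain-text secret, not yet encrypted by the cipher tool.
            return value.substring(1, value.length() - 1).toCharArray();
        }
        return decrypt(value); // hypothetical keystore-based decryption
    }

    private char[] decrypt(String cipherText) {
        // Placeholder: a real implementation would use the server keystore
        // (e.g. an RSA private key) to decrypt the Base64-encoded value.
        throw new UnsupportedOperationException("keystore decryption not shown");
    }
}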
>>>>
>>>>
>>>> *New design decisions taken compared to C4 SecureVault implementation:*
>>>>
>>>>1. We have removed the usage of cipher-tool.properties file. (This
>>>>file was used to keep the alias, the location to the configuration file,
>>>>and the xpath to the secret element in the configuration file).
>>>>2. We can support any format of configuration file with this model
>>>>as we only care about the secret-key that we define in the 
>>>> *secrets*.properties
>>>>file and do not depend on the xpath to find the location of the secret
>>>>element.
>>>>
>>>> Thanks,
>>>> Nipuni
>>>>
>>>> --
>>>> Nipuni Perera
>>>> Software Engineer; WSO2 Inc.; http://wso2.com
>>>> Email: nip...@wso2.com
>>>> Git hub profile: https://github.com/nipuni
>>>> Blog : http://nipunipererablog.blogspot.com/
>>>> Mobile: +94 (71) 5626680
>>>> <http://wso2.com>
>>>>
>>>>
>>>
>>>
>>> --
>>> Sameera Jayasoma,
>>> Software Architect,
>>>
>>> WSO2, Inc. (http://wso2.com)
>>> email: same...@wso2.com
>>> blog: http://blog.sameera.org
>>> twitter: https://twitter.com/sameerajayasoma
>>> flickr: http://www.flickr.com/photos/sameera-jayasoma/collections
>>> Mobile: 0094776364456
>>>
>>> Lean . Enterprise . Middleware
>>>
>>>
>>
>> Regards,
>> Nira
>> --
>>
>> *Niranjan Karunanandham*
>> Senior Software Engineer - WSO2 Inc.
>> WSO2 Inc.: http://www.wso2.com
>>
>
>
>
> --
> With regards,
> *Manu*ranga Perera.
>
> phone : 071 7 70 20 50
> mail : m...@wso2.com
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Improving error handling at different layers in ESB

2016-05-17 Thread Srinath Perera
On Tue, May 17, 2016 at 11:28 AM, Isuru Udana <isu...@wso2.com> wrote:

> Hi Srinath,
>
> On Tue, May 17, 2016 at 11:13 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Kasun, what is the answer?
>>
>> Problem is that client does not know when  the result will come back.
>>
>>
>>1. If the client call starts only one sequence, and it has failed, then
>>we can stop the client connection.
>>2. If the client call has started multiple sequences, the error handler
>>should decide what to do. We should provide a killCurrentCall() kind of
>>sequence to let the error handler tell the engine to clean up.
>>3. If all else fails, then the timeout will handle that.
>>
>> WDYT?
>>
> I am not sure whether I got your idea correctly.
> IMO, irrespective of number of sequences in the message flow, we should
> invoke the fault handler for all errors occurring at different layers and
> at the fault handler we need to allow user to decide what's need to be
> done.
>

Two concerns in this case

1. In the default case, where no error handler has been defined, the scenario
should still work and print correct errors.
2. When multiple threads or sequences have been started, we need to provide a
helper function to stop and clean up the other threads if the user needs to handle it.

I am trying to address above two.



> Currently for all the errors happens at the mediation layer, fault
> sequence is getting executed.
> But there can be situations where fault handler is not executed for errors
> happens at the lower layers at the transport sender side. That is the
> problem we need to test and address carefully.
>
Can we give a Java-level callback for this case?
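
For illustration, a hedged sketch of what such a callback could look like. Nothing
here exists in Synapse today; the interface and method names are hypothetical.

// A hypothetical Java-level callback for errors raised below the mediation
// layer (e.g. in the transport sender), so they can still reach fault handling.
public interface MediationErrorCallback {

    /** Invoked for errors occurring outside the mediation layer (E3-type errors). */
    void onTransportError(String messageId, Throwable cause);
}

class CleanupErrorCallback implements MediationErrorCallback {

    @Override
    public void onTransportError(String messageId, Throwable cause) {
        // 1. Log the error so the default case (no fault sequence defined)
        //    still produces a meaningful message.
        System.err.println("Transport error for message " + messageId + ": " + cause);

        // 2. Release the client connection and stop any sibling sequences
        //    started for the same call (the killCurrentCall() idea above).
        // killCurrentCall(messageId);  // hypothetical helper
    }
}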


>
> Thanks.
>
>
>>
>> --Srinath
>>
>> On Tue, May 17, 2016 at 10:33 AM, Artur Reaboi <artur.rea...@egov.md>
>> wrote:
>>
>>> +1
>>>
>>>
>>>
>>> Are there any workarounds recommended for E3 type of errors on current
>>> WSO2 ESB 4.9.0?
>>>
>>>
>>>
>>> Best regards,
>>>
>>> *Artur Reaboi*
>>> *Enterprise Architect | e-Government Center*
>>>
>>> Tel +373 22 250 487 |Mob +373 79 440 064 |Skype reaboi.artur
>>>
>>>
>>>
>>> *From:* Architecture [mailto:architecture-boun...@wso2.org] *On Behalf
>>> Of *Kasun Indrasiri
>>> *Sent:* marți, 17 mai 2016 00:07
>>> *To:* architecture <architecture@wso2.org>
>>> *Subject:* [Architecture] Improving error handling at different layers
>>> in ESB
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> ESB has to deal with different types of errors occurring at multiple
>>> layers in ESB message flow. As depicted in the following diagram, there can
>>> be different places where an error could happen.
>>>
>>>
>>>
>>> At the moment, the current design doesn't seem to ensure that all
>>> errors are propagated back to the other layers. For instance, if something
>>> goes wrong at the target side (E3 type errors) there's no generic place
>>> that all such errors are caught or handled and client may be holding up the
>>> connection to ESB. In most cases, such errors should be propagated to the
>>> associated fault sequence and error handling happens in the fault sequence.
>>>
>>> Therefore we need to carefully evaluate the possible places that the
>>> errors can occur and handle them at an unified layer.
>>>
>>>
>>>
>>>
>>> ​Thanks,
>>>
>>> Kasun
>>>
>>>
>>>
>>> --
>>>
>>> Kasun Indrasiri
>>> Software Architect
>>> WSO2, Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> cell: +94 77 556 5206
>>> Blog : http://kasunpanorama.blogspot.com/
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>http://people.apache.org/~hemapani/
>>http://srinathsview.blogspot.com/
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Isuru Udana*
> Associate Technical Lead
> WSO2 Inc.; http://wso2.com
> email: isu...@wso2.com cell: +94 77 3791887
> blog: http://mytecheye.blogspot.com/
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Improving error handling at different layers in ESB

2016-05-16 Thread Srinath Perera
Kasun, what is the answer?

Problem is that client does not know when  the result will come back.


   1. If the client call starts only one sequence, and it has failed, then we
   can stop the client connection.
   2. If the client call has started multiple sequences, the error handler should
   decide what to do. We should provide a killCurrentCall() kind of sequence
   to let the error handler tell the engine to clean up.
   3. If all else fails, then the timeout will handle that.

WDYT?

--Srinath

On Tue, May 17, 2016 at 10:33 AM, Artur Reaboi <artur.rea...@egov.md> wrote:

> +1
>
>
>
> Are there any workarounds recommended for E3 type of errors on current
> WSO2 ESB 4.9.0?
>
>
>
> Best regards,
>
> *Artur Reaboi*
> *Enterprise Architect | e-Government Center*
>
> Tel +373 22 250 487 |Mob +373 79 440 064 |Skype reaboi.artur
>
>
>
> *From:* Architecture [mailto:architecture-boun...@wso2.org] *On Behalf Of
> *Kasun Indrasiri
> *Sent:* marți, 17 mai 2016 00:07
> *To:* architecture <architecture@wso2.org>
> *Subject:* [Architecture] Improving error handling at different layers in
> ESB
>
>
>
> Hi,
>
>
>
> ESB has to deal with different types of errors occurring at multiple
> layers in ESB message flow. As depicted in the following diagram, there can
> be different places where an error could happen.
>
>
>
> At the moment, the current design doesn't seem to ensure that all errors
> are propagated back to the other layers. For instance, if something goes
> wrong at the target side (E3 type errors) there's no generic place that all
> such errors are caught or handled and client may be holding up the
> connection to ESB. In most cases, such errors should be propagated to the
> associated fault sequence and error handling happens in the fault sequence.
>
> Therefore we need to carefully evaluate the possible places that the
> errors can occur and handle them at an unified layer.
>
>
>
>
> ​Thanks,
>
> Kasun
>
>
>
> --
>
> Kasun Indrasiri
> Software Architect
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> cell: +94 77 556 5206
> Blog : http://kasunpanorama.blogspot.com/
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Comments about IoTS Docs

2016-05-16 Thread Srinath Perera
   1. When you land on the doc, it is not clear where you should go
   2. There is a quick start guide and "Getting started with IoTS" server
   3. In #2 "Start the Virtual Fire-Alarm", it does not tell you that you have to go
   to the device section to find "virtual fire alarm"
   4. When I said srinathFirealarm, it did not like the name but did not
   tell me why
   5. Download instructions and "Example: Navigate to the device agent file
   that is in the /device_agents/virtual_firealarm directory." do
   not match
   6. When I click on the buzzer, why should I pick a protocol and state?
   Can't we pick the protocol automatically (based on how the device is connected) and
   the state via a drop-down box? [image: Inline image 1]
   7. At the end of the "Device owner" tutorial, can we point to how to write a
   new device? Also can we have a mobile app that will make your phone a
   device?

BTW, demo is nice!!

--Srinath

-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [MB][Hazelcast] Minimum node count and network partition detection.

2016-05-16 Thread Srinath Perera
Hi Ramith,

+1. This minimum number of nodes is what is called a Quorum:
https://en.wikipedia.org/wiki/Quorum_(distributed_computing).

We can support a 2-node mode where we do not support partition tolerance.
However, IMO that is a bad idea as most users would not understand or look at
the details and would run into trouble.

--Srinath

On Mon, May 16, 2016 at 1:41 PM, Ramith Jayasinghe  wrote:

> Hi All,
>
> Global state of the MB cluster becomes inconsistent, when the network
> becomes partitioned (split brains) in previous MB version(s). So as a
> solution we propose following,
>  1) a MB cluster cannot go below a defined number ( a.k.a: minimum cluster
> size)
>  2) During a network partition if node count (/size) of the particular
> partition is less than 'minimum cluster size' then that partition(s)
>  2.1) will stop accepting incoming traffic/connections
>  2.2) disconnect all active connections (
> publishers/subscribers)
>
> So idea is to let only a single partition ( which has the cluster size >=
> minimum cluster size) keep working while other(s) stop working.
> Therefore, choosing the number  'minimum cluster size' is important when
> deploying MB.
> otherwise user will have multiple network partitions ( where size >=
> minimum cluster size) working in parallel creating the problem we are
> trying to solve here.
>
> So here's the way to pick the number:
>
> | Cluster size | Minimum Node Count |
> |--------------|--------------------|
> | 2            | 2                  |
> | 3            | 2                  |
> | 4            | 3                  |
> | 5            | 3                  |
> | N            | (N / 2) + 1        |
>
>
> So this will have a direct effect on the minimum HA deployment for MB, which
> used to be 2.
> Why?
> Suppose users now deploy a 2-node MB cluster with this feature enabled.
> Then during a network partition both nodes will stop working. This may be
> fine since it will make the MB cluster reliable, but from the user's point of view it is
> a complete outage (since none of the nodes accept traffic).
>
> Therefore the minimum HA node count for MB now becomes 3.
> When the cluster size is 3, it will be able to withstand 1 node being in a
> network partition (and the other 2 nodes will keep working).
>
> thoughts?
>
>
> Jira: https://wso2.org/jira/browse/MB-1664
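
A minimal sketch of the rule in the table above, as plain Java. The Hazelcast
membership-listener wiring that would call it is omitted, and the method names are
illustrative only.

// A partition keeps serving traffic only if it still sees a majority of the
// configured cluster.
public final class QuorumCheck {

    /** Minimum node count for a cluster of the given size: (N / 2) + 1. */
    public static int minimumNodeCount(int configuredClusterSize) {
        if (configuredClusterSize < 2) {
            throw new IllegalArgumentException("cluster size must be >= 2");
        }
        return (configuredClusterSize / 2) + 1;
    }

    /** Called whenever cluster membership changes (e.g. from a membership listener). */
    public static boolean shouldAcceptTraffic(int configuredClusterSize, int visibleMembers) {
        return visibleMembers >= minimumNodeCount(configuredClusterSize);
    }

    public static void main(String[] args) {
        // For a 5-node cluster, a partition of 2 nodes must stop accepting traffic.
        System.out.println(shouldAcceptTraffic(5, 2)); // false
        System.out.println(shouldAcceptTraffic(5, 3)); // true
    }
}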
>
> --
> Ramith Jayasinghe
> Technical Lead
> WSO2 Inc., http://wso2.com
> lean.enterprise.middleware
>
> E: ram...@wso2.com
> P: +94 772534930
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] OOM Issues and HazelCast behaviour

2016-05-10 Thread Srinath Perera
+1. When a server hits an issue from which it cannot recover and continue to work
meaningfully, it is best to shut down or restart. That approximates crash-failure
behaviour (which is a good thing).

On Tue, May 10, 2016 at 11:40 AM, Ramith Jayasinghe <ram...@wso2.com> wrote:

> HI Azeez/Sameera,
>
>  Having encountered some OOM issues (relating to HZ) I looked at what
> Hazelcast does in face of GC errors [1]. It will drop all connections,
> threads, and shuts it self down.
>
> Now, think about one of our servers (say MB) facing an OOM and the embedded HZ
> instance shutting itself down. I propose that when this happens the entire server
> should stop its processing intelligently (if not shut down).
> The rationale is that otherwise one sub-system in the server shuts down while other parts
> of the server don't care, which is wrong. As for MB, it tries to stop
> processing in a meaningful manner, and I argue this is something any
> server should do if it uses HZ clustering.
>
> thoughts?
>
>
> [1]
> https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/java/com/hazelcast/instance/DefaultOutOfMemoryHandler.java
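
A sketch of tying the whole server's fate to Hazelcast's OOM handling, assuming the
OutOfMemoryHandler hook from the class family referenced in [1]. Exact signatures may
differ between Hazelcast versions, and gracefulServerShutdown() is a hypothetical
MB-side helper.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.OutOfMemoryHandler;

public class ServerAwareOutOfMemoryHandler extends OutOfMemoryHandler {

    @Override
    public void onOutOfMemory(OutOfMemoryError oome, HazelcastInstance[] instances) {
        // Let the Hazelcast side go down first: drop connections, stop instances.
        for (HazelcastInstance instance : instances) {
            tryShutdown(instance);
        }
        // Then stop the rest of the server meaningfully instead of limping on.
        gracefulServerShutdown();
    }

    private void tryShutdown(HazelcastInstance instance) {
        try {
            instance.getLifecycleService().terminate();
        } catch (Throwable ignored) {
            // Best effort under OOM conditions.
        }
    }

    private void gracefulServerShutdown() {
        // Hypothetical: stop accepting traffic, flush what can be flushed, exit.
        Runtime.getRuntime().halt(1);
    }

    public static void register() {
        Hazelcast.setOutOfMemoryHandler(new ServerAwareOutOfMemoryHandler());
    }
}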
>
> --
> Ramith Jayasinghe
> Technical Lead
> WSO2 Inc., http://wso2.com
> lean.enterprise.middleware
>
> E: ram...@wso2.com
> P: +94 772534930
>
> ___
> Dev mailing list
> d...@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] [Hamming] Audit logs in C5.

2016-05-02 Thread Srinath Perera
+1 for doing this and using XDAS and then adding analytics based on this to
security analytics

Correlation like Prabath mentioned, we can do at DAS

--Srinath

On Tue, May 3, 2016 at 3:22 AM, Prabath Siriwardana 
wrote:

> I guess one more thing we miss in C4 logs is how to correlate all the
> logs related to a given message that enters the server.
>
> Thanks & regards,
> -Prabath
>
> On Thu, Apr 28, 2016 at 1:39 AM, Sameera Jayasoma 
> wrote:
>
>> Hi All,
>>
>> Audit logs or Audit trails contain set of log entries which describe a
>> sequence of actions which have occurred over a time period. From audit
>> logs, it is possible to trace all the actions of a single user or all the
>> actions or changes introduced to a certain module in the system etc.  E.g.
>> It captures all the actions of a single user from the point he logs in to
>> the application.
>>
>> In previous versions of the Carbon platform, we only had a logger called
>> AUDIT and a separate appender which appends audit logs to separate log
>> file.
>>
>> The only drawback of this approach is that we don't have a proper way to
>> capture contextual information. In each and every audit log, we need to
>> capture logged-in user details, the IP address of the client, etc. In the previous
>> approach developers had to log this information with each and every audit
>> log attempt. This is suboptimal IMO; we need to implement a mechanism where
>> developers give only the log message and the system appends all the
>> other information to the log. I see a few ways to implement this.
>>
>> 1) Write a custom appender which write audit logs to the file with
>> contextual information.
>> 2) Provide API to log audit logs. We can extract contextual information
>> from the CarbonContext in both of these methods.
>>
>> Any thoughts.
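
A sketch of option 2 above: a small audit API that enriches each entry with
contextual information so developers only pass the message. The user/tenant/IP
lookups are placeholders for whatever CarbonContext or the request context exposes;
the method names here are assumptions, and only the SLF4J/MDC calls are real APIs.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public final class AuditLog {

    private static final Logger AUDIT = LoggerFactory.getLogger("AUDIT_LOG");

    private AuditLog() { }

    public static void log(String action) {
        try {
            // Contextual fields appended by the system, not by the caller.
            MDC.put("user", currentUser());
            MDC.put("tenant", currentTenantDomain());
            MDC.put("clientIp", currentClientIp());
            AUDIT.info(action);
        } finally {
            MDC.clear();
        }
    }

    // The three methods below stand in for CarbonContext / request lookups.
    private static String currentUser() { return "unknown"; }
    private static String currentTenantDomain() { return "carbon.super"; }
    private static String currentClientIp() { return "0.0.0.0"; }
}

A pattern layout such as "%X{user} %X{tenant} %X{clientIp} %m%n" on the AUDIT_LOG
appender would then emit the contextual fields with every entry.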
>>
>> Thanks,
>> Sameera.
>>
>> --
>> Sameera Jayasoma,
>> Software Architect,
>>
>> WSO2, Inc. (http://wso2.com)
>> email: same...@wso2.com
>> blog: http://blog.sameera.org
>> twitter: https://twitter.com/sameerajayasoma
>> flickr: http://www.flickr.com/photos/sameera-jayasoma/collections
>> Mobile: 0094776364456
>>
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Thanks & Regards,
> Prabath
>
> Twitter : @prabath
> LinkedIn : http://www.linkedin.com/in/prabathsiriwardena
>
> Mobile : +1 650 625 7950
>
> http://blog.facilelogin.com
> http://blog.api-security.org
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Implementing proper security model for dashboard server

2016-04-28 Thread Srinath Perera
Since gadgets are deployed as artifacts, shouldn't this information be
defined at the gadget level?

Have we thought about what kind of technologies we will support for
security? For example, maybe we can support data retrieval using an OAuth
token in addition to basic auth over HTTPS.
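
A sketch of the OAuth-token option: the gadget's data request carries a bearer token
instead of basic-auth credentials. The endpoint URL and token are placeholders; the
actual DAS analytics REST paths are not spelled out here.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class GadgetDataClient {

    public static String fetch(String endpoint, String accessToken) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("GET");
        // The backend validates this token and decides which tables are visible.
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }
}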

--Srinath


On Thu, Apr 28, 2016 at 1:00 AM, Geesara Prathap <gees...@wso2.com> wrote:

> *Requirement:*
> *When dashboard retrieving data from some REST APIs which are secured, we
> do require proper security model in place in order to identify who can
> access this dashboard and at which level should it be done. In addition,how
> can dashboard be going to communicate with respective REST API securely?*
>
>
>
>  Figure 01:
> Dashboard Server
>
>
> Data providers for gadgets need to communicate with DS securely. In most of
> the cases, data providers are REST APIs. There might also be situations
> where the dashboard will be getting data from several different data providers.
> From the DS perspective, there must be an effective way to tackle these
> security-related issues, at least up to some extent. Referring to figure 1, we have
> three places where we can address these issues.
>
>- gadget level
>- per-dashboard level
>- dashboard server level
>
> What would be the proper place at which we can address these security concerns in a
> proper manner? If we try to address this at the gadget level, it will be too
> much granularity, which may prevent acceptable performance of
> data retrieval from data providers as well as put too much load on the DS
> itself. We also have problems with user authentication and authorization at this
> level, as well as at the per-dashboard level. The dashboard server level would be the
> ideal place at which we can address all those security concerns in a
> conventional manner. Any advice and suggestions will be greatly appreciated
> regarding this.
>
> Thanks,
> Geesara,
>
> --
> Geesara Prathap Kulathunga
> Software Engineer
> WSO2 Inc; http://wso2.com
> Mobile : +940772684174
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] API Manager Artifact Synchronization on Containerized Platforms

2016-04-27 Thread Srinath Perera
Hi Imesh,

Publishing to both gateways is not good because things fall apart when
one publication fails. Then one node will be out of sync and will
continue to be out of sync.

Also, in C5 we are dropping depsync and asking users to use rsync (or some
other method like NFS).

Can't we use rsync? (e.g. write a script that will get the list of Docker
instances, and sync a given repo folder against all Docker instances).
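
A sketch of that script idea, driving rsync from Java. The container address
discovery (e.g. from the container cluster manager's API) is left out; the IPs,
user, and paths below are examples only.

import java.util.Arrays;
import java.util.List;

public class RepoSync {

    public static void main(String[] args) throws Exception {
        List<String> gatewayAddresses = Arrays.asList("10.0.0.11", "10.0.0.12");
        String localRepo = "/opt/wso2am/repository/deployment/server/";

        for (String address : gatewayAddresses) {
            // -a: preserve attributes, -z: compress, --delete: remove stale files.
            ProcessBuilder pb = new ProcessBuilder(
                    "rsync", "-az", "--delete", localRepo,
                    "wso2@" + address + ":" + localRepo);
            pb.inheritIO();
            int exitCode = pb.start().waitFor();
            if (exitCode != 0) {
                // A failed node stays out of sync; surface it instead of ignoring it.
                System.err.println("rsync to " + address + " failed with code " + exitCode);
            }
        }
    }
}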

--Srinath


On Wed, Apr 27, 2016 at 11:11 AM, Imesh Gunaratne <im...@wso2.com> wrote:

> Hi Nuwan,
>
> On Wed, Apr 27, 2016 at 9:18 AM, Nuwan Dias <nuw...@wso2.com> wrote:
>
>>
>> On Wed, Apr 27, 2016 at 8:59 AM, Imesh Gunaratne <im...@wso2.com> wrote:
>>
>>>
>>> On Tue, Apr 26, 2016 at 5:37 PM, Nuwan Dias <nuw...@wso2.com> wrote:
>>>
>>>>
>>>> If there are more than 1 manager nodes, we can configure the publisher
>>>> to publish APIs to all of them.
>>>>
>>>
>>> That's interesting. If so we may not need a gateway manager at all. Can
>>> you please describe how that works? Do we need to specify IP addresses of
>>> each gateway node in publisher?
>>>
>>
>> This capability is intended to handle having multiple gateway clusters.
>> For example, if you have an internal Gateway cluster and an external
>> Gateway cluster, you can specify the url of the manager node of each
>> cluster on the api-manager.xml of the Publisher. Then from the Publisher
>> UI, you can publish an API to a selected Gateway manager or both (by
>> default it publishes to all).
>>
>
> Can you please point me to the code which handles this?
>
> We should be able to introduce an extension point here and add an
> implementation for each container cluster manager, similar to the
> clustering membership schemes we implemented. This would let us dynamically
> list the available gateway nodes in the Publisher and let the Publisher
> sends the APIs to all the available gateway nodes. Then we would not need a
> gateway manager and things would be much simple and straightforward.
>
> Thanks
>
>
>>
>>> Thanks
>>>
>>>>
>>>>>- API Gateway Manager pod needs to mount a volume to persist the
>>>>>Synapse APIs. This is vital for allowing the Gateway Manager pod to 
>>>>> auto
>>>>>heal without loosing the Synapse APIs on the filesystem.
>>>>>
>>>>>
>>>>>- This design does not depend on any native features of the
>>>>>container cluster management system.
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> --
>>>>> *Imesh Gunaratne*
>>>>> Senior Technical Lead
>>>>> WSO2 Inc: http://wso2.com
>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>> W: http://imesh.io TW: @imesh
>>>>> Lean . Enterprise . Middleware
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Nuwan Dias
>>>>
>>>> Technical Lead - WSO2, Inc. http://wso2.com
>>>> email : nuw...@wso2.com
>>>> Phone : +94 777 775 729
>>>>
>>>
>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Senior Technical Lead
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: http://imesh.io TW: @imesh
>>> Lean . Enterprise . Middleware
>>>
>>>
>>
>>
>> --
>> Nuwan Dias
>>
>> Technical Lead - WSO2, Inc. http://wso2.com
>> email : nuw...@wso2.com
>> Phone : +94 777 775 729
>>
>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io TW: @imesh
> Lean . Enterprise . Middleware
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Analytics REST API Naming conventions

2016-04-26 Thread Srinath Perera
Hi Tharik,

+1, please open a Jira.

--Srinath

On Sat, Apr 23, 2016 at 4:06 PM, Gihan Anuruddha <gi...@wso2.com> wrote:

> Hi Tharik,
>
> We implemented this API last year, and at that time the REST API
> guidelines were not available. We are planning to fix those according to the
> REST API guidelines in the next major release.
>
> Regards,
> Gihan
>
> On Fri, Apr 22, 2016 at 10:15 AM, Tharik Kanaka <tha...@wso2.com> wrote:
>
>> Hi All,
>>
>> I have noticed multiple naming conventions have been followed by
>> Analytics REST API [1].
>>
>> /analytics/tables/{tableName}/indexData
>> /analytics/drillDownScoreCount
>>
>> /analytics/tables/{tableName}/keyed_records
>> /analytics/search_count
>>
>> /analytics/tables/{tableName}/recordcount
>> /analytics/rangecount
>>
>> According to [Architecture] REST API Guidelines, no underscore should be
>> used and camel case or other programming language related naming should be
>> avoided. Also multiple words should be separated by dashes. In that case
>> above scenarios of Analytics REST API should use dashes to separate
>> multiple words.
>>
>>
>> [1] https://docs.wso2.com/display/DAS301/Analytics+REST+API+Guide
>>
>> Regards,
>>
>> --
>>
>> *Tharik Kanaka*
>>
>> WSO2, Inc |#20, Palm Grove, Colombo 03, Sri Lanka
>>
>> Email: tha...@wso2.com | Web: www.wso2.com
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> W.G. Gihan Anuruddha
> Senior Software Engineer | WSO2, Inc.
> M: +94772272595
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Generating a single Event per Message from our Products

2016-04-22 Thread Srinath Perera
Dulitha, there are several problems

   1. for some use cases, we can aggregate at the client and reaggregate at
   the server. However, for some use cases, that is not possible. It depends
   on what we calculate.
   2. Also this adds overhead at the client side (e.g. we need to run Siddhi
   at the client), which might be a problem
   3. We need to deploy and manage queries at the clients, which will add
   a lot of complexity.

--Srinath

On Thu, Apr 21, 2016 at 9:27 PM, Dulitha Wijewantha <duli...@wso2.com>
wrote:

> Hi Srinath,
>
> If we follow the model where some events would be aggregated and some
> events won't be aggregated - won't that be a problem in the processing side
> in DAS? IMO - Handling this event aggregation in the client-side (in an off
> the main thread manner) would significantly reduce the overhead on DAS
> side. My understanding was that we are using an event sink (maybe it was
> called data-bridge) to send events via thrift to DAS from APIM. In that
> case - before writing off to thrift, won't we be able to aggregate and
> combine events?
> ​
>
> ​Cheers~​
>
> On Tue, Mar 29, 2016 at 11:14 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Sanjiva,
>>
>> That might work, but we need to try it out with a workload. (CEP joins
>> are a bit slower than other operations, so we have to see.)
>>
>> DAS, when writing, treats data as name-value pairs. It only tries to
>> understand it when it is processing data. So the storage model should be OK.
>>
>> My belief is network load is not the bottleneck ( again have to verify).
>>
>> --Srinath
>>
>>
>> On Wed, Mar 30, 2016 at 8:19 AM, Sanjiva Weerawarana <sanj...@wso2.com>
>> wrote:
>>
>>> Srinath what if we come up with a way in the event receiver side to
>>> aggregate a set of events to one based on some correlation field? We can do
>>> this in an embedded Siddhi in the receiver ... basically keep like a 5 sec
>>> window to aggregate all events that carry the same correlation field into
>>> one, combine and then send forward for storage + processing. Sometimes we
>>> will miss but most of the time it won't. The storage model needs to be
>>> sufficiently flexible but HBase should be fine (?). The real time feed must
>>> not have this feature of course.
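
A sketch of that receiver-side idea: buffer events that share a correlation id for a
short window (5 s here) and forward one combined event. Event shape, the merge step,
and forward() are placeholders, and a production version would likely be a Siddhi
window query rather than hand-rolled code.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CorrelationAggregator {

    private final Map<String, List<Map<String, Object>>> buffer = new HashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public synchronized void onEvent(String correlationId, Map<String, Object> event) {
        List<Map<String, Object>> parts = buffer.get(correlationId);
        if (parts == null) {
            parts = new ArrayList<>();
            buffer.put(correlationId, parts);
            // First event for this correlation id: flush whatever arrives within 5 seconds.
            scheduler.schedule(() -> flush(correlationId), 5, TimeUnit.SECONDS);
        }
        parts.add(event);
    }

    private synchronized void flush(String correlationId) {
        List<Map<String, Object>> parts = buffer.remove(correlationId);
        if (parts == null || parts.isEmpty()) {
            return;
        }
        Map<String, Object> combined = new HashMap<>();
        for (Map<String, Object> part : parts) {
            combined.putAll(part); // naive merge; real combining logic is use-case specific
        }
        forward(correlationId, combined);
    }

    private void forward(String correlationId, Map<String, Object> combined) {
        // Hand the single combined event over for storage + batch processing.
        System.out.println(correlationId + " -> " + combined);
    }
}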
>>>
>>> With multiple servers firing events related to one interaction, it's not
>>> possible to do this from the source ends without distributed caching and
>>> that's not a good model.
>>>
>>> It does not address the network load issue of course.
>>>
>>> Sanjiva.
>>>
>>> On Tue, Mar 29, 2016 at 2:49 PM, Srinath Perera <srin...@wso2.com>
>>> wrote:
>>>
>>>> Nuwan, regarding Q1, we can set it up in such a way that the publisher auto-
>>>> publishes the events after a timeout or after N events are accumulated.
>>>>
>>>> Nuwan, Chathura ( regarding Q2),
>>>>
>>>> We already do event batching. Above numbers are after event batching.
>>>> There are two bottlenecks. One is sending events over the network and the
>>>> other is writing them to DB. Batching helps a lot in moving it over the
>>>> network, but does not help much when writing to DB.
>>>>
>>>> Regarding nulls, one option is to group events generated by a single
>>>> message together, which will avoid most nulls. I think our main concern is
>>>> a single message triggering multiple events. We also need to write queries to
>>>> copy the values from the single big events to different streams and use those
>>>> streams to write queries.
>>>>
>>>> e.g. We can copy values from Big stream to HTTPStream, using which we
>>>> will write HTTP analytics queries.
>>>>
>>>> --Srinath
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Mar 29, 2016 at 1:29 PM, Chathura Ekanayake <chath...@wso2.com>
>>>> wrote:
>>>>
>>>>> As we can reduce the number of event transfers with event batching, I
>>>>> think the advantage of using a single event stream is to reduce number of
>>>>> disk writes at DAS side. But as Nuwan mentioned, dealing with null fields
>>>>> can be a problem in writing analytics scripts.
>>>>>
>>>>> Regards,
>>>>> Chathura
>>>>>
>>>>> On Tue, Mar 29, 2016 at 10:40 AM, Nuwan Dias <nuw...@wso2.com> wrote:
>>>>>
>>>>>> Having to publish a single event aft

Re: [Architecture] Defining UX Pasona for the Platform

2016-04-18 Thread Srinath Perera
Any updates?

On Fri, Mar 25, 2016 at 10:57 AM, Dakshika Jayathilaka <daksh...@wso2.com>
wrote:

> Hi Srinath,
>
> We are planning to capture existing personas on each wso2 products, IMHO
> then we will be able to refine them easily. we'll start separate thread for
> that.
>
> Cheers,
>
> *Dakshika Jayathilaka*
> PMC Member & Committer of Apache Stratos
> Senior Software Engineer
> WSO2, Inc.
> lean.enterprise.middleware
> 0771100911
>
> On Fri, Mar 25, 2016 at 9:05 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Ping, any updates?
>>
>> On Fri, Feb 26, 2016 at 5:42 AM, Dakshika Jayathilaka <daksh...@wso2.com>
>> wrote:
>>
>>> Sure,
>>>
>>> We'll try to come up with main personas that we can use. IMHO there may
>>> be key personas + sub levels. I'll schedule a meeting once we complete
>>> first draft.
>>>
>>> Thank you,
>>>
>>> Regards,
>>>
>>> *Dakshika Jayathilaka*
>>> PMC Member & Committer of Apache Stratos
>>> Senior Software Engineer
>>> WSO2, Inc.
>>> lean.enterprise.middleware
>>> 0771100911
>>>
>>> On Thu, Feb 25, 2016 at 8:54 PM, Srinath Perera <srin...@wso2.com>
>>> wrote:
>>>
>>>> Hi Dakshika,
>>>>
>>>> As per our chat, most of the time we will use a common set of personas
>>>> for UX user stories (e.g. developer, DevOps person, business user, etc.). IMO
>>>> defining them up front will save a lot of time. Then for most use cases, we can
>>>> pick one from the list.
>>>>
>>>> Could the UX team define the personas? When we have the first cut, let's do a
>>>> review.
>>>>
>>>> --Srinath
>>>>
>>>> --
>>>> 
>>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>>> Site: http://people.apache.org/~hemapani/
>>>> Photos: http://www.flickr.com/photos/hemapani/
>>>> Phone: 0772360902
>>>>
>>>
>>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Adding RNN to WSO2 Machine Learner

2016-04-08 Thread Srinath Perera
Thamali, how big is the data set you are using?  ( give me a link to the
data set as well).

Nirmal, shall we compare the accuracy of RNN vs. Upul's rolling window
method?

--Srinath

On Fri, Apr 8, 2016 at 9:23 AM, Thamali Wijewardhana <tham...@wso2.com>
wrote:

> Hi,
>
> I ran the RNN algorithm using the deeplearning4j library and the Keras Python
> library. The dataset, hyper parameters, network architecture and the
> hardware platform are the same. Given below is the time comparison
>
> Deeplearning4j library-40 minutes per 1 epoch
> Keras library- 4 minutes per 1 epoch
>
> I also compared the accuracies[1]. The deeplearning4j library gives a low
> accuracy compared to Keras library.
>
> [1]
> https://docs.google.com/spreadsheets/d/1-EvC1P7N90k1S_Ly6xVcFlEEKprh7r41Yk8aI6DiSaw/edit#gid=1050346562
>
> Thanks
>
>
>
> On Fri, Apr 1, 2016 at 10:12 AM, Thamali Wijewardhana <tham...@wso2.com>
> wrote:
>
>> Hi,
>> I have organized a review on Monday (4th  of April).
>>
>> Thanks
>>
>> On Thu, Mar 31, 2016 at 3:21 PM, Srinath Perera <srin...@wso2.com> wrote:
>>
>>> Please setup a review. Shall we do it monday?
>>>
>>> On Thu, Mar 31, 2016 at 2:15 PM, Thamali Wijewardhana <tham...@wso2.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> we have created a spark program to prove the feasibility of adding the
>>>> RNN algorithm to machine learner.
>>>> This program demonstrates all the steps in machine learner:
>>>>
>>>> Uploading a dataset
>>>>
>>>> Selecting the hyper parameters for the model
>>>>
>>>> Creating a RNN model using data and training the model
>>>>
>>>> Calculating the accuracy of the model
>>>>
>>>> Saving the model(As a serialization object)
>>>>
>>>> predicting using the model
>>>>
>>>> This program is based on deeplearning4j and apache spark pipeline.
>>>> Deeplearning4j was used as the deep learning library for recurrent neural
>>>> network algorithm. As the program should be based on the Spark pipeline,
>>>> the main challenge was to use deeplearning4j library with spark pipeline.
>>>> The components used in the spark pipeline should be compatible with spark
>>>> pipeline. For other components which are not compatible with spark
>>>> pipeline, we have to wrap them with a org.apache.spark.predictionModel
>>>> object.
>>>>
>>>> We have designed a pipeline with sequence of stages (transformers and
>>>> estimators):
>>>>
>>>> 1. Tokenizer:Transformer-Split each sequential data to tokens.(For
>>>> example, in sentiment analysis, split text into words)
>>>>
>>>> 2. Vectorizer :Transformer-Transforms features into vectors.
>>>>
>>>> 3. RNN algorithm :Estimator -RNN algorithm which trains on a data frame
>>>> and produces a RNN model
>>>>
>>>> 4. RNN model : Transformer- Transforms data frame with features to data
>>>> frame with predictions.
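>>>>
>>>> [Editorial sketch of the pipeline wiring described above, using Spark ML's
>>>> Pipeline API (Spark 2.x signatures shown). Tokenizer and HashingTF are real
>>>> Spark stages standing in for the Tokenizer/Vectorizer steps; the
>>>> deeplearning4j-backed RNN Estimator is the part this thread discusses and is
>>>> only stubbed here.]
>>>>
>>>> import org.apache.spark.ml.Pipeline;
>>>> import org.apache.spark.ml.PipelineModel;
>>>> import org.apache.spark.ml.PipelineStage;
>>>> import org.apache.spark.ml.feature.HashingTF;
>>>> import org.apache.spark.ml.feature.Tokenizer;
>>>> import org.apache.spark.sql.Dataset;
>>>> import org.apache.spark.sql.Row;
>>>>
>>>> public class RnnPipelineSketch {
>>>>
>>>>     public static PipelineModel train(Dataset<Row> labeledText) {
>>>>         Tokenizer tokenizer = new Tokenizer()
>>>>                 .setInputCol("text").setOutputCol("tokens");
>>>>         HashingTF vectorizer = new HashingTF()
>>>>                 .setInputCol("tokens").setOutputCol("features");
>>>>         // Hypothetical dl4j wrapper, configured with the tuned values from
>>>>         // this thread: 10 epochs, 1 iteration, learning rate 0.02.
>>>>         PipelineStage rnnEstimator = createRnnEstimator();
>>>>
>>>>         Pipeline pipeline = new Pipeline()
>>>>                 .setStages(new PipelineStage[]{tokenizer, vectorizer, rnnEstimator});
>>>>         // fit() trains the RNN; the returned PipelineModel is the Transformer
>>>>         // used for testing and prediction.
>>>>         return pipeline.fit(labeledText);
>>>>     }
>>>>
>>>>     private static PipelineStage createRnnEstimator() {
>>>>         // Placeholder for the deeplearning4j-backed Estimator implementation.
>>>>         throw new UnsupportedOperationException("dl4j RNN estimator not shown");
>>>>     }
>>>> }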
>>>>
>>>> The diagrams below explain the stages of the pipeline. The first
>>>> diagram illustrates the training usage of the pipeline and the next diagram
>>>> illustrates the testing and predicting usage of a pipeline.
>>>>
>>>>
>>>> ​
>>>>
>>>>
>>>> ​
>>>>
>>>>
>>>> I also have tuned the RNN model for hyper parameters[1] and found the
>>>> values of hyper parameters which optimizes accuracy of the model.
>>>> Give below is the set of hyper parameters relevant to RNN algorithm and
>>>> the tuned values.
>>>>
>>>>
>>>> Number of epochs-10
>>>>
>>>> Number of iterations- 1
>>>>
>>>> Learning rate-0.02
>>>>
>>>> We used the aclImdb sentiment analysis data set for this program and
>>>> with the above hyper parameters, we could achieve 60% accuracy. And we are
>>>> trying to improve the accuracy and efficiency of our algorithm.
>>>>
>>>> [1]
>>>> https://docs.google.com/spreadsheets/d/1Wcta6i2k4Je_5l16wCVlH6zBMNGIb-d7USaWdbrkrSw/edit?ts=56fcdc9b#gid=2118685173
>>>>
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> On Fri, Mar 25, 2016 at 10:18 AM, Thamali Wijewardhana <
>>>> tham...@wso2.com> wr

Re: [Architecture] Support Efficient Cross Tenant Analytics in DAS

2016-03-31 Thread Srinath Perera
Use cases come from IoT and APIM, and maybe others. And it will be a common
use case due to our cloud.

On Thu, Mar 31, 2016 at 3:55 PM, Anjana Fernando <anj...@wso2.com> wrote:

> Hi Srinath,
>
> I'm not sure if this is something we would have to "fix". It was a clear
> design decision we took in order to isolate the tenant data, in order for
> others not to access other tenant's data. So also in Spark virtual tables,
> it will directly map to their own analytics tables. If we allow, maybe the
> super tenant, to access other tenant's data, it can be seen as a security
> threat. The idea should be, no single tenant should have any special access
> to other tenant's data.
>
> So setting aside the physical representation (which has other
> complications, like adding another index for tenantId and so on, which
> should be supported by all data sources), if we are to do this, we need a
> special view for super tenant tables in Spark virtual tables, in order for
> them to have access to the "tenantId" property of that table. And in other
> tenant's tables, we need to hide this, and not let them use it of course.
> This looks like bit of a hack to implement a specific scenario we have.
>
> So this requirement as I know mainly came from APIM analytics, where its
> in-built analytics publishes all tenant's data to super tenant's tables and
> the data is processed from there. So if we are doing this, this data is
> only used internally, and cannot be shown to each respective tenants for
> their own analytics. If each tenant needs to do their own analytics, they
> should configure to get data for their tenant space, and write their own
> analytics scripts. This may at the end mean, some type of data duplication,
> but it should happen, because two different users are doing their different
> processing. And IMO, we should not try to share any possible common data
> they may have and hack the system.
>

Yes results need to go to super tenant space.


>
> At the end, the point is, we should not take lightly what we try to
> achieve in having multi-tenancy, and compromise its fundamentals. At the
> moment, the idea should be, each tenant would have their own data, its own
> analytics scripts, and if you need to scale accordingly, have separate
> hardware for those tenants. And running separate queries for different
> tenants does not necessarily make it very slow, since the data load will be
> divided between the tenants, and only extra processing would be possible
> ramp up times for query executions.
>

Multi-tenancy always has to trade off isolation vs. efficiency.
However, we need to find a way to do the APIM and IoT cloud use cases.


>
> Cheers,
> Anjana.
>
> On Thu, Mar 31, 2016 at 11:45 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Anjana,
>>
>> Currently we keep different Hbase/ RDBMS table per tenant. In
>> multi-tenant, environment, this is very expensive as we will have to run a
>> query per tenant.
>>
>> How can we fix this? e.g. if we keep tenant as field in the table, that
>> let us do a "group by".
>>
>> --Srinath
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-31 Thread Srinath Perera
Hi Frank,

I agree on your concerns.

However, the reason we thought of doing this is that the relationship between a data
item and a user is not 1-1, but rather complicated.

e.g. Problem: Bob writes an API and publishes it to API Manager. Then Alice
subscribes to the API and writes a mobile app. Charlie uses the mobile app,
which results in an API call. We need to track the API calls via DAS. And
when Bob, Alice, and possibly Charlie come to the dashboard server, they
should possibly see the transaction from their view.

The above transaction is owned by all three users: Bob, Alice, and Charlie. It
is complicated to match this to a permission model. We felt you need to
understand the context of the data generated and used (e.g. the above API
manager scenario) to decide how to do access control. The scenario will be very
different with IoT, as the roles involved will be different, and so on.

If there is an efficient way to generalize and map permissions, we can go
with that. We felt it is complicated to solve.

--Srinath

On Thu, Mar 31, 2016 at 4:38 PM, Frank Leymann <fr...@wso2.com> wrote:

> Sorry for jumping into the discussion late (or even too late).  I try to
> understand the discussion by drawing analogies to DBMS - maybe that's wrong
> and I miss the point... If I am right, what you decided in the meeting is
> fully in line with what DBMS are doing :-)
>
> In Srinath's summary, list item (2): The API then actually will be in
> charge of implementing access control. Because the API needs to decide who
> can see which data. Writing an API accessing data is like writing a SQL
> program accessing a database. Then, we are asking an SQL program to
> implement access control by itself, isn't it? This is cumbersome, and
> likely each product has to implement very similar mechanisms.
>
> In Srinath's summary, list item (1): This sounds like established practice
> in DBMS. When a SQL programmer writes a program, she must have all the
> access control rights to access the DBMS. The final program is than subject
> to access control mechanism w.r.t. to the users of the program: whoever is
> allowed to use the program somehow inherits the access writes of the
> programmer (but only in context of this program). When identifying the SQL
> programmer with the tenant (or tenant admin), this is what (1) of the
> summary decided, correct?
>
> From Srinath's summary: "*Also this means, we will not support users
> providing their own analytics queries. Only tenant admins  can provide
> their own queries.*"  Again, identifying tenant admins with SQL
> programmers, that's exactly the paradigm.
>
>
> Best regards,
> Frank
>
> 2016-03-31 10:32 GMT+02:00 Srinath Perera <srin...@wso2.com>:
>
>> We had a meeting. Participants: Sanjiva, Sumedha, NuwanD, Prabath,
>> Srinath (Prabath please list others joined from Trace)
>>
>> Problem: Bob writes a API and publish it API manager. Then Alice
>> subscribes to the API and write an mobile APP. Charlie uses the mobile App,
>> which results in an API call. We need to track the API calls via DAS. And
>> when Bob, Alice, and possibly Charlie come to the dashboard server, they
>> should possibly see the transaction from their view.
>>
>> Challenge is that now there is no clear user in the above transaction.
>> Rather there is three. So we cannot handle this generically at the DAS
>> level via a user concept. Hence, the API manager needs to put the right
>> information when it publish data to DAS and show data only to relevant
>> parties when it showing and exposing data.
>>
>>
>> Solution
>>
>> [image: SecuirtyLayers.png]
>>
>>1. We will keep DAS in the current state with support for tenants
>>without support for users. It is aware about tenant and provide full
>>isolation between tenant. However, it does not aware about users.
>>2. Each product will write an extended receiver and DAL layer as,
>>that will build an API catered for their use cases. This API will support
>>login via OAuth tokens. Since they know the fields in the  tables that has
>>user data init, then can filter the data based on the user.
>>3. We will run the extended DAL layers and receivers in DAS, and they
>>will talk to DAL as an OSGI call.
>>4. Above layers will assume that users have access to OAuth token. In
>>APIM use cases, APIM can issue tokens, and in IoT use cases, APIM that 
>> runs
>>in the IoT server can issue tokens.
>>
>>
>> Also this means, we will not support users providing their own analytics
>> queries. Only tenant admins  can provide their own queries.
>> As decided in the earlier meeting,  We need APIM and IOT Server to be
>>

Re: [Architecture] RFC:Security Challenges in Analytics Story

2016-03-31 Thread Srinath Perera
Please see "Data Isolation level for Data from APIM and IoT? Tenant vs.
User" for decisions

--Srinath

On Fri, Mar 25, 2016 at 10:06 AM, Srinath Perera <srin...@wso2.com> wrote:

> As per meeting ( Sanjiva, Shankar, Sumedha, Anjana, Miyuru, Seshika, Suho,
> Nirmal, Nuwan)
>
> We need APIM and IOT Server to be able to publish events as "system user",
> but ask DAS to place data under Ann's ( related user) account.
>
> We need Devices to be able to *directly* send a event to DAS with an Oauth
> token.
>
> Following is the picture describing full scenario
>
> [image: DASSecuirtyScenarios.png]
> --Srinath
>
> On Thu, Mar 24, 2016 at 9:38 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> This thread described the authorization issue when reading data for
>> gadgets ( as I mentioned in Dashboard server product council).
>>
>> When IoT server/ API manager publish events, it need to tell DAS whose
>> data it is. ( however, server cannot login using that user, as then it will
>> need to keep passwords and also end up having to keep too many
>> connections).
>>
>> Gadget, when requesting data, has to tell DAS on whose behalf it is
>> requesting the data. DAS has to verify and show visible data. ( also DAS
>> data API need to be secured so that random users cannot call it and look at
>> other people's data).
>>
>> --Srinath
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Sat, Mar 19, 2016 at 9:13 PM, Srinath Perera <srin...@wso2.com> wrote:
>>
>>> Yes, and Ann can also generate a token and share with Smith, to send
>>> with his requests.
>>>
>>> Also, IMO the most Dashboard requests would come from a browser ( in a
>>> phone or PC), not from simple device. So storing or locating the token
>>> should not be a problem.
>>>
>>> On Fri, Mar 18, 2016 at 3:21 PM, Chathura Ekanayake <chath...@wso2.com>
>>> wrote:
>>>
>>>>
>>>>
>>>>
>>>>> I think we should go for a taken based approach (e.g. OAuth) to handle
>>>>> these scenarios. Following are few ideas
>>>>>
>>>>>
>>>>>1.
>>>>>
>>>>>Using a token ( Ann attesting system user can do publish/ access
>>>>>to this stream on her behalf), Ann let the “system user“ publish data 
>>>>> into
>>>>>Ann’s account
>>>>>
>>>>>
>>>> If a device can store a token, Ann can generate a token with necessary
>>>> scope (to access Ann's event store) and store the token in the device
>>>> itself. In that case, device can send the token with each event, so that
>>>> IoT platform can decide permissions based on the token.
>>>>
>>>>
>>>>>
>>>>>1.
>>>>>
>>>>>When we give user Smith access to a gadget, we generate a token,
>>>>>which he will send when he is accessing the gadget, which the gadget 
>>>>> will
>>>>>send to the DAS backend to get access to correct tables
>>>>>2.
>>>>>
>>>>>Same token can be used for API access as well
>>>>>3.
>>>>>
>>>>>We need to manage the tokens issued to each user so this happen
>>>>>transparently to the end user as much as possible.
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> 
>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>> Site: http://people.apache.org/~hemapani/
>>> Photos: http://www.flickr.com/photos/hemapani/
>>> Phone: 0772360902
>>>
>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> 
> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
> Site: http://home.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-31 Thread Srinath Perera
We had a meeting. Participants: Sanjiva, Sumedha, NuwanD, Prabath, Srinath
(Prabath please list others joined from Trace)

Problem: Bob writes an API and publishes it to API Manager. Then Alice subscribes
to the API and writes a mobile app. Charlie uses the mobile app, which
results in an API call. We need to track the API calls via DAS. And when
Bob, Alice, and possibly Charlie come to the dashboard server, they should
possibly see the transaction from their view.

The challenge is that now there is no clear user in the above transaction.
Rather, there are three. So we cannot handle this generically at the DAS
level via a user concept. Hence, the API manager needs to put the right
information when it publish data to DAS and show data only to relevant
parties when it showing and exposing data.


Solution

[image: SecuirtyLayers.png]

   1. We will keep DAS in the current state with support for tenants
   without support for users. It is aware of tenants and provides full
   isolation between tenants. However, it is not aware of users.
   2. Each product will write an extended receiver and DAL layer as, that
   will build an API catered for their use cases. This API will support login
   via OAuth tokens. Since they know the fields in the  tables that has user
   data init, then can filter the data based on the user.
   3. We will run the extended DAL layers and receivers in DAS, and they
   will talk to DAL as an OSGI call.
   4. Above layers will assume that users have access to OAuth token. In
   APIM use cases, APIM can issue tokens, and in IoT use cases, APIM that runs
   in the IoT server can issue tokens.
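
A sketch of items 2 and 3 above: a product-specific data layer that sits in front of
the DAS DAL, validates the caller's OAuth token, and only returns rows whose user
field matches the token owner. TokenValidator, DataAccessService, and the table/field
names are hypothetical stand-ins for the real OSGi services.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ApiUsageDataApi {

    private final TokenValidator tokenValidator;  // wraps the OAuth introspection call
    private final DataAccessService dal;          // the DAS data access layer (OSGi service)

    public ApiUsageDataApi(TokenValidator tokenValidator, DataAccessService dal) {
        this.tokenValidator = tokenValidator;
        this.dal = dal;
    }

    public List<Map<String, Object>> getApiUsage(String bearerToken, String tableName) {
        String username = tokenValidator.validateAndGetUser(bearerToken); // throws if invalid
        // The product knows which field carries user data in its own tables.
        return dal.readRecords(tableName).stream()
                .filter(record -> username.equals(record.get("userId")))
                .collect(Collectors.toList());
    }

    // Hypothetical collaborator interfaces, shown only to make the sketch self-contained.
    interface TokenValidator {
        String validateAndGetUser(String token);
    }

    interface DataAccessService {
        List<Map<String, Object>> readRecords(String tableName);
    }
}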


Also, this means we will not support users providing their own analytics
queries. Only tenant admins can provide their own queries.
As decided in the earlier meeting, we need APIM and the IoT Server to be able
to publish events as the "system user", but ask DAS to place the data under Ann's
(the related user's) account.

Please add anything I missed.

--Srinath




On Tue, Mar 29, 2016 at 11:53 AM, Srinath Perera <srin...@wso2.com> wrote:
>
> I have scheduled a meeting tomorrow to discuss this.
>
> --Srinath
>
> On Tue, Mar 29, 2016 at 11:44 AM, Sachith Withana <sach...@wso2.com>
wrote:
>>
>> Hi all,
>>
>> I do believe it would be of great value to incorporate user level data
isolation for DAS.
>>
>> Having said that though, it wouldn't be practical to provide a complete
permission platform to DAS that would suffice all the requirements of APIM
and IOT.
>>
>> IMO, we should provide some features that would help individual products
build their own permission platform that caters to their requirements.
>>
>> Thanks,
>> Sachith
>>
>> On Tue, Mar 29, 2016 at 10:38 AM, Nuwan Dias <nuw...@wso2.com> wrote:
>>>
>>> Please ignore my reply. It was intended for another thread :)
>>>
>>> On Mon, Mar 28, 2016 at 4:26 PM, Nuwan Dias <nuw...@wso2.com> wrote:
>>>>
>>>> Having to publish a single event after collecting all possible data
records from the server would be good in terms of scalability aspects of
the DAS/Analytics platform. However I see that it introduces new challenges
for which we would need solutions.
>>>>
>>>> 1. How to guarantee a event is always published to DAS? In the case of
API Manager, a request has multiple exit points. Such as auth failures,
throttling out, back-end failures, message processing failures, etc. So we
need a way to guarantee that an event is always sent out whatever the state.
>>>>
>>>> 2. With this model, I'm assuming we only have 1 stream definition. Is
this correct? If so would this not make the analytics part complicated? For
example, say I have a spark query to summarize the throttled out events
from an App, since I can only see a single stream the query would have to
deal with null fields and have to deal with the whole bulk of data even if
in reality it might only have to deal with a few. The same complexity would
arise for the CEP based throttling engine and the new alerts we're building
as well.
>>>>
>>>> Thanks,
>>>> NuwanD.
>>>>
>>>> On Mon, Mar 28, 2016 at 2:43 PM, Srinath Perera <srin...@wso2.com>
wrote:
>>>>>
>>>>> Hi Ayyoob, Ruwan, Suho,
>>>>>
>>>>> I think where to handle ( within DAS vs. at higher level API in APIM
or IoT server) is decided by what level user customizations are needed for
analytics queries.
>>>>>
>>>>> If we need individual users to write their own queries as well, then
we need to build user support into DAS. However, if queries can be changed
by tenant admins only, doing this via a high-level API is OK.
>>>>>
>>>>> Where does APIM and IoT

[Architecture] Support Efficient Cross Tenant Analytics in DAS

2016-03-31 Thread Srinath Perera
Hi Anjana,

Currently we keep a separate HBase/RDBMS table per tenant. In a multi-tenant
environment, this is very expensive, as we have to run every query once per
tenant.

How can we fix this? For example, if we keep the tenant as a field in the
table, that would let us do a "group by" across tenants.
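
For example, assuming a single shared table with a tenantId column (only a
sketch, not the current schema), one query could summarize all tenants in a
single pass instead of one query per tenant:

SELECT tenantId, api, count(*) AS hits
FROM AllTenantEvents
GROUP BY tenantId, api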

--Srinath

-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Generating a single Event per Message from our Products

2016-03-29 Thread Srinath Perera
Nuwan, regarding Q1, we can set it up so that the publisher auto-publishes
the events after a timeout or after N events are accumulated.

Nuwan, Chathura (regarding Q2),

We already do event batching; the numbers above are after batching. There
are two bottlenecks: one is sending events over the network, and the other
is writing them to the DB. Batching helps a lot in moving events over the
network, but does not help much when writing to the DB.

Regarding nulls, one option is to group the events generated by a single
message together, which avoids most nulls. I think our main concern is a
single message triggering multiple events. We also need to write queries
that copy the values from the single big event into separate streams, and
use those streams to write the analytics queries.

e.g. We can copy values from the big stream into an HTTPStream, against
which we will write the HTTP analytics queries.

--Srinath



On Tue, Mar 29, 2016 at 1:29 PM, Chathura Ekanayake <chath...@wso2.com>
wrote:

> As we can reduce the number of event transfers with event batching, I
> think the advantage of using a single event stream is to reduce number of
> disk writes at DAS side. But as Nuwan mentioned, dealing with null fields
> can be a problem in writing analytics scripts.
>
> Regards,
> Chathura
>
> On Tue, Mar 29, 2016 at 10:40 AM, Nuwan Dias <nuw...@wso2.com> wrote:
>
>> Having to publish a single event after collecting all possible data
>> records from the server would be good in terms of scalability aspects of
>> the DAS/Analytics platform. However I see that it introduces new challenges
>> for which we would need solutions.
>>
>> 1. How to guarantee a event is always published to DAS? In the case of
>> API Manager, a request has multiple exit points. Such as auth failures,
>> throttling out, back-end failures, message processing failures, etc. So we
>> need a way to guarantee that an event is always sent out whatever the state.
>>
>> 2. With this model, I'm assuming we only have 1 stream definition. Is
>> this correct? If so would this not make the analytics part complicated? For
>> example, say I have a spark query to summarize the throttled out events
>> from an App, since I can only see a single stream the query would have to
>> deal with null fields and have to deal with the whole bulk of data even if
>> in reality it might only have to deal with a few. The same complexity would
>> arise for the CEP based throttling engine and the new alerts we're building
>> as well.
>>
>> Thanks,
>> NuwanD.
>>
>> On Sat, Mar 26, 2016 at 1:22 AM, Inosh Goonewardena <in...@wso2.com>
>> wrote:
>>
>>> +1. With combined event approach we can avoid sending duplicate
>>> information to some level as well. For example, in API analytics scenario
>>> both request and response streams have consumerKey, context, api_version,
>>> api, resourcePath, etc properties which the values will be same for both
>>> request event and corresponding response event. With single event approach
>>> we can avoid such.
>>>
>>> On Fri, Mar 25, 2016 at 1:23 AM, Gihan Anuruddha <gi...@wso2.com> wrote:
>>>
>>>> Hi Janaka,
>>>>
>>>> We do have event batching at the moment as well. You can configure that
>>>> in data-agent-config.xml [1]. AFAIU, what we are trying to do here is to
>>>> combine several events into a single event.  Apart from that, wouldn't be a
>>>> good idea to compress the event after we merge and before we send to DAS?
>>>>
>>>> [1] -
>>>> https://github.com/wso2/carbon-analytics-common/blob/master/features/data-bridge/org.wso2.carbon.databridge.agent.server.feature/src/main/resources/conf/data-agent-config.xml
>>>>
>>>> On Fri, Mar 25, 2016 at 11:39 AM, Janaka Ranabahu <jan...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Srinath,
>>>>>
>>>>> On Fri, Mar 25, 2016 at 11:26 AM, Srinath Perera <srin...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> As per meeting ( Paricipants: Sanjiva, Shankar, Sumedha, Anjana,
>>>>>> Miyuru, Seshika, Suho, Nirmal, Nuwan)
>>>>>>
>>>>>> Currently we generate several events per message from our products.
>>>>>> For example, when a message hits APIM, following events will be 
>>>>>> generated.
>>>>>>
>>>>>>
>>>>>>1. One from HTTP level
>>>>>>2. 1-2 from authentication and authorization logic
>>>>>>3. 1 from Throttling
>>>>>>   

Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-29 Thread Srinath Perera
I have scheduled a meeting tomorrow to discuss this.

--Srinath

On Tue, Mar 29, 2016 at 11:44 AM, Sachith Withana <sach...@wso2.com> wrote:

> Hi all,
>
> I do believe it would be of great value to incorporate user level data
> isolation for DAS.
>
> Having said that though, it wouldn't be practical to provide a complete
> permission platform to DAS that would suffice all the requirements of APIM
> and IOT.
>
> IMO, we should provide some features that would help individual products
> build their own permission platform that caters to their requirements.
>
> Thanks,
> Sachith
>
> On Tue, Mar 29, 2016 at 10:38 AM, Nuwan Dias <nuw...@wso2.com> wrote:
>
>> Please ignore my reply. It was intended for another thread :)
>>
>> On Mon, Mar 28, 2016 at 4:26 PM, Nuwan Dias <nuw...@wso2.com> wrote:
>>
>>> Having to publish a single event after collecting all possible data
>>> records from the server would be good in terms of scalability aspects of
>>> the DAS/Analytics platform. However I see that it introduces new challenges
>>> for which we would need solutions.
>>>
>>> 1. How to guarantee a event is always published to DAS? In the case of
>>> API Manager, a request has multiple exit points. Such as auth failures,
>>> throttling out, back-end failures, message processing failures, etc. So we
>>> need a way to guarantee that an event is always sent out whatever the state.
>>>
>>> 2. With this model, I'm assuming we only have 1 stream definition. Is
>>> this correct? If so would this not make the analytics part complicated? For
>>> example, say I have a spark query to summarize the throttled out events
>>> from an App, since I can only see a single stream the query would have to
>>> deal with null fields and have to deal with the whole bulk of data even if
>>> in reality it might only have to deal with a few. The same complexity would
>>> arise for the CEP based throttling engine and the new alerts we're building
>>> as well.
>>>
>>> Thanks,
>>> NuwanD.
>>>
>>> On Mon, Mar 28, 2016 at 2:43 PM, Srinath Perera <srin...@wso2.com>
>>> wrote:
>>>
>>>> Hi Ayyoob, Ruwan, Suho,
>>>>
>>>> I think where to handle ( within DAS vs. at higher level API in APIM or
>>>> IoT server) is decided by what level user customizations are needed for
>>>> analytics queries.
>>>>
>>>> If we need individual users to write their own queries as well, then we
>>>> need to build user support into DAS. However, if queries can be changed by
>>>> tenant admins only, doing this via a high-level API is OK.
>>>>
>>>> Where does APIM and IoT server stands on this?
>>>>
>>>> --Srinath
>>>>
>>>>
>>>>
>>>> On Sat, Mar 26, 2016 at 9:28 AM, Ayyoob Hamza <ayy...@wso2.com> wrote:
>>>> >
>>>> > Hi,
>>>> > Yes we require user level separation but just wondered whether we
>>>> need this separation in DAS level or whether can we enforce it device type
>>>> API level. This is because IMO, DAS provides a low level API which we
>>>> cannot expose it directly so we need a proxy that maps this to a high level
>>>> API to expose the data. So wondered whether can we do the restriction in
>>>> the high level API endpoint. However if the user level separation is
>>>> required across products such as APIM then I guess the separation should be
>>>> in the DAS level.
>>>> >
>>>> > Further just wanted to bring another concern that we have, we have a
>>>> requirement on device sharing so what this mean is that we can share the
>>>> data of a device to another, which means a drill down permission model,
>>>> where the separation would be in user, device level(eg: Does the user X has
>>>> permission to view the data of the device d of user Y). So in this case I
>>>> wonder whether this needs to be handled in DAS level? rather I see that it
>>>> needs to be handled in the high level API that we provide to expose the
>>>> data.
>>>> >
>>>> > Thanks
>>>> >
>>>> >
>>>> > Ayyoob Hamza
>>>> > Software Engineer
>>>> > WSO2 Inc.; http://wso2.com
>>>> > email: ayy...@wso2.com cell: +94 77 1681010
>>>> >
>>>> > On Sat, Mar 26, 2016 at 1:03 AM, Ruwan Yatawara <ruw...@wso2.com&

Re: [Architecture] Security Authentication Analytics

2016-03-28 Thread Srinath Perera
> Regards,
> Damith.
>
> --
> Software Engineer
> WSO2 Inc.; http://wso2.com
> <http://www.google.com/url?q=http%3A%2F%2Fwso2.com=D=1=AFQjCNEZvyc0uMD1HhBaEGCBxs6e9fBObg>
> lean.enterprise.middleware
>
> mobile: *+94728671315 <%2B94728671315>*
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [IoTS] Is it the right thing to package "analytics" scripts with a plugin implementation?

2016-03-28 Thread Srinath Perera
Hi Prabath,

Agree on your points. I am +1 to have analytics separate.

--Srinath

On Thu, Mar 24, 2016 at 5:06 PM, Prabath Abeysekera <praba...@wso2.com>
wrote:

> Hi Srinath,
>
> On Thu, Mar 24, 2016 at 1:48 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi All,
>>
>> Prabath, if analytics is not in the device type, where should it go?
>>
>
> IMO, we can have it as a separate project structure with a separate
> packaging as part of the underlying tooling platform. Just as how things
> are done when composing a typical enterprise integration scenario involving
> ESB configs, DSS configs, etc, one may create a multi-module maven project
> with device management plug-in skeleton as one module and analytics as
> another module. When built, there will be two resultant packagings one to
> be deployed into device management nodes and the other to DAS. If the same
> developer is expected to take care of developing the device management
> plugin implementation and the analytics story, then we can may be define a
> composite maven project artifact, when clicked, will create the
> aforementioned structure.
>
> WDYT?
>
>
>>
>> I do not agree with your observation that functional and
>> non-functional parts should be separate. It helps to have it separate if
>> non-functional parts can be reused with other devices. However, that is not
>> the case here.
>>
>
> IMO, it's not only reusability that defines it in this particular context.
> The fact that analytics can change and evolve independently from the device
> management plug-in implementation should also be considered as a valid
> requirement. This somewhat aligns with the first question you've raised
> below, I believe.
>
> The reason is, organizational analytics related requirements change over
> time depending on various business strategies that they might adopt, but
> the plug-in implementation might not. Also, the degree of analytics
> required too would change from one organization to another even if they
> deploy the same IoTS solution equipped with the same plug-in set. If we
> ship analytics bundled into the plug-in, we present things to the end-user
> as something tied into a plug-in imeplementation, which doesn't appear to
> be correct. So, we should have a mechanism that is not coupled to a plug-in
> implementation, yet functions in tandem with it, to cater that sort of
> requirements.
>
> For instance, let's imagine there exists Organizations Org1 and Org2. They
> both are interested in using IoTS, which has the capability to handle
> device DX, produced by a manufacturer ManX. There's a good chance that the
> analytics needed for Org1 and Org2 might be different? So, a single
> solution that is shipped with built-in analytics(bundled with the plug-in)
> might not be suitable for both organizations. Instead, they would want to
> customize the KPIs to suit their business requirements. One might think
> this is hypothetical, but I thought there's a good enough probability that
> this might be the case in a real deployment.
>
>
>>
>> I think, as sumedha said, we should ship what we have and change based on
>> how it is used.
>>
>
> Anyway, if everyone thinks, we should wait for industry feedback to make a
> decision on this, yeah of course, we'll do it that way.
>
>
>>
>> If we choose to ship analytics bundled, we need to answer two questions.
>>
>>
>>1. Would plugin developer will do all analytics or do we have use
>>cases where end users will also create their own analytics? If they want 
>> to
>>build their own analytics, we need think through how it work.
>>
>> To my understanding, both scenarios are possible.
>
>
>>
>>1. There will be analytics that use data from multiple device types?
>>How would such cases handled?
>>
>> Well, in the case of mobile devices, this certainly is a valid scenario
> as people usually prefer to have an aggregated view of how devices as a
> whole behave within an organization adhering compliance rules, etc. Even in
> other typical IoT contexts, when it comes implementing monitoring/analytics
> upon groups that have multiple device types in them i.e. a smart home
> solution, you essentially need to have something similar, IMO.
>
> IoT experts, please do comment in and correct me if I'm wrong.
>
> Cheers,
> Prabath
>
> --Srinath
>>
>>
>>
>> On Thu, Mar 24, 2016 at 12:41 PM, Prabath Abeysekera <praba...@wso2.com>
>> wrote:
>>
>>> Hi Sumedha,
>>>
>>> IMO, the device management plug-in layer does show what features can be
>>> controlled,

Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-28 Thread Srinath Perera
Hi Ayyoob, Ruwan, Suho,

I think where to handle this (within DAS vs. at a higher-level API in APIM
or the IoT server) is decided by what level of user customization is needed
for the analytics queries.

If we need individual users to write their own queries as well, then we need
to build user support into DAS. However, if queries can be changed by tenant
admins only, doing this via a high-level API is OK.

Where do APIM and the IoT server stand on this?

--Srinath



On Sat, Mar 26, 2016 at 9:28 AM, Ayyoob Hamza <ayy...@wso2.com> wrote:
>
> Hi,
> Yes we require user level separation but just wondered whether we need
this separation in DAS level or whether can we enforce it device type API
level. This is because IMO, DAS provides a low level API which we cannot
expose it directly so we need a proxy that maps this to a high level API to
expose the data. So wondered whether can we do the restriction in the high
level API endpoint. However if the user level separation is required across
products such as APIM then I guess the separation should be in the DAS
level.
>
> Further just wanted to bring another concern that we have, we have a
requirement on device sharing so what this mean is that we can share the
data of a device to another, which means a drill down permission model,
where the separation would be in user, device level(eg: Does the user X has
permission to view the data of the device d of user Y). So in this case I
wonder whether this needs to be handled in DAS level? rather I see that it
needs to be handled in the high level API that we provide to expose the
data.
>
> Thanks
>
>
> Ayyoob Hamza
> Software Engineer
> WSO2 Inc.; http://wso2.com
> email: ayy...@wso2.com cell: +94 77 1681010
>
> On Sat, Mar 26, 2016 at 1:03 AM, Ruwan Yatawara <ruw...@wso2.com> wrote:
>>
>> Hi Suho,
>>
>> Yes, you are right. We require user level isolation in IoT Server.
>>
>> Thanks and Regards,
>>
>> Ruwan Yatawara
>>
>> Senior Software Engineer,
>> WSO2 Inc.
>>
>> email : ruw...@wso2.com
>> mobile : +94 77 9110413
>> blog : http://ruwansrants.blogspot.com/
>> www: :http://wso2.com
>>
>>
>> On Fri, Mar 25, 2016 at 11:55 PM, Sriskandarajah Suhothayan <
s...@wso2.com> wrote:
>>>
>>> Hi
>>>
>>> User level isolation is needed for the IoT server, as in the IoT server
context user registers a device and use that, hence he/she should only be
able to see his/her devices and not any other users devices or data.
>>>
>>> @Pabath & Sumedha correct me if I'm wrong.
>>>
>>> Regards
>>> Suho
>>>
>>> On Fri, Mar 25, 2016 at 9:02 AM, Srinath Perera <srin...@wso2.com>
wrote:
>>>>
>>>> For the data published from APIM and IoT servers, what kind of
isolation do we need?
>>>>
>>>> Option 1: Tenant level - DAS already has this. However, this means
that multiple users (e.g. publishers, subscribers, or IoT users) can see
other people's stats of they are in the same tenant
>>>>
>>>> Option 2: User level - DAS does not have this concept yet.
>>>>
>>>> Also a related question is that if user add their own queries, at what
level they are isolated.
>>>>
>>>> --Srinath
>>>>
>>>> --
>>>> 
>>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>>> Site: http://home.apache.org/~hemapani/
>>>> Photos: http://www.flickr.com/photos/hemapani/
>>>> Phone: 0772360902
>>>
>>>
>>>
>>>
>>> --
>>> S. Suhothayan
>>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>>> WSO2 Inc. http://wso2.com
>>> lean . enterprise . middleware
>>>
>>> cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
>>> twitter: http://twitter.com/suhothayan | linked-in:
http://lk.linkedin.com/in/suhothayan
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>



--

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Generating a single Event per Message from our Products

2016-03-24 Thread Srinath Perera
As per the meeting (Participants: Sanjiva, Shankar, Sumedha, Anjana, Miyuru,
Seshika, Suho, Nirmal, Nuwan):

Currently we generate several events per message from our products. For
example, when a message hits APIM, the following events will be generated.


   1. One from HTTP level
   2. 1-2 from authentication and authorization logic
   3. 1 from Throttling
   4. 1 for ESB level stats
   5. 2 for request and response

If APIM is handling 10K TPS, that means DAS is receiving events at about
80K TPS. Although the data bridge that transfers events is fast, writing to
disk (via RDBMS or HBase) is a problem. We can scale HBase; however, that
leads to a scenario where an APIM deployment needs a very large DAS
deployment.

We decided to figure out a way to collect all the events and send a single
event to DAS. Basically, the idea is to extend the data publisher library so
that the user can keep adding readings to it, and it will collect the
readings and send them over as a single event to the server.

However, some flows might terminate in the middle due to failures. There
are two solutions:


   1. Get the product to call a flush from a finally block
   2. Get the library to auto-flush collected readings every few seconds

I feel #2 is simpler.
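
A minimal sketch of what such an accumulating publisher could look like (the
Consumer callback, the thresholds, and the String readings are my own
assumptions, not the actual data-bridge agent API):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Collects the readings produced while handling a message and flushes them
// to DAS as one combined event.
public class AccumulatingPublisher {
    private final List<String> readings = new ArrayList<>();
    private final Consumer<List<String>> publish;   // sends one combined event to DAS
    private final int maxReadings;
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    public AccumulatingPublisher(Consumer<List<String>> publish, int maxReadings, long flushIntervalMs) {
        this.publish = publish;
        this.maxReadings = maxReadings;
        // Option #2 above: auto-flush whatever has accumulated every few
        // seconds, so flows that terminate mid-way do not lose their readings.
        timer.scheduleAtFixedRate(this::flush, flushIntervalMs, flushIntervalMs, TimeUnit.MILLISECONDS);
    }

    public synchronized void addReading(String reading) {
        readings.add(reading);
        if (readings.size() >= maxReadings) {
            flush();   // size-based flush
        }
    }

    public synchronized void flush() {
        if (!readings.isEmpty()) {
            publish.accept(new ArrayList<>(readings));   // one event per accumulated batch
            readings.clear();
        }
    }
}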

Do we have any concerns about going to this model?

Suho, Anjana, we need to think about how to do this with our stream
definitions, as we currently force users to define the streams beforehand.

--Srinath





-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC:Security Challenges in Analytics Story

2016-03-24 Thread Srinath Perera
As per the meeting (Sanjiva, Shankar, Sumedha, Anjana, Miyuru, Seshika, Suho,
Nirmal, Nuwan):

We need APIM and the IoT Server to be able to publish events as the "system
user", but ask DAS to place the data under Ann's (the related user's) account.

We need devices to be able to *directly* send an event to DAS with an OAuth
token.
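
A rough sketch of a device pushing an event directly with a token follows.
The endpoint URL and the JSON payload are made up for illustration; the point
is only the Authorization: Bearer header, from which DAS can work out the
owner and the allowed streams.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DeviceEventPublisher {
    public static void main(String[] args) throws Exception {
        String token = System.getenv("DEVICE_OAUTH_TOKEN");   // token issued to Ann's device
        byte[] payload = "{\"streamName\":\"temperature\",\"payload\":[27.4]}"
                .getBytes(StandardCharsets.UTF_8);            // illustrative event body

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://das.example.com/endpoints/temperature").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + token);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload);    // send the event
        }
        System.out.println("DAS responded: " + conn.getResponseCode());
    }
}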

The following picture describes the full scenario:

[image: DASSecuirtyScenarios.png]
--Srinath

On Thu, Mar 24, 2016 at 9:38 AM, Srinath Perera <srin...@wso2.com> wrote:

> This thread described the authorization issue when reading data for
> gadgets ( as I mentioned in Dashboard server product council).
>
> When IoT server/ API manager publish events, it need to tell DAS whose
> data it is. ( however, server cannot login using that user, as then it will
> need to keep passwords and also end up having to keep too many
> connections).
>
> Gadget, when requesting data, has to tell DAS on whose behalf it is
> requesting the data. DAS has to verify and show visible data. ( also DAS
> data API need to be secured so that random users cannot call it and look at
> other people's data).
>
> --Srinath
>
>
>
>
>
>
>
>
>
> On Sat, Mar 19, 2016 at 9:13 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Yes, and Ann can also generate a token and share with Smith, to send with
>> his requests.
>>
>> Also, IMO the most Dashboard requests would come from a browser ( in a
>> phone or PC), not from simple device. So storing or locating the token
>> should not be a problem.
>>
>> On Fri, Mar 18, 2016 at 3:21 PM, Chathura Ekanayake <chath...@wso2.com>
>> wrote:
>>
>>>
>>>
>>>
>>>> I think we should go for a taken based approach (e.g. OAuth) to handle
>>>> these scenarios. Following are few ideas
>>>>
>>>>
>>>>1.
>>>>
>>>>Using a token ( Ann attesting system user can do publish/ access to
>>>>this stream on her behalf), Ann let the “system user“ publish data into
>>>>Ann’s account
>>>>
>>>>
>>> If a device can store a token, Ann can generate a token with necessary
>>> scope (to access Ann's event store) and store the token in the device
>>> itself. In that case, device can send the token with each event, so that
>>> IoT platform can decide permissions based on the token.
>>>
>>>
>>>>
>>>>1.
>>>>
>>>>When we give user Smith access to a gadget, we generate a token,
>>>>which he will send when he is accessing the gadget, which the gadget 
>>>> will
>>>>send to the DAS backend to get access to correct tables
>>>>2.
>>>>
>>>>Same token can be used for API access as well
>>>>3.
>>>>
>>>>We need to manage the tokens issued to each user so this happen
>>>>transparently to the end user as much as possible.
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://people.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> 
> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
> Site: http://home.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Log Analyzer Analytics for API Manager

2016-03-24 Thread Srinath Perera
As per the meeting (Sanjiva, Shankar, Sumedha, Anjana, Miyuru, Seshika, Suho,
Nirmal, Nuwan, Malith, SajithP, SameeraR .. pls add):


   1. Drop #3
   2. Other than that, the above use cases are OK.
   3. Need to integrate this with the APIM Admin dashboard; for ESB
   analytics, it can run with analytics4ESB.
   4. We need to polish the UIs and make them look professional.

Miyuru, add anything I missed.





On Thu, Mar 24, 2016 at 1:53 PM, Miyuru Dayarathna <miyu...@wso2.com> wrote:

> Hi Srinath,
>
> Yes, all the above mentioned use cases can be implemented with the logs
> available from APIM and ESB.
>
> --
> Thanks,
> Miyuru Dayarathna
> Senior Technical Lead
> Mobile: +94713527783
> Blog: http://miyurublog.blogspot.com
>
> On Thu, Mar 17, 2016 at 10:56 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Does all these logs available for APIM and ESB?
>>
>> --Srinath
>>
>> On Wed, Mar 16, 2016 at 2:37 PM, Miyuru Dayarathna <miyu...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> We have created a set of use cases to be implemented as part of the
>>> developments of Log Analyzer Analytics for API Manager. Specifically we
>>> will be creating a set of charts which will provide us information on the
>>> following use cases on API Manager.
>>>
>>> 1. Number of errors (classes/messages) thrown in a specific date period
>>> 2. New artifact deployed
>>> 3. How long do servers take to start
>>> 4. How long the servers were up
>>> 5. Invalid login attempts
>>> 6. Message processing errors (Number of API failures)
>>> 7. Access token validation errors (not sent, invalid, revoked)
>>> 8. Access token expiration errors
>>>
>>> The attached PDF file elaborates on these use cases. Let me know if you
>>> have any comments on them.
>>>
>>> --
>>> Thanks,
>>> Miyuru Dayarathna
>>> Senior Technical Lead
>>> Mobile: +94713527783
>>> Blog: http://miyurublog.blogspot.com
>>>
>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://people.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> Thanks,
> Miyuru Dayarathna
> Senior Technical Lead
> Mobile: +94713527783
> Blog: http://miyurublog.blogspot.com
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Defining UX Pasona for the Platform

2016-03-24 Thread Srinath Perera
Ping, any updates?

On Fri, Feb 26, 2016 at 5:42 AM, Dakshika Jayathilaka <daksh...@wso2.com>
wrote:

> Sure,
>
> We'll try to come up with main personas that we can use. IMHO there may be
> key personas + sub levels. I'll schedule a meeting once we complete first
> draft.
>
> Thank you,
>
> Regards,
>
> *Dakshika Jayathilaka*
> PMC Member & Committer of Apache Stratos
> Senior Software Engineer
> WSO2, Inc.
> lean.enterprise.middleware
> 0771100911
>
> On Thu, Feb 25, 2016 at 8:54 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Dakshika,
>>
>> As per our chat, most of the time, we will use a common set of Pasona's
>> for UX user stories (e.g. developer, DevOps Person, Business User etc). IMO
>> defining them before will save lot of time. Then for most use cases, we can
>> pick one from the list.
>>
>> Could UX team define Pasona's? When we have the first cut, lets do a
>> review.
>>
>> --Srinath
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://people.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [IoTS] Is it the right thing to package "analytics" scripts with a plugin implementation?

2016-03-24 Thread Srinath Perera
he developers customize the
>>>> same to come up with the desired plug-in implementations and package things
>>>> up to be able to push it to the CDM-F.
>>>>
>>>> Right now the model being used is that, all the aforementioned
>>>> resources such as the plugin implementation, analytics scripts, etc are
>>>> packed up as a single archive. IMO, having analytics scripts bundled up
>>>> with the rest of the bits is not a good approach depending on a few 
>>>> reasons.
>>>>
>>>> 1. When it comes real production deployments, these analytics scripts
>>>> will be deployed in a separate DAS cluster as opposed to how things are
>>>> when the same is done within the scope of a single server. So, if the two
>>>> components are running separately, there's no point packaging everything up
>>>> as a single archive as we'd have to break things down to two separate
>>>> packagings (a) to be deployed into IoTS (b) to be deployed into DAS.
>>>>
>>>> 2. Analytics, I believe, primarily is a non-functional requirement.
>>>> Therefore, bundling non-functional bits with functional bits is not a good
>>>> practice.
>>>>
>>>> 3. If we ship analytics scripts with a particular plug-in, the natural
>>>> expectation is that it would only cover specific bits bound to the scope of
>>>> particular plug-in. However, in a practical environment, what majority of
>>>> the folks are looking at are much more advanced and composite KPIs and
>>>> related analytics to be able to help organizational strategies, etc. So it
>>>> is highly likely that people might generally be interested in brining in
>>>> analytics as a separate step relieving the need to pack the scripts, etc
>>>> with the functional bits. i.e. plug-in implementation.
>>>>
>>>>
>>>> Instead what I propose is to have a separate "project type" or
>>>> something similar as part of the underlying tooling platform, which can
>>>> effectively be used to compose all analytics related scripts, etc. We can
>>>> then package it up separately and deploy into the relevant DAS clusters to
>>>> get the analytics up and running.
>>>>
>>>>
>>>> Some of the stuff I've brought up might be subjective, but I am of the
>>>> opinion that we need to remove analytics related scripts from the plug-in
>>>> skeleton.
>>>>
>>>> WDYT?
>>>>
>>>>
>>>> Cheers,
>>>> Prabath
>>>> --
>>>> Prabath Abeysekara
>>>> Technical Lead
>>>> WSO2 Inc.
>>>> Email: praba...@wso2.com
>>>> Mobile: +94774171471
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> /sumedha
>> m: +94 773017743
>> b :  bit.ly/sumedha
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Prabath Abeysekara
> Technical Lead
> WSO2 Inc.
> Email: praba...@wso2.com
> Mobile: +94774171471
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Session Affinity Alternatives

2016-03-24 Thread Srinath Perera
Hi Frank,

Sorry, maybe we have not thought this through.

When I said "state transfer", I meant "full session state in the message
(REST-style)".

Q1: What can we do at the platform level to support "state transfer"?

Q1, Case 1: for stateful apps and services written by end users - the end
user can choose to use state transfer, and the platform can do only minimal
work (please correct me if I am missing something).


Q1, Case 2: What are we doing for stateful apps/services that are part of
the platform? Here we can choose to do "state transfer".

Q2: How widely used is state transfer? Would it make the message too big and
make simple cases complex?


--Srinath



On Wed, Mar 23, 2016 at 3:37 PM, Frank Leymann <fr...@wso2.com> wrote:

> OK...
>
> But B.(ii) has disadvantages (performance, garbage collection,...). We
> need to argue that the benefit of platform/server supported state transfer
> outweighs the disadvantages.
>
>
> Best regards,
> Frank
>
> 2016-03-23 7:04 GMT+01:00 Srinath Perera <srin...@wso2.com>:
>
>> Hi Frank
>>
>> Agreed,
>>
>> IMO, we should go for session affinity model ( Option A) as default, and
>> let him enable Option B (II) if he need.
>>
>> Regarding B (I) "state transfer", I think user can choose to do it ( as
>> it does not need any help from the platform). That is zero work for us.
>>
>> --Srinath
>>
>> On Mon, Mar 21, 2016 at 5:45 PM, Frank Leymann <fr...@wso2.com> wrote:
>>
>>> Hi Srinath,
>>>
>>> I am not at all arguing in favor of session replication or session
>>> persistence either.
>>>
>>> I think in many situation it's fine that a client fails when a session
>>> dropped and the client has to reconnect explicitly - this is option A.
>>>
>>> Otherwise (as a second option B), "state transfer" is to be used.  And
>>> this can be achieved by (i) including the full session state in the message
>>> (REST-style), or (ii) including only the session id in the message and hold
>>> the full state in a DB (which is used for session restore if server
>>> affinity has been lost).
>>>
>>> Option B.(ii) has quite a bunch of disadvantages: a DB is involved with
>>> obvious performance impact; garbage collection of sessions no longer
>>> needed; DBMS as single point-of-failure;
>>>
>>>
>>>
>>> Best regards,
>>> Frank
>>>
>>> 2016-03-18 7:08 GMT+01:00 Srinath Perera <srin...@wso2.com>:
>>>
>>>> Hi Frank,
>>>>
>>>> Proposal is when the session parameter has changed, framework (e.g.
>>>> Servelt runtime, or MSS framework) will write the update to the disk
>>>> asynchronsly.
>>>>
>>>> Since we are a middleware platform, impact of losing a session depends
>>>> on the kind of the application end user is running.
>>>>
>>>> However, AFAIK session replication or pressitance that is in WSO2 AS
>>>> was rarely used. ( Azeez, please correct me if I am wrong).
>>>>
>>>> --Srinath
>>>>
>>>> On Thu, Mar 17, 2016 at 11:42 PM, Frank Leymann <fr...@wso2.com> wrote:
>>>>
>>>>> Sorry for jumping in late in the discussion
>>>>>
>>>>> Session affinity in general is bad (scalability, HA) - I guess that's
>>>>> what we all agree on.  But in certain scenarios, sticky sessions may be
>>>>> fine.  Thus, what is the underlying scenario in the case we discuss?
>>>>>
>>>>> As far as I understand, persisting session state requires the
>>>>> application to be aware of this. Or is the proposal that provide an
>>>>> environment that reconstructs the session state on behalf of the
>>>>> application?  As Srinath points out we are loosing data if a session 
>>>>> aborts
>>>>> and the application is not a transaction: is that critical in our 
>>>>> scenario?
>>>>>
>>>>>
>>>>> Best regards,
>>>>> Frank
>>>>>
>>>>> 2016-03-17 10:21 GMT+01:00 Srinath Perera <srin...@wso2.com>:
>>>>>
>>>>>> Let's not do session replication. It is very hard to make it work
>>>>>> IMO.
>>>>>>
>>>>>> I would like to propose a variation to Azeez's version.
>>>>>>
>>>>>> We can do local session + session affinity + asynchronousl

Re: [Architecture] Incremental Processing Support in DAS

2016-03-24 Thread Srinath Perera
Hi Sachith, Anjana,

+1 for the backend model.

Are we handling the case where, at the time of the last run, only 25 minutes
of data of the current window had arrived? Basically, the next run has to
re-run that last hour and update the value.

When does the one-hour counting start? Is it from the moment the server
starts? That would be nondeterministic across restarts. I think we need to
either start from a known point (e.g. midnight) or let the user configure it.
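
To make the starting point deterministic, the window boundary can be aligned
to midnight rather than to server start. A minimal sketch (the one-hour
window and the class name are my own assumptions, not the actual DAS
implementation):

import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class WindowAlignment {
    static final long WINDOW_MS = Duration.ofHours(1).toMillis();

    // Align a timestamp down to the start of its one-hour window, counting
    // windows from midnight of that day, not from the moment the server started.
    static long windowStart(long epochMillis, ZoneId zone) {
        ZonedDateTime t = Instant.ofEpochMilli(epochMillis).atZone(zone);
        long midnight = t.toLocalDate().atStartOfDay(zone).toInstant().toEpochMilli();
        return midnight + ((epochMillis - midnight) / WINDOW_MS) * WINDOW_MS;
    }

    public static void main(String[] args) {
        long start = windowStart(System.currentTimeMillis(), ZoneId.systemDefault());
        // The last, partially processed window (e.g. only 25 minutes of data so
        // far) must be re-run from this boundary on the next incremental run.
        System.out.println("Re-process from: " + Instant.ofEpochMilli(start));
    }
}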

I am a bit concerned about the syntax, though. This only works for a very
specific type of query (one that includes an aggregate and a group by). What
happens if the user does this with a different query? Can we give a clear
error message?

--Srinath

On Mon, Mar 21, 2016 at 5:15 PM, Sachith Withana <sach...@wso2.com> wrote:

> Hi all,
>
> We are adding incremental processing capability to Spark in DAS.
> As the first stage, we added time slicing to Spark execution.
>
> Here's a quick introduction into that.
>
> *Execution*:
>
> In the first run of the script, it will process all the data in the given
> table and store the last processed event timestamp.
> Then from the next run onwards it would start processing starting from
> that stored timestamp.
>
> Until the query that contains the data processing part, completes, last
> processed event timestamp would not be overridden with the new value.
> This is to ensure that the data processing for the query wouldn't have to
> be done again, if the whole query fails.
> This is ensured by adding a commit query after the main query.
> Refer to the Syntax section for the example.
>
> *Syntax*:
>
> In the spark script, incremental processing support has to be specified
> per table, this would happen in the create temporary table line.
>
> ex: CREATE TEMPORARY TABLE T1 USING CarbonAnalytics options (tableName
> "test",
> *incrementalProcessing "T1,3600");*
>
> INSERT INTO T2 SELECT username, age GROUP BY age FROM T1;
>
> INC_TABLE_COMMIT T1;
>
> The last line is where it ensures the processing took place successfully
> and replaces the lastProcessed timestamp with the new one.
>
> *TimeWindow*:
>
> To do the incremental processing, the user has to provide the time window
> per which the data would be processed.
> In the above example. the data would be summarized in *1 hour *time
> windows.
>
> *WindowOffset*:
>
> Events might arrive late that would belong to a previous processed time
> window. To account to that, we have added an optional parameter that would
> allow to
> process immediately previous time windows as well ( acts like a buffer).
> ex: If this is set to 1, apart from the to-be-processed data, data related
> to the previously processed time window will also be taken for processing.
>
>
> *Limitations*:
>
> Currently, multiple time windows cannot be specified per temporary table
> in the same script.
> It would have to be done using different temporary tables.
>
>
>
> *Future Improvements:*
> - Add aggregation function support for incremental processing
>
>
> Thanks,
> Sachith
> --
> Sachith Withana
> Software Engineer; WSO2 Inc.; http://wso2.com
> E-mail: sachith AT wso2.com
> M: +94715518127
> Linked-In: <http://goog_416592669>
> https://lk.linkedin.com/in/sachithwithana
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RFC:Security Challenges in Analytics Story

2016-03-23 Thread Srinath Perera
This thread described the authorization issue when reading data for gadgets
(as I mentioned in the Dashboard Server product council).

When the IoT server / API Manager publishes events, it needs to tell DAS
whose data it is. (However, the server cannot log in as that user, as then it
would need to keep passwords and also end up keeping too many connections.)

The gadget, when requesting data, has to tell DAS on whose behalf it is
requesting the data. DAS has to verify this and show only the visible data.
(The DAS data API also needs to be secured so that random users cannot call
it and look at other people's data.)

--Srinath









On Sat, Mar 19, 2016 at 9:13 PM, Srinath Perera <srin...@wso2.com> wrote:

> Yes, and Ann can also generate a token and share with Smith, to send with
> his requests.
>
> Also, IMO the most Dashboard requests would come from a browser ( in a
> phone or PC), not from simple device. So storing or locating the token
> should not be a problem.
>
> On Fri, Mar 18, 2016 at 3:21 PM, Chathura Ekanayake <chath...@wso2.com>
> wrote:
>
>>
>>
>>
>>> I think we should go for a taken based approach (e.g. OAuth) to handle
>>> these scenarios. Following are few ideas
>>>
>>>
>>>1.
>>>
>>>Using a token ( Ann attesting system user can do publish/ access to
>>>this stream on her behalf), Ann let the “system user“ publish data into
>>>Ann’s account
>>>
>>>
>> If a device can store a token, Ann can generate a token with necessary
>> scope (to access Ann's event store) and store the token in the device
>> itself. In that case, device can send the token with each event, so that
>> IoT platform can decide permissions based on the token.
>>
>>
>>>
>>>1.
>>>
>>>When we give user Smith access to a gadget, we generate a token,
>>>which he will send when he is accessing the gadget, which the gadget will
>>>send to the DAS backend to get access to correct tables
>>>2.
>>>
>>>Same token can be used for API access as well
>>>3.
>>>
>>>We need to manage the tokens issued to each user so this happen
>>>transparently to the end user as much as possible.
>>>
>>>
>>>
>>
>
>
> --
> 
> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
> Site: http://people.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Artifact deployment coordination support for C5

2016-03-23 Thread Srinath Perera
Kasun, I would go with JMS, as it gives event persistence and you will not
lose the events even if a listener is restarted.
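
As a rough sketch of option 1 with a persistent topic (ActiveMQ is used only
as an example broker; the topic name and the JSON status message are my own
assumptions):

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DeploymentStatusPublisher {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("carbon.deployment.status");

        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);   // broker keeps the message on disk

        // One status event per artifact deployment on this node.
        TextMessage status = session.createTextMessage(
                "{\"node\":\"gateway-1\",\"artifact\":\"foo.car\",\"state\":\"DEPLOYED\"}");
        producer.send(status);

        session.close();
        connection.close();
    }
}

Listeners that want to survive restarts without losing events should use
durable subscriptions (or a queue) so the broker holds the messages until
they reconnect.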

--Srinath.

On Wed, Mar 23, 2016 at 10:45 AM, KasunG Gajasinghe <kas...@wso2.com> wrote:

> Hi,
>
> Given several issues we discovered with automatic artifact synchronization
> with DepSync in C4, we have discussed how to approach this problem in C5.
>
> We are thinking of not doing the automated artifact synchronization in C5.
> Rather, users should use their own mechanism to synchronize the artifacts
> across a cluster. Common approaches are RSync as a cron job and shell
> scripts.
>
> But, it is vital to know the artifact deployment status of the nodes in
> the entire cluster from a central place. For that, we are providing this
> deployment coordination feature. There will be two ways to use this.
>
> 1. JMS based publishing - the deployment status will be published by each
> node to a jms topic/queue
>
> 2. Log based publishing - publish the logs by using a syslog appender [1]
> or our own custom appender to a central location.
>
> The log publishing may not be limited to just the deployment coordination.
> In a containerized deployment, the carbon products will run in disposable
> containers. But sometimes, the logs need to be backed up for later
> reference. This will help with that.
>
> Any thoughts on this matter?
>
> [1]
> https://logging.apache.org/log4j/2.x/manual/appenders.html#SyslogAppender
>
> Thanks,
> KasunG
>
>
>
> --
> ~~--~~
> Sending this mail via my phone. Do excuse any typo or short replies
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Session Affinity Alternatives

2016-03-23 Thread Srinath Perera
Hi Frank

Agreed,

IMO, we should go for the session affinity model (Option A) as the default,
and let the user enable Option B (ii) if needed.

Regarding B (i), "state transfer", I think the user can choose to do it (as
it does not need any help from the platform). That is zero work for us.

--Srinath

On Mon, Mar 21, 2016 at 5:45 PM, Frank Leymann <fr...@wso2.com> wrote:

> Hi Srinath,
>
> I am not at all arguing in favor of session replication or session
> persistence either.
>
> I think in many situation it's fine that a client fails when a session
> dropped and the client has to reconnect explicitly - this is option A.
>
> Otherwise (as a second option B), "state transfer" is to be used.  And
> this can be achieved by (i) including the full session state in the message
> (REST-style), or (ii) including only the session id in the message and hold
> the full state in a DB (which is used for session restore if server
> affinity has been lost).
>
> Option B.(ii) has quite a bunch of disadvantages: a DB is involved with
> obvious performance impact; garbage collection of sessions no longer
> needed; DBMS as single point-of-failure;....
>
>
>
> Best regards,
> Frank
>
> 2016-03-18 7:08 GMT+01:00 Srinath Perera <srin...@wso2.com>:
>
>> Hi Frank,
>>
>> Proposal is when the session parameter has changed, framework (e.g.
>> Servelt runtime, or MSS framework) will write the update to the disk
>> asynchronsly.
>>
>> Since we are a middleware platform, impact of losing a session depends on
>> the kind of the application end user is running.
>>
>> However, AFAIK session replication or pressitance that is in WSO2 AS was
>> rarely used. ( Azeez, please correct me if I am wrong).
>>
>> --Srinath
>>
>> On Thu, Mar 17, 2016 at 11:42 PM, Frank Leymann <fr...@wso2.com> wrote:
>>
>>> Sorry for jumping in late in the discussion
>>>
>>> Session affinity in general is bad (scalability, HA) - I guess that's
>>> what we all agree on.  But in certain scenarios, sticky sessions may be
>>> fine.  Thus, what is the underlying scenario in the case we discuss?
>>>
>>> As far as I understand, persisting session state requires the
>>> application to be aware of this. Or is the proposal that provide an
>>> environment that reconstructs the session state on behalf of the
>>> application?  As Srinath points out we are loosing data if a session aborts
>>> and the application is not a transaction: is that critical in our scenario?
>>>
>>>
>>> Best regards,
>>> Frank
>>>
>>> 2016-03-17 10:21 GMT+01:00 Srinath Perera <srin...@wso2.com>:
>>>
>>>> Let's not do session replication. It is very hard to make it work IMO.
>>>>
>>>> I would like to propose a variation to Azeez's version.
>>>>
>>>> We can do local session + session affinity + asynchronously save the
>>>> session state to a DB
>>>>
>>>> If a node cannot find the session, it will go and reload session from
>>>> the DB. ( This should happen if the node has failed, or we have moved
>>>> session away due to high load).
>>>>
>>>> With this model, there is a chance that you might loose last update to
>>>> the session. However, that will be very rare. I would keep "asynchronously
>>>> save the session state to a DB" off by default, so user will enable it for
>>>> complex scenarios.
>>>>
>>>> --Srinath
>>>>
>>>>
>>>>
>>>> On Sat, Mar 12, 2016 at 6:25 PM, Afkham Azeez <az...@wso2.com> wrote:
>>>>
>>>>> Of course the simplest solution is similar to what Tomcat does, local
>>>>> sessions (no replication) & in a cluster, have session affinity configured
>>>>> at the load balancer, so that should be the default. If the node that had
>>>>> the session dies, the clients connected to that instance would get errors
>>>>> or have to login again. For HA deployments, we would need session
>>>>> replication or session persistence.
>>>>>
>>>>> On Sat, Mar 12, 2016 at 12:58 PM, Sanjiva Weerawarana <
>>>>> sanj...@wso2.com> wrote:
>>>>>
>>>>>> Azeez we cannot have a model where every simple server (cluster)
>>>>>> deployment requires Redis.
>>>>>>
>>>>>> Please indicate what you think the right solution is .. its not clear
>>&g

Re: [Architecture] Log Analyzer Analytics for API Manager

2016-03-20 Thread Srinath Perera
Are all these logs available for APIM and ESB?

--Srinath

On Wed, Mar 16, 2016 at 2:37 PM, Miyuru Dayarathna  wrote:

> Hi,
>
> We have created a set of use cases to be implemented as part of the
> developments of Log Analyzer Analytics for API Manager. Specifically we
> will be creating a set of charts which will provide us information on the
> following use cases on API Manager.
>
> 1. Number of errors (classes/messages) thrown in a specific date period
> 2. New artifact deployed
> 3. How long do servers take to start
> 4. How long the servers were up
> 5. Invalid login attempts
> 6. Message processing errors (Number of API failures)
> 7. Access token validation errors (not sent, invalid, revoked)
> 8. Access token expiration errors
>
> The attached PDF file elaborates on these use cases. Let me know if you
> have any comments on them.
>
> --
> Thanks,
> Miyuru Dayarathna
> Senior Technical Lead
> Mobile: +94713527783
> Blog: http://miyurublog.blogspot.com
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Session Affinity Alternatives

2016-03-19 Thread Srinath Perera
Hi Frank,

The proposal is that when a session parameter has changed, the framework
(e.g. Servlet runtime, or MSS framework) will write the update to disk
asynchronously.
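
A minimal sketch of the asynchronous write part, using a standard servlet
session attribute listener. The SessionStore interface is a placeholder for
whatever DB-backed store we pick; registering the listener and reloading a
missing session from the store are left out.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionAttributeListener;
import javax.servlet.http.HttpSessionBindingEvent;

// Placeholder for the DB-backed store (e.g. an RDBMS table keyed by session id).
interface SessionStore {
    void save(String sessionId, String attributeName, Object value);
}

// Writes session attribute changes to the store asynchronously; reads stay local.
public class AsyncSessionPersister implements HttpSessionAttributeListener {
    private final SessionStore store;
    private final ExecutorService writer = Executors.newSingleThreadExecutor();

    public AsyncSessionPersister(SessionStore store) {
        this.store = store;
    }

    @Override
    public void attributeAdded(HttpSessionBindingEvent event) { persist(event); }

    @Override
    public void attributeReplaced(HttpSessionBindingEvent event) { persist(event); }

    @Override
    public void attributeRemoved(HttpSessionBindingEvent event) { persist(event); }

    private void persist(HttpSessionBindingEvent event) {
        HttpSession session = event.getSession();
        String name = event.getName();
        Object value = session.getAttribute(name);   // null once the attribute was removed
        // Fire-and-forget: the request thread never blocks on the DB, so at
        // worst only the very last update is lost if the node dies.
        writer.submit(() -> store.save(session.getId(), name, value));
    }
}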

Since we are a middleware platform, the impact of losing a session depends on
the kind of application the end user is running.

However, AFAIK the session replication and persistence that is in WSO2 AS was
rarely used. (Azeez, please correct me if I am wrong.)

--Srinath

On Thu, Mar 17, 2016 at 11:42 PM, Frank Leymann <fr...@wso2.com> wrote:

> Sorry for jumping in late in the discussion
>
> Session affinity in general is bad (scalability, HA) - I guess that's what
> we all agree on.  But in certain scenarios, sticky sessions may be fine.
> Thus, what is the underlying scenario in the case we discuss?
>
> As far as I understand, persisting session state requires the application
> to be aware of this. Or is the proposal that provide an environment that
> reconstructs the session state on behalf of the application?  As Srinath
> points out we are loosing data if a session aborts and the application is
> not a transaction: is that critical in our scenario?
>
>
> Best regards,
> Frank
>
> 2016-03-17 10:21 GMT+01:00 Srinath Perera <srin...@wso2.com>:
>
>> Let's not do session replication. It is very hard to make it work IMO.
>>
>> I would like to propose a variation to Azeez's version.
>>
>> We can do local session + session affinity + asynchronously save the
>> session state to a DB
>>
>> If a node cannot find the session, it will go and reload session from the
>> DB. ( This should happen if the node has failed, or we have moved session
>> away due to high load).
>>
>> With this model, there is a chance that you might loose last update to
>> the session. However, that will be very rare. I would keep "asynchronously
>> save the session state to a DB" off by default, so user will enable it for
>> complex scenarios.
>>
>> --Srinath
>>
>>
>>
>> On Sat, Mar 12, 2016 at 6:25 PM, Afkham Azeez <az...@wso2.com> wrote:
>>
>>> Of course the simplest solution is similar to what Tomcat does, local
>>> sessions (no replication) & in a cluster, have session affinity configured
>>> at the load balancer, so that should be the default. If the node that had
>>> the session dies, the clients connected to that instance would get errors
>>> or have to login again. For HA deployments, we would need session
>>> replication or session persistence.
>>>
>>> On Sat, Mar 12, 2016 at 12:58 PM, Sanjiva Weerawarana <sanj...@wso2.com>
>>> wrote:
>>>
>>>> Azeez we cannot have a model where every simple server (cluster)
>>>> deployment requires Redis.
>>>>
>>>> Please indicate what you think the right solution is .. its not clear
>>>> to me.
>>>>
>>>> On Thu, Mar 10, 2016 at 7:34 PM, Afkham Azeez <az...@wso2.com> wrote:
>>>>
>>>>> Storing everything as cookies may not work always  and there could be
>>>>> sensitive runtime data that you don't want to save on the browser. In
>>>>> addition, when it comes to Java programming models, using the HTTP session
>>>>> to store serializable objects is the natural way of programming. Yes, your
>>>>> solution would work for certain cases, but it doesn't cover all cases.
>>>>>
>>>>> On Thu, Mar 10, 2016 at 6:48 PM, Joseph Fonseka <jos...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> I think we should go with 3 to keep things simple.
>>>>>>
>>>>>> To solve HA problem ( without session persistence or replication ).
>>>>>>
>>>>>> 1. Use SSO to authenticate the user.
>>>>>> 2. Use the session to store the claims return from IdP. ( Ex user_id,
>>>>>> roles )
>>>>>> 3. DO NOT store app specific data on the session instead use cookies,
>>>>>> local storage in the browser.
>>>>>> 4. In case the container get terminated and user get redirected to
>>>>>> another container it will initiate a SSO flow and repopulate a new 
>>>>>> session
>>>>>> with user claims. Then the app can continue as normal.
>>>>>>
>>>>>> WDYT?
>>>>>>
>>>>>> Regards
>>>>>> Jo
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>&

Re: [Architecture] RFC:Security Challenges in Analytics Story

2016-03-19 Thread Srinath Perera
Yes, and Ann can also generate a token and share it with Smith, to send with
his requests.

Also, IMO most dashboard requests would come from a browser (on a phone or
PC), not from a simple device. So storing or locating the token should not
be a problem.

On Fri, Mar 18, 2016 at 3:21 PM, Chathura Ekanayake 
wrote:

>
>
>
>> I think we should go for a taken based approach (e.g. OAuth) to handle
>> these scenarios. Following are few ideas
>>
>>
>>1.
>>
>>Using a token ( Ann attesting system user can do publish/ access to
>>this stream on her behalf), Ann let the “system user“ publish data into
>>Ann’s account
>>
>>
> If a device can store a token, Ann can generate a token with necessary
> scope (to access Ann's event store) and store the token in the device
> itself. In that case, device can send the token with each event, so that
> IoT platform can decide permissions based on the token.
>
>
>>
>>    2. When we give user Smith access to a gadget, we generate a token,
>>    which he will send when he is accessing the gadget, and which the gadget
>>    will send to the DAS backend to get access to the correct tables.
>>
>>    3. The same token can be used for API access as well.
>>
>>    4. We need to manage the tokens issued to each user so this happens
>>    transparently to the end user as much as possible.
>>
>>
>>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] RFC:Security Challenges in Analytics Story

2016-03-19 Thread Srinath Perera
I am using the IoT platform as an example. The same scenarios apply to other
use cases as well.

What we need

   1. User Ann logs into her IoT platform and registers a device.

   2. The IoT platform collects the data from the device and sends the data
   to DAS.

   3. DAS stores that data under Ann's account.

   4. Ann needs to be able to log into the IoT platform and see gadgets for
   her devices. At the same time, those gadgets cannot be seen by other
   users. If the gadget is a common gadget for all the users, the gadget
   will show relevant data based on the logged-in user.

   5. Ann logs into DAS and creates a gadget that uses data from her devices.

   6. Then she shares that gadget with the user Smith.

   7. User Smith comes and accesses the gadget.



Currently in DAS, when publishing data, users can log in as a tenant. That
data will be stored under a table assigned to that tenant (we append the
tenant name to the table name and handle that transparently).

Any user within the current tenant can access the data published to the
same tenant.

The above scenarios have the following problems.

   1. When publishing data to DAS, the IoT platform should either log in as
   Ann, or publish data as a "system" user. Logging in as Ann is not
   desirable because then the IoT server has to store Ann's username and
   password (and hence those of all its users).

   2. If the system user is used, when Ann accesses the inbuilt gadgets, the
   gadget needs to talk to DAS using the "system" user. Hence, gadget
   configurations need to save the system user's username and password.
   Furthermore, the gadget needs to check permissions for Ann before giving
   her access to the gadget (do we support gadget-level permissions?).
   However, sharing the "system" user allows her to access data of other
   users as well.

   3. It is not possible for Ann to develop her own gadget without getting
   access to the "system" account used to publish data to DAS. However,
   sharing the "system" user allows her to access data of other users as
   well.



Potential Solutions

I think we should go for a token-based approach (e.g. OAuth) to handle
these scenarios. Following are a few ideas.

   1. Using a token (Ann attesting that the "system user" can publish to /
   access this stream on her behalf), Ann lets the "system user" publish
   data into Ann's account.

   2. When we give user Smith access to a gadget, we generate a token, which
   he will send when he is accessing the gadget, and which the gadget will
   send to the DAS backend to get access to the correct tables.

   3. The same token can be used for API access as well.

   4. We need to manage the tokens issued to each user so this happens
   transparently to the end user as much as possible.


Of course, the above is only a high-level sketch. However, I am sure we can
figure out the details.
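
To make idea 1 a bit more concrete, here is a minimal sketch of a device (or
the IoT platform acting for Ann) publishing an event with the scoped token
Ann issued, so DAS can resolve the target account/table from the token
instead of from a shared "system" credential. The endpoint URL, payload, and
environment variable below are hypothetical; this is not the actual DAS
publisher API.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TokenBasedPublisher {
    public static void main(String[] args) throws Exception {
        // Hypothetical HTTP event receiver endpoint and JSON event.
        URL url = new URL("https://das.example.com:9443/endpoints/tempStream");
        String event = "{\"deviceId\":\"d1\",\"temperature\":27.4}";
        // Scoped OAuth token issued by Ann for this device/stream.
        String accessToken = System.getenv("ANN_DEVICE_TOKEN");

        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "application/json");
        // The token, not a shared "system" password, decides whose tables the data lands in.
        con.setRequestProperty("Authorization", "Bearer " + accessToken);
        try (OutputStream out = con.getOutputStream()) {
            out.write(event.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Publish returned HTTP " + con.getResponseCode());
    }
}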

Thanks
Srinath

Content is in the doc,
https://docs.google.com/document/d/1qBj5uvzLdALoORmeAwldou4O6uE8ZYR7DZ6djBt7yIw/edit
-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Session Affinity Alternatives

2016-03-19 Thread Srinath Perera


-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] The new disruptor based Netty transport is not working well for MSF4J

2016-03-14 Thread Srinath Perera
I talked to Ranawaka and Isuru in detail.

The Disruptor helps a lot when tasks are CPU bound. In such cases, it can
work with very few threads and reduce the overhead of communication between
threads.

However, when threads block for I/O, this advantage is reduced a lot. In
those cases, we need to have multiple Disruptor workers (batch processors).
We are doing that.

However, doing a test with a 500ms sleep is not fair IMO. Sleep often waits
longer than the given value, and with 200 threads it can only do 400 TPS
with a 500ms sleep. I think we should compare against a DB backend.

Shall we test the Disruptor and the Java worker pool model against a DB
backend?
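
For the I/O-bound case, the "multiple workers" arrangement would look roughly
like the sketch below, using the LMAX Disruptor DSL. The event class, pool
size, and the simulated blocking call are made up for illustration; this is
not the carbon-transport code.

import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.WorkHandler;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class BlockingWorkerPoolSketch {
    // Simple mutable event carried on the ring buffer.
    static class RequestEvent { String payload; }

    public static void main(String[] args) throws Exception {
        Disruptor<RequestEvent> disruptor =
                new Disruptor<>(RequestEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // Several WorkHandlers share the stream; each event goes to exactly one
        // handler, so a blocking DB/backend call only stalls that one worker.
        @SuppressWarnings("unchecked")
        WorkHandler<RequestEvent>[] workers = new WorkHandler[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = event -> {
                Thread.sleep(5); // stand-in for a blocking DB/backend call
                System.out.println("processed " + event.payload);
            };
        }
        disruptor.handleEventsWithWorkerPool(workers);
        RingBuffer<RequestEvent> ringBuffer = disruptor.start();

        ringBuffer.publishEvent((event, seq) -> event.payload = "request-" + seq);
        Thread.sleep(100); // give the daemon worker threads time to drain the event
    }
}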

--Srinath



On Mon, Mar 14, 2016 at 10:26 AM, Kasun Indrasiri  wrote:

> Hi,
>
> Let me try to clarify few things here.
>
> - Initially we implemented Netty HTTP transport with conventional thread
> model (workers) and at the same time we also tested the Disruptor based
> model for the same Gateway/Header based routing use case. Disruptor based
> approach gave us around  ~20k of perf gain on perf testing environments.
> - MSF4J 1.0 GA didn't use GW's HTTP transport code as it is. It reused
> basic transport runtime but with a custom Netty handler that is used to
> dispatch the requests. So, MSFJ 1.0 GA, didn't have disruptor and
> performance/latency of MSF4J 1.0 GA has nothing to do with Disruprtor.
>
> - Now we are trying to migrate MSF4J into the exact same transport code as
> it with the GW core (carbon-transport's HTTP transport). And that's where
> we came across the above perf issue.
> - So, unlike GW scenario, for MSF4J and even for any other content-aware
> ESB scenario, the above approach is not the optimum it seems. Hence we are
> now looking into how such scenarios are handled with Disruptor.
>
> In that context, if we look at the original LMAX use case[1] is also quite
> similar to what we are trying with content aware scenarios. In their use
> case they had heavy CPU intensive components(such as Business Logic
> component) as well as IO bound components (such as Un-marshaller,
> Replicator). And they get better performance for the same use case with
> Disruptor over a conventional worker-thread model.
>
>
> [image: Inline image 1]
>
> So, we need to have a close look into that how we can implement a
> dependent consumer scenario[2] (one consumer is IO bound and the other is
> CPU bound) and check whether we can get more perf gain compared to the
> conventional worker thread model.
>
> Ranawaka is current looking into this.
>
> [1] http://martinfowler.com/articles/lmax.html
> [2]
> http://mechanitis.blogspot.com/2011/07/dissecting-disruptor-wiring-up.html
>
> On Sun, Mar 13, 2016 at 8:11 AM, Isuru Ranawaka  wrote:
>
>> Hi Azeez,
>>
>> In GW Disruptor threads are not used for make calls for backends.
>> Backends are called by Netty worker pool and those calls are non blocking
>> calls. So if backend responds after a delay it won't be a problem.
>>
>>
>> In MSF4J  if it includes IO operations or delayed operations then it
>> causes problems because processing happens through Disruptor threads and
>> by occupying all the limited disruptor threads. But this should be solved
>> by operating Disruptor Event Handlers through workerpool and now we are
>> looking in to that why it does not provide expected results.
>>
>> thanks
>> IsuruR
>>
>> On Sat, Mar 12, 2016 at 6:46 PM, Afkham Azeez  wrote:
>>
>>> Kasun et al,
>>> In MSF4J, the threads from the disruptor pool itself are used for
>>> processing in the MSF4J operation. In the case of the GW passthrough & HBR
>>> scenarios, are those disruptor threads used to make the calls to the actual
>>> backends? Is that a blocking call? What if the backend service is changed
>>> so that it responds after a delay rather than instantaneously?
>>>
>>> On Sat, Mar 12, 2016 at 6:21 PM, Afkham Azeez  wrote:
>>>


 On Sat, Mar 12, 2016 at 1:40 PM, Sanjiva Weerawarana 
 wrote:

> On Thu, Mar 10, 2016 at 6:20 PM, Sagara Gunathunga 
> wrote:
>
>> On Thu, Mar 10, 2016 at 10:26 AM, Afkham Azeez 
>> wrote:
>>
>>> No from day 1, we have decided that GW & MSF4J will use the same
>>> Netty transport component so that the config file will be the same as 
>>> well
>>> as improvements made to that transport will be automatically available 
>>> for
>>> both products. So now at least for MSF4J, we have issues in using the 
>>> Netty
>>> transport in its current state, so we have to fix those issues.
>>>
>>
>> Reuse of same config files and components provide an advantage to us
>> as the F/W developers/maintainers but my question was what are the 
>> benefits
>> grant to end users of MSF4J through Carbon transport ?
>>
>
> We are writing MSF4J and the rest of the platform. Not someone else.
> As such we have to keep 

Re: [Architecture] Vega Lite

2016-03-11 Thread Srinath Perera
Tharik, do you think we can potentially merge VizGrammar with Vega-Lite?

If yes, let's write to them about VizGrammar. If they are willing, maybe we
can merge with Vega-Lite and contribute it back to Vega.

--Srinath

On Wed, Mar 2, 2016 at 11:59 AM, Tharik Kanaka <tha...@wso2.com> wrote:

> Hi Srinath,
>
> I checked this out last week. Yes, it is a layer built on top of Vega, and
> they have simplified the Vega grammar. In Vega, users need to define
> scales, axes and marks to draw the graph, which is a little bit
> complicated.
>
> When it comes to Vega-Lite, it has eliminated those and lets users define
> x, y, color and markType, which is much simpler and easier to use. From a
> configuration-grammar point of view this is a little bit like VizGrammar,
> as we also let users define x, y and color.
>
> We can keep using Vega under VizGrammar. On the other hand, in VizGrammar
> we are not going to restrict ourselves to Vega charts; we will also
> implement d3 charts if there are any limitations with Vega. Also, we have
> embedded runtime implementations such as custom tooltips and click
> callbacks inside VizGrammar.
>
> Regards,
>
> On Wed, Mar 2, 2016 at 11:02 AM, Dunith Dhanushka <dun...@wso2.com> wrote:
>
>> Thanks Srinath!
>>
>> Will have a look.
>>
>>
>>
>> On Wed, Mar 2, 2016 at 11:01 AM, Srinath Perera <srin...@wso2.com> wrote:
>>
>>> Please check this. Is it similar to the layer we built on top of Vega?
>>>
>>> https://vega.github.io/vega-lite/
>>> https://medium.com/@uwdata/introducing-vega-lite-438f9215f09e#.jd3f668ka
>>>
>>> --
>>> 
>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>> Site: http://people.apache.org/~hemapani/
>>> Photos: http://www.flickr.com/photos/hemapani/
>>> Phone: 0772360902
>>>
>>
>>
>>
>> --
>> Regards,
>>
>> Dunith Dhanushka,
>> Senior Software Engineer
>> WSO2 Inc,
>>
>> Mobile - +94 71 8615744
>> Blog - dunithd.wordpress.com <http://blog.dunith.com>
>> Twitter - @dunithd <http://twitter.com/dunithd>
>>
>
>
>
> --
>
> *Tharik Kanaka*
>
> WSO2, Inc |#20, Palm Grove, Colombo 03, Sri Lanka
>
> Email: tha...@wso2.com | Web: www.wso2.com
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Vega Lite

2016-03-01 Thread Srinath Perera
Please check this. Is it similar to the layer we built on top of Vega?

https://vega.github.io/vega-lite/
https://medium.com/@uwdata/introducing-vega-lite-438f9215f09e#.jd3f668ka

-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Adding RNN to WSO2 Machine Learner

2016-03-01 Thread Srinath Perera
Hi Thamali,


   1. RNNs can do both classification and next-value prediction. Are we
   trying to do both?
   2. When Upul played with it, he had trouble getting the deeplearning4j
   implementation to work with the next-value prediction scenario. Is it fixed?
   3. What are the data sets we will use to verify the accuracy of the RNN
   after integration?
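
(For context on #2, below is a rough Java sketch of just the Spark-side
transformer steps Thamali describes in the mail below, i.e. tokenizing and
then turning tokens into VectorUDT features, using Spark ML's Tokenizer and
Word2Vec. The column names and vector size are made up, and the
deeplearning4j iterator/estimator part is left out.)

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.sql.DataFrame;

public class RnnFeaturePipelineSketch {
    public static DataFrame buildFeatures(DataFrame input) {
        // Break each sequence (here a text column) into individual tokens.
        Tokenizer tokenizer = new Tokenizer()
                .setInputCol("sequence").setOutputCol("tokens");
        // Turn tokens into fixed-size vectors (VectorUDT), as the pipeline expects.
        Word2Vec vectorizer = new Word2Vec()
                .setInputCol("tokens").setOutputCol("features").setVectorSize(50);

        Pipeline pipeline = new Pipeline()
                .setStages(new PipelineStage[]{tokenizer, vectorizer});
        PipelineModel model = pipeline.fit(input);
        return model.transform(input); // the "features" column feeds the DataSetIterator
    }
}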


--Srinath

On Tue, Mar 1, 2016 at 3:44 PM, Thamali Wijewardhana <tham...@wso2.com>
wrote:

> Hi,
>
> Currently we are working on a project to add Recurrent Neural Network(RNN)
> algorithm to machine learner. RNN is one of deep learning algorithms with
> record breaking accuracy. For more information on RNN please refer link[1].
>
> We have decided to use deeplearning4j which is an open source deep
> learning library scalable on spark and Hadoop.
>
> Since there is a plan to add spark pipeline to machine Learner, we have
> decided to use spark pipeline concept to our project.
>
> I have designed an architecture for the RNN implementation.
>
> This architecture is developed to be compatible with spark pipeline.
>
> Data set is taken in csv format and then it is converted to spark data
> frame since apache spark works mostly with data frames.
>
> The next step is a transformer which is needed to tokenize the sequential
> data. A tokenizer basically takes a sequence of data and breaks it into
> individual units; for example, it can be used to break a sentence into
> words.
>
> The next step is again a transformer, used to convert tokens to vectors.
> This must be done because the features should be added to the Spark
> pipeline in org.apache.spark.mllib.linalg.VectorUDT format.
>
> Next, the transformed data are fed to the data set iterator. This is an
> object of a class which implements
> org.deeplearning4j.datasets.iterator.DataSetIterator. The dataset iterator
> traverses through a data set and prepares data for neural networks.
>
> Next component is the RNN algorithm model which is an estimator. The
> iterated data from data set iterator is fed to RNN and a model is
> generated. Then this model can be used for predictions.
>
> We have decided to complete this project in two steps :
>
>
>-
>
>First create a spark pipeline program containing the steps in machine
>learner(uploading dataset, generate model, calculating accuracy and
>prediction) and check whether the project is feasible.
>-
>
>Next add the algorithm to ML
>
> Currently we have almost completed the first step and now we are
> collecting more data and tuning for hyper parameters.
>
> [1]
> https://docs.google.com/document/d/1edg1fdKCYR7-B1oOLy2kon179GSs6x2Zx9oSRDn_NEU/edit
>
>
>
> ​
>



-- 

Srinath Perera, Ph.D.
   http://people.apache.org/~hemapani/
   http://srinathsview.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Defining UX Pasona for the Platform

2016-02-25 Thread Srinath Perera
Hi Dakshika,

As per our chat, most of the time we will use a common set of personas for
UX user stories (e.g. developer, DevOps person, business user, etc.). IMO
defining them up front will save a lot of time. Then, for most use cases, we
can pick one from the list.

Could the UX team define the personas? When we have the first cut, let's do
a review.

--Srinath

-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Do we need a UI to configure analytics Queries? Was: [Fwd: [APIM Analytics] Update on current progress]

2016-02-22 Thread Srinath Perera
Switching to architecture@ as this is a technical discussion.

Hi Sanjiva, Isabelle,

We have two choices


   1. Let users define/configure analytics (e.g. alerts) via a UI. Each
   alert is shown as a policy, and users can enable and configure it.
   2. Enable this only via config files (no UI).


WDYT? Nuwan has raised valid issues (see below). They can be addressed, but
it would be a bit complicated.

I am inclined to go for the config-only approach (which fits better with
the no-Carbon-console approach).

We need to agree and do one thing for all analytics things.

--Srinath


-- Forwarded message --
From: Nuwan Dias <nuw...@wso2.com>
Date: Mon, Feb 22, 2016 at 5:44 PM
Subject: Re: [APIM Analytics] Update on current progress
To: Srinath Perera <srin...@wso2.com>
Cc: Mohanadarshan Vivekanandalingam <mo...@wso2.com>, Nirmal Fernando <
nir...@wso2.com>, "engineering-gr...@wso2.com" <engineering-gr...@wso2.com>,
Vidura Gamini Abhaya <vid...@wso2.com>, Sachith Withana <sach...@wso2.com>,
Fazlan Nazeem <fazl...@wso2.com>, Tishan Dahanayakage <tis...@wso2.com>,
Upul Bandara <u...@wso2.com>, Maheshakya Wijewardena <mahesha...@wso2.com>,
Janaka Ranabahu <jan...@wso2.com>, Tharindu Dharmarathna <tharin...@wso2.com>,
Sachini Jayasekara <sachi...@wso2.com>


Hi Srinath,

Its fine to have a simple UI. It would look nice.

However, please consider these when developing the UI (these are actual
problems we've come across in support).

1. When there is a cluster of DAS nodes, to which node does the UI connect,
and how are the configs replicated?
2. When the admin username changes on DAS, what needs to be done on APIM?
3. Having a UI makes it possible to change values during runtime. How does
this affect the runtime's behaviour?
4. The UI should not be the only way of configuring these params; DevOps
need to automate deployment. So think of that too.

There can be more complications to address. Anyhow, if it's tough to find
solutions to any of these programmatically, we need to have reasonable
answers/workarounds to address them.

Thanks,
NuwanD.

On Fri, Feb 19, 2016 at 5:40 PM, Srinath Perera <srin...@wso2.com> wrote:

> Nuwan, when we chatted, we said lets have a simple UI to configure.
>
> If we dropping the UI, we need to talk to all stakeholders and agree.
>
> --Srinath
>
> On Thu, Feb 18, 2016 at 2:40 PM, Mohanadarshan Vivekanandalingam <
> mo...@wso2.com> wrote:
>
>> Hi Nuwan,
>>
>> I hope you can remember that we discussed the need for a UI (similar to
>> CEP execution manager) to define thresholds/ to get parameter values for
>> some conditions in queries. Specially for CEP queries likewise there is a
>> requirement whether we need get parameter/threshold values for some spark
>> queries as well..
>>
>> Do you think that we don't required a UI for this ?
>>
>> Thanks,
>> Mohan
>>
>>
>>
>>
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] How do we get DAS server location?

2016-02-19 Thread Srinath Perera
Hi Sanjiva,

I think we did it through Hazelcast. AFAIK, we have not done it for DAS
discovery yet.

If we use Hazelcast, it is trivial to do, but that will add Hazelcast to
all our products (or maybe we can find and borrow that part of the code).

--Srinath

On Sat, Feb 20, 2016 at 10:00 AM, Sanjiva Weerawarana <sanj...@wso2.com>
wrote:

> Guys we also need the servers to discover each other when on the same
> machine or LAN. Have we done that yet? That's very easy to do [1] and IIRC
> we used it before for something.
>
> [1] https://en.wikipedia.org/wiki/Zero-configuration_networking
>
> On Fri, Feb 19, 2016 at 7:05 PM, Malith Dhanushka <mal...@wso2.com> wrote:
>
>>
>>
>> On Fri, Feb 19, 2016 at 5:00 PM, Anjana Fernando <anj...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> On Fri, Feb 19, 2016 at 4:54 PM, Srinath Perera <srin...@wso2.com>
>>> wrote:
>>>
>>>> Kasun, Nuwan
>>>>
>>>> All product needs to get DAS server location from one place.
>>>>
>>>>
>>>>1. Do we have a place for that? Otherwise, we need something like
>>>>conf/das-client.xml and create a component to read it and use it with 
>>>> API
>>>>and ESB when they want to send events to DAS ( Anjana, can ESB analytics
>>>>guys do it?)
>>>>
>>>> Yeah, we can check on that. As I remember, there were some discussions
>>> on this, where a similar functionality was requested by APIM team as I
>>> remember. @Malith, can you please look into this.
>>>
>>>
>> Noted
>>
>>>
>>>>1.
>>>>2. If we have a UI to specify DAS location, we should drop that.
>>>>This is a admin UI, and in C5 we drop all admin UIs.
>>>>3. I think we discussed about doing a UDP broadcast, and
>>>>automatically find the DAS server if it is in same LAN. Have we tried 
>>>> that?
>>>>( IMO this is post #1, but should do this ASAP)
>>>>
>>>> I think we just having a common configuration for pointing to a DAS
>>> server should be enough (at least for now). UDP multicast will not work
>>> most often with certain network configuration, so I don't think it would be
>>> that useful.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>>
>>>> Thanks
>>>> Srinath
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> 
>>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>>> Site: http://people.apache.org/~hemapani/
>>>> Photos: http://www.flickr.com/photos/hemapani/
>>>> Phone: 0772360902
>>>>
>>>
>>>
>>>
>>> --
>>> *Anjana Fernando*
>>> Senior Technical Lead
>>> WSO2 Inc. | http://wso2.com
>>> lean . enterprise . middleware
>>>
>>
>>
>>
>> --
>> Malith Dhanushka
>> Senior Software Engineer - Data Technologies
>> *WSO2, Inc. : wso2.com <http://wso2.com/>*
>> *Mobile*  : +94 716 506 693
>>
>
>
>
> --
> Sanjiva Weerawarana, Ph.D.
> Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
> email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
> x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
> blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
> Lean . Enterprise . Middleware
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] How do we get DAS server location?

2016-02-19 Thread Srinath Perera
Kasun, Nuwan

All products need to get the DAS server location from one place.


   1. Do we have a place for that? Otherwise, we need something like
   conf/das-client.xml and a component to read it, and use it with API
   and ESB when they want to send events to DAS (Anjana, can the ESB
   analytics guys do it?). A rough sketch of such a reader is below, after
   this list.
   2. If we have a UI to specify the DAS location, we should drop that. This
   is an admin UI, and in C5 we drop all admin UIs.
   3. I think we discussed doing a UDP broadcast to automatically find the
   DAS server if it is on the same LAN. Have we tried that? (IMO this comes
   after #1, but we should do it ASAP.)
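
To make item 1 concrete, here is a minimal sketch of what such a shared
reader could look like. The conf/das-client.xml layout (host/port elements)
is entirely made up here, since we have not agreed on a format yet.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DasClientConfig {
    private final String host;
    private final int port;

    private DasClientConfig(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Reads a hypothetical conf/das-client.xml of the form:
    // <dasClient><host>localhost</host><port>7611</port></dasClient>
    public static DasClientConfig load(String carbonHome) throws Exception {
        File file = new File(carbonHome, "conf" + File.separator + "das-client.xml");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(file);
        String host = doc.getElementsByTagName("host").item(0).getTextContent().trim();
        int port = Integer.parseInt(
                doc.getElementsByTagName("port").item(0).getTextContent().trim());
        return new DasClientConfig(host, port);
    }

    public String getHost() { return host; }
    public int getPort() { return port; }
}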

Thanks
Srinath




-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Packaging analytics artifacts into a p2 repo

2016-02-09 Thread Srinath Perera
Thanks!! Seshika, we should try this with Fraud artifacts.

--Srinath

On Tue, Feb 9, 2016 at 11:14 AM, Chanika Geeganage  wrote:

> Hi,
>
> I have done a POC to evaluate whether it is possible to package analytics
> artifacts into a p2 repo. The complete set of steps can be found from [1]
>
> In here,  p2.inf file of the analytics artifact feature is added to define
> instructions for both configuration phase and un-installation phase of the
> feature. For an example if we want to deploy/undeploy the
> APIM_Realtime_Analytics.car file to/from the server, the content in the
> p2.inf would be as below.
>
> instructions.configure = \
>
> org.eclipse.equinox.p2.touchpoint.natives.mkdir(path:${installFolder}/../../deployment/);\
>
> org.eclipse.equinox.p2.touchpoint.natives.mkdir(path:${installFolder}/../../deployment/server/);\
>
> org.eclipse.equinox.p2.touchpoint.natives.mkdir(path:${installFolder}/../../deployment/server/carbonapps/);\
>
> org.eclipse.equinox.p2.touchpoint.natives.copy(source:${installFolder}/../../../samples/capps/APIM_Realtime_Analytics.car,target:${installFolder}/../../deployment/server/carbonapps/APIM_Realtime_Analytics.car,overwrite:true);\
>
> instructions.uninstall = \
>
> org.eclipse.equinox.p2.touchpoint.natives.remove(path:${installFolder}/../../deployment/server/carbonapps/APIM_Realtime_Analytics.car,overwrite:true);\
>
>
> I have attached the two projects (analytics artifact feature, feature
> repository) for your reference.
>
> [1]
> https://docs.google.com/a/wso2.com/document/d/1Oq2OXgdyR1VdEmIKbQz09cTdPC3dhNNQx3nR0HE5cVc/edit?usp=sharing
>
> Thanks
> --
> Best Regards..
>
> Chanika Geeganage
> Software Engineer
> WSO2, Inc.; http://wso2.com
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] We should not create User Management Experiences in Other places

2016-02-08 Thread Srinath Perera
Hi Dakshika,

I was at the Log Analyzer UX review today, and the UX templates have
recreated a user management UI experience.

IMHO, this is WRONG. We created Carbon to avoid having to redo things like
users, registry, authentication, authorization, logs, etc. again for each
product. Things like user management must come from the Carbon kernel or
Carbon core.

It is a different discussion whether we need a UI for user management at all
(this ties into the discussion about removing admin UIs). If we need such a
UI, it should come from Carbon core.

Please comment.

Thanks
Srinath

-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] We should not create User Management Experiences in Other places

2016-02-08 Thread Srinath Perera
Hi Dakshika,

It is more than just that LA does not need it.

Actually, user management should not be a requirement for any app at all.
If some app needs it, it should come from the Carbon core components (we
can improve the existing user management app if it can be improved).

--Srinath

On Mon, Feb 8, 2016 at 6:40 PM, Dakshika Jayathilaka <daksh...@wso2.com>
wrote:

> Hi Srinath,
>
> I agree with your point. Currently we don't have a shareable way of
> handling user management in frontend applications. IMHO the user management
> UI needs to be separated from the application itself, and eventually it has
> to move into a shareable UI component.
>
> AFAIK above mentioned wireframes created according to the LA user stories
> and If we don't think it is a requirement of LA app, we can remove those
> stories from the workflow.
>
> Thank you.
>
> Regards,
>
> *Dakshika Jayathilaka*
> PMC Member & Committer of Apache Stratos
> Senior Software Engineer
> WSO2, Inc.
> lean.enterprise.middleware
> 0771100911
>
> On Mon, Feb 8, 2016 at 1:40 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Dakshika,
>>
>> I was at Log Analyzer UX review today, and UX templates have recreated a
>> user management UI experience.
>>
>> IMHO, this is WRONG. We created Carbon to avoid having to reduce things
>> like users, registry, Authentication, Authorization, logs etc again for
>> each product. Things like user management must come from Carbon kernel or
>> Carbon core.
>>
>> It is different discussion whether we need a UI for user management (
>> this is discussion about removing Admin UIs). If we need such UI, it should
>> come from Carbn core.
>>
>> Please comment.
>>
>> Thanks
>> Srinath
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://people.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Distributed Coordination in C5 was [Hangout Session] With Kernel Team On MB C5 Plans @ Mon Feb 8, 2016 2pm - 3pm (ram...@wso2.com)

2016-02-08 Thread Srinath Perera
Moving to arch@

CEP (for the Storm-based version), MB (for the base algorithm), and ESB
(for tasks) are the known cases. For those, we can use Hazelcast (they can
use it directly).

Hazelcast works OK with a small number of nodes. AFAIK, there is no better
solution (we used ZooKeeper before) unless there is something new.
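
As a reminder of how little code the basic case needs, here is a minimal
coordinator-election sketch on top of Hazelcast (3.x-style API). The
"oldest member wins" rule here is just for illustration; it is not what
MB/CEP/ESB actually implement.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Member;

public class CoordinatorSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Hazelcast returns members ordered by join time; treat the oldest
        // member as the coordinator for cluster-wide work (tasks, scheduling).
        Member oldest = hz.getCluster().getMembers().iterator().next();
        boolean isCoordinator = oldest.equals(hz.getCluster().getLocalMember());

        if (isCoordinator) {
            System.out.println("This node coordinates the cluster-wide tasks");
        } else {
            System.out.println("Standby node; coordinator is " + oldest.getAddress());
        }
    }
}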

--Srinath



On Mon, Feb 8, 2016 at 8:07 PM, Sameera Jayasoma  wrote:

> Looks like several products require distributed coordination. We need to
> evaluate these requirements and come up with a solution.
>
> Hazelcast is used in C4 based products to achieve distributed
> coordination. Not sure whether we should go ahead with Hazelcast after all
> the issues we've faced so far with it.
>
> Kasun, Ramith and Suho can you guys explain the requirement to use the
> distributed coordination.
>
> Thanks,
> Sameera.
>
>
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Embedding Log Analyzer with Analytics distribution of Products

2016-02-01 Thread Srinath Perera
Hi All,

I believe we should integrate the Log Analyzer with the analytics
distributions of the products.

It is true that some of the information you can get from the Log Analyzer is
already available under normal analytics. For those cases, we do not need to
use the Log Analyzer.

However, the Log Analyzer lets us find and understand use cases that are not
already instrumented. For example, when we see an error, we might check
whether a similar error has happened before. Basically, we can check ad-hoc,
dynamic use cases via the Log Analyzer. An example of this is the analytics
done by our Cloud team.

In general, the Log Analyzer will be used by advanced users who understand
the inner workings of the product. It will be a very powerful debugging tool.

However, if we want to embed the Log Analyzer, it is challenging due to the
Ruby-based Logstash we use with the Log Analyzer. I think in that case, we
also need a Java-based log agent.
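
To give an idea of what a Java-based agent could look like, here is a bare
sketch of a custom log4j appender that our products could attach to forward
each log event to the analyzer. The forward() call is a placeholder, not the
actual DAS data publisher API.

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class AnalyzerLogAppender extends AppenderSkeleton {

    @Override
    protected void append(LoggingEvent event) {
        // In a real agent this would hand the event to an async publisher
        // (e.g. the data bridge) instead of building a string and printing it.
        String line = event.getTimeStamp() + " " + event.getLevel()
                + " " + event.getLoggerName() + " " + event.getRenderedMessage();
        forward(line);
    }

    private void forward(String line) {
        System.out.println("[to-analyzer] " + line); // placeholder transport
    }

    @Override
    public void close() {
        // Flush and close the publisher connection here.
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }
}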

Please comment.

Thanks
Srinath
-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Embedding Log Analyzer with Analytics distribution of Products

2016-02-01 Thread Srinath Perera
On Mon, Feb 1, 2016 at 2:21 PM, Anjana Fernando <anj...@wso2.com> wrote:

> Hi,
>
> The initial use case I had with Log Analyzer was, as a general log
> analysis tool, where users can just point to a log location, can be
> WSO2/non-WSO2 logs, and run queries against it / create dashboards. The
> concern I've with integrating log analyzer also with our new analytics
> distributions is, whether we will have some considering overlapping
> functionality between the two. The DAS4X analytics effort is to basically
> create mostly the static dashboards that would be there (maybe with
> alerts), which can be successfully done by internally publishing all the
> events required for those. But then, if we also say, you can/should use log
> analyzer (which is a different UI/experience altogether) to create
> dashboards/queries, that we missed from the earlier effort, that does not
> sound right.
>

Anjana, the point is the dynamic/ad-hoc query use cases. E.g.
1) You see a new error, and want to check whether it has happened before.
2) You see two errors happening together. You need to know whether they have
happened together before.


>
> So the point is, as I see, if we do the pure DAS4X solution right for a
> product, they do not have an immediate need to use the log analysis
> features again to do any custom analysis. But of course, if they want to
> process the logs also nevertheless, they can setup the log analyzer product
> and do it, for example, as a replacement to syslog, for centralized log
> storage.
>
> Cheers,
> Anjana.
>
> On Mon, Feb 1, 2016 at 2:04 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi All,
>>
>> I believe we should integrate Log Analyzer with analytics distributions
>> of the products.
>>
>> It is true some of the information you can take from Log analyzer is
>> already available under normal analytics. For those, we do not need to use
>> Log analyzer.
>>
>> However, log analyzer let us find and understand use cases that is not
>> already instrumented. For example, when we see a error, we might check has
>> a similar error happened before. Basically we can check ad-hoc dynamic use
>> cases via log analyzer. Example of this is analytics done by our Cloud
>> team.
>>
>> In general, log analyzer will be used by advanced users who will
>> understand inner workings for the product. It will be a very powerful
>> debugging tool.
>>
>> However, if we want to embed the log analyzer, then it is challenging due
>> to ruby based log stash we use with log analyzer. I think in that case, we
>> also need a java based log agent.
>>
>> Please comment.
>>
>> Thanks
>> Srinath
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://people.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Security Analytics

2016-02-01 Thread Srinath Perera
Hi Seshika,

Thinking about this, I think we should replace the line chart with a heatmap
that has time (as days) on the X axis and user, role, service provider, or
IP (the user can pick any) on the Y axis, because a heatmap is much better
for drill-down and exploration.


[image: Inline image 2]
Then the user can click on any cell and see more information. However, we
will have to group things like IP addresses into a manageable number of
groups on the Y axis.

Let's chat more.

--Srinath


On Mon, Feb 1, 2016 at 11:30 PM, Lahiru Sandaruwan  wrote:

> Hi Seshi,
>
> I think we can consider Authorization stats also. Since WSO2 IS has a good
> implementation of XACML spec, we can collect stats on, the requests
> allowed, denied, with which granularity, etc.
>
> Thanks.
>
> On Mon, Feb 1, 2016 at 1:59 PM, Seshika Fernando  wrote:
>
>> Hi all,
>>
>> 'Security Analytics' is basically providing useful analytics for the WSO2
>> Identity Server product through the use of WSO2 DAS. After discussing with
>> IS guys (Prabath and Johann) we initiated a Security Analytics roadmap and
>> I'm currently in the process of identifying and detailing the Security
>> Analytics needs.
>>
>> In this process we discovered that security analytics can be dealt in 2
>> ways...
>>
>>1. Presentation of Identity Analytics - Analyze available identity
>>data from logs, audit trails etc; and enable users to view results it in
>>many ways.
>>2. Adaptive Analytics - Analyze identity data (historical and
>>realtime) to identify anomalous patterns and feed the decisions back into
>>the Identity server to enable additional checks
>>
>> We will first focus on Presentation of Identity Analytics and the
>> attached document is a WIP description of the type of analytics we want to
>> have.
>>
>> Open for suggestions.
>> @Johann, @Prabath: Comments are mandatory from you guys. :)
>>
>> seshi
>>
>> 1.
>> https://docs.google.com/a/wso2.com/document/d/1qWzo20hrzOXPyoTuyfk9J40agUnvuSK5yNtwmlTZxcU/edit?usp=sharing
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> --
> Lahiru Sandaruwan
> Committer and PMC member, Apache Stratos,
> Senior Software Engineer,
> WSO2 Inc., http://wso2.com
> lean.enterprise.middleware
>
> phone: +94773325954
> email: lahi...@wso2.com blog: http://lahiruwrites.blogspot.com/
> linked-in: http://lk.linkedin.com/pub/lahiru-sandaruwan/16/153/146
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Notebook Support Use cases for DAS

2015-12-07 Thread Srinath Perera
Anjana, how is this thread progressing? Who is looking at/ thinking about
notebooks?

On Thu, Nov 26, 2015 at 9:19 AM, Anjana Fernando <anj...@wso2.com> wrote:

> Hi Srinath,
>
> On Thu, Nov 26, 2015 at 9:08 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Anjana,
>>
>> Great!! I think the next step is deciding whether we do this with
>> Zeppelin and or we build it from scratch.
>>
>> Pros of Zeppelin
>>
>>1. We get lot of features OOB
>>2. Code maintained by community, patches etc.
>>3. New features will get added and it will evolve
>>4. We get to contribute to an Apache project and build recognition
>>
>> Cons
>>
>>1. Real deep integration might be lot of work ( we get initial
>>version very fast, but integrating details .. e.g. make our UIs work
>>in Zeppelin, or get Zeppelin to post to UES) might be tricky.
>>2. Zeppelin is still in incubator
>>3. Need to assess community
>>
>> I suggest you guys have a detailed chat with MiyuruD, who looked at it in
>> detail, try things out, think about it and report back.
>>
>
> +1, we'll work with Miyuru also and see how to go forward.
>
>
>>
>>
>> On Thu, Nov 26, 2015 at 3:12 AM, Anjana Fernando <anj...@wso2.com> wrote:
>>
>>> Hi Srinath,
>>>
>>> The story looks good. For that part about, the "user can play with the
>>> data interactively", to make it more functional, we should probably
>>> consider integration of Scala scripts to the mix, rather than only having
>>> Spark SQL. Spark SQL maybe limited in functionality on certain data
>>> operations, and with Scala, we should be able to use all the functionality
>>> of Spark. For example, it would be easier to integrate ML operations with
>>> other batch operations etc.. to create a more natural flow of operations.
>>> The implementation may be tricky though, considering clustering,
>>> multi-tenancy etc..
>>>
>> Lets keep Scala version post MVP.
>>
>
> Sure.
>
>
>>
>>
>>>
>>> Also, I would like to also bring up the question on, are most batch jobs
>>> actually meant to be scheduled as such repeatedly, for a data set that
>>> actually grows always? .. or is it mostly a thing where we execute
>>> something once and get the results and that's it. Maybe this is a different
>>> discussion though. But, for scheduled batch jobs as such, I guess
>>> incremental processing would be critical, which no one seems to bother that
>>> much though.
>>>
>> I think it is mostly scheduled batches as we have. Shall we take this up
>> in a different thread?
>>
>
> Yep, sure.
>
>
>>
>>
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Mon, Nov 23, 2015 at 2:57 PM, Srinath Perera <srin...@wso2.com>
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> I tried to write down the use cases, to start thinking about this
>>>> starting from what we discussed in the meeting. Please comment. ( doc is at
>>>> https://docs.google.com/document/d/1355YEXbhcd2fvS-zG_CiMigT-iTncxYn3DTHlJRTYyo/edit#
>>>> ( same content is below).
>>>>
>>>> Thanks
>>>> Srinath
>>>> Batch, interactive, and Predictive Story
>>>>
>>>>1.
>>>>
>>>>Data is uploaded to the system or send as a data stream and
>>>>collected for some time ( in DAS)
>>>>2.
>>>>
>>>>Data Scientist come in and select a data set, and look at schema of
>>>>data and do standard descriptive statistics like Mean, Max, Percentiles 
>>>> and
>>>>standard deviation about the data.
>>>>3.
>>>>
>>>>Data Scientist cleans up the data using series of transformations.
>>>>This might include combining multiple data sets into one data set.
>>>> [Notebooks]
>>>>4.
>>>>
>>>>He can play with the data interactively
>>>>5.
>>>>
>>>>He visualize the data in several ways [Notebooks]
>>>>6.
>>>>
>>>>If he need descriptive statistics, he can export the data mutations
>>>>in the notebooks as a script and schedule it.
>>>>7.
>>>>
>>>>If what he needs is machine learning, he can initialize and run the
>>>>ML Wizard from

Re: [Architecture] Evaluating the possibility of applying LZ4 compression algorithm to WSO2 MB

2015-12-06 Thread Srinath Perera
Hi Sajini,

With reference to client-side compression, +1 for the model.
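
For reference, this is roughly all the lz4-java calls involved (a sketch
against the library in [2] of the quoted mail). Note that the original
length has to travel with the message (e.g. as a header) for the fast
decompressor to work, which matters for whichever side does the
decompression.

import java.nio.charset.StandardCharsets;
import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4FastDecompressor;

public class Lz4RoundTrip {
    public static void main(String[] args) {
        byte[] content = "example message content ...".getBytes(StandardCharsets.UTF_8);

        LZ4Factory factory = LZ4Factory.fastestInstance();
        LZ4Compressor compressor = factory.fastCompressor();
        byte[] compressed = compressor.compress(content);

        // The decompressor needs the original length, so the broker/client
        // would have to carry it alongside the compressed payload.
        LZ4FastDecompressor decompressor = factory.fastDecompressor();
        byte[] restored = decompressor.decompress(compressed, content.length);

        System.out.println(content.length + " -> " + compressed.length + " bytes");
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}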

Thanks
Srinath

On Mon, Dec 7, 2015 at 10:48 AM, Sajini De Silva <saj...@wso2.com> wrote:

> Hi Srinath,
>
> For client-side compression, if all the subscribers can handle compressed
> data then there is no problem. But if the publisher sends compressed data
> and subscribers cannot handle it, that is where MB comes into play. MB
> should be able to decompress that data and send it to the subscribers who
> cannot handle compressed data. WDYT?
>
> The second option, server side compression is always transparent from the
> user.
>
> Thank you,
> Sajini
>
> On Mon, Dec 7, 2015 at 10:36 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Hi Thilanka,
>>
>>    1. One question to ask is whether client-side compression is needed. We
>>    can easily have the same effect by asking the user to compress the data
>>    first and send it. What do we gain by adding this to MB?
>>    2. +1 for server-side compression. I guess this is transparent to the
>>    end user.
>>
>> Thanks
>> Srinath
>>
>>
>> On Fri, Dec 4, 2015 at 11:07 PM, Thilanka Bowala <thila...@wso2.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> The current MB doesn't have any method to compress the message content.
>>> Thus, we are willing to use LZ4 as the compression algorithm [1] and the
>>> lz4-java library [2] to do this. These are the possibilities.
>>>
>>>1. Handling client side compression. Adding a configuration to the
>>>publisher to indicate that, if messages are compressed or not. Also,
>>>maintaining a flag from a subscriber side to indicate that, if that
>>>subscriber can handle decompression or not.
>>>2. Handling server side compression before storing into the
>>>database. Adding a configuration to indicate that if the server want to
>>>handle compression or not.
>>>
>>> These are the limitations.
>>>
>>>1. Other clients like .Net clients can't handle compression.
>>>2. LZ4 uses data repetitions to compress messages. If the message
>>>content haven't repetitions, compressed message size will expand by 0.4% 
>>> [3]
>>>3. LZ4 has a rule, not to compress data, if the block size is less
>>>than 13bytes [3]. Compressing message blocks with less than 13 bytes will
>>>increase the size of the block.
>>>
>>> I'm planning to handle compression using the 1st approach. If time
>>> allows, I'm planning to do the 2nd approach also.
>>> These are some problems.
>>>
>>>1. Is it OK to handle from the server side?
>>>2. Are there any proper libraries better than LZ4?
>>>
>>>
>>> Any comments and suggestions will be highly appreciated.
>>>
>>>
>>> [1] http://cyan4973.github.io/lz4/
>>> [2] https://github.com/jpountz/lz4-java
>>> [3] https://github.com/Cyan4973/lz4/blob/master/lz4_Block_format.md
>>>
>>>
>>> Thanks and regards,
>>>
>>> --
>>> Thilanka Bowala
>>> Software Engineering Intern
>>> Mobile : +94 (0) 710 403098
>>> thila...@wso2.com
>>>
>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://people.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> Sajini De SIlva
> Software Engineer; WSO2 Inc.; http://wso2.com ,
> Email: saj...@wso2.com
> Blog: http://sajinid.blogspot.com/
> Git hub profile: https://github.com/sajinidesilva
>
> Phone: +94 712797729
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Forceful queue/topic subscription deletion for Message Broker

2015-12-06 Thread Srinath Perera
Hi Sajini,

Great!! Make sure it works in all scenarios, e.g. from the UI, and also when
a user lists subscriptions via the API and wants to delete them, etc.

Thanks
Srinath

On Thu, Dec 3, 2015 at 4:16 PM, Sajini De Silva <saj...@wso2.com> wrote:

> Hi Srinath,
>
> For durable subscriptions, the IP address of the node is appended to the
> subscription ID. Currently, for queue subscriptions, we don't do that. We
> can append it to the subscription ID for queue subscriptions as well, as we
> do for durable subscriptions.
>
> Thank you
> Sajini
>
> On Thu, Dec 3, 2015 at 3:31 PM, Srinath Perera <srin...@wso2.com> wrote:
>
>> If the user must go to the node where the subscription was made to
>> disconnect the subscription, we might need to give the user a way to find
>> which node has which subscriptions.
>>
>> --Srinath
>>
>> On Thu, Dec 3, 2015 at 11:36 AM, Ramith Jayasinghe <ram...@wso2.com>
>> wrote:
>>
>>> I propose,
>>>  1) user cant disconnect consumers in another nodes. ( simplifies the
>>> approach).
>>>  2) there need to be a permission around it.
>>>  3) it could for any subscriber ( may be lets leave out MQTT for a bit)
>>> whose misbehaving.
>>>
>>>
>>> On Thu, Dec 3, 2015 at 11:31 AM, Sajini De Silva <saj...@wso2.com>
>>> wrote:
>>>
>>>> Hi Hasitha,
>>>>
>>>> We haven't implemented the MQTT UI yet. Therefore IMO its better not to
>>>> implement this feature for MQTT for now. We can consider it when
>>>> implementing MQTT UI in our major release. WDYT?
>>>>
>>>> Thank you,
>>>> Sajini
>>>>
>>>> On Thu, Dec 3, 2015 at 11:19 AM, Hasitha Hiranya <hasit...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> There is a feature request that Message Broker should support to
>>>>> forcefully disconnect a subscriber. This means, server should be able to
>>>>> disconnect misbehaving subscribers. Server has a control over it.
>>>>>
>>>>> Steps to do this would be
>>>>>
>>>>> 1. Find a way to disconnect the socket from mina transport level.
>>>>> 2. Send a message to Andes core to delete (disconnect) the subscriber.
>>>>>
>>>>> At step two the node that is disconnecting  the socket, will broadcast
>>>>> a Hazelcast message that this subscriber is closed.
>>>>>
>>>>> With above implementation steps, a limitation is introduced as "you
>>>>> can only disconnect the subscribers originated to that node only". For
>>>>> example, say there is a MB cluster MB1, MB2, MB3. There is sub1 for MB1 
>>>>> and
>>>>> sub2 for MB2. You need to go to MB1's web console to disconnect sub1 and
>>>>> MB2's web console to disconnect sub2. So cluster aspect is lost there.
>>>>>
>>>>> Thus two implementation approaches are there
>>>>>
>>>>> 1. Forceful disconnection can be done only from the node subscriber is
>>>>> connected to.
>>>>> 2. Forceful disconnection can be done from any node (this is bit
>>>>> complex. Involves Hazelcast notifications)
>>>>>
>>>>> As our end goal for implementation would option 1 be adequate?
>>>>> Or do we need to reach out for option 2 as well?
>>>>> Do we need to facilitate this for any subscription type
>>>>> (queue/topic/durable topic/shared durable topic) ? What about MQTT
>>>>> subscribers?
>>>>>
>>>>> Also what are the permissions a user should have to perform this
>>>>> action? Are we going to introduce a new permission type?
>>>>>
>>>>> Thanks
>>>>> --
>>>>> *Hasitha Abeykoon*
>>>>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>>>>> *cell:* *+94 719363063*
>>>>> *blog: **abeykoon.blogspot.com* <http://abeykoon.blogspot.com>
>>>>>
>>>>>
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Sajini De Silva
>>>> Software Engineer; WSO2 Inc.; http://wso2.com ,
>>>> Email: saj...@wso2.com
>>>> Blog: http://sajinid.blogspot.com/
>>>> Git hub profile: https://github.com/sajinidesilva
>>>>
>>>> Phone: +94 712797729
>>>>
>>>>
>>>
>>>
>>> --
>>> Ramith Jayasinghe
>>> Technical Lead
>>> WSO2 Inc., http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> E: ram...@wso2.com
>>> P: +94 777542851
>>>
>>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://people.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> Sajini De Silva
> Software Engineer; WSO2 Inc.; http://wso2.com ,
> Email: saj...@wso2.com
> Blog: http://sajinid.blogspot.com/
> Git hub profile: https://github.com/sajinidesilva
>
> Phone: +94 712797729
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Forceful queue/topic subscription deletion for Message Broker

2015-12-03 Thread Srinath Perera
If the user must go to the node where the subscription was made to disconnect
the subscription, we might need to give the user a way to find which node
has which subscriptions.

--Srinath

On Thu, Dec 3, 2015 at 11:36 AM, Ramith Jayasinghe  wrote:

> I propose:
>  1) A user can't disconnect consumers on other nodes (this simplifies the
> approach).
>  2) There needs to be a permission around it.
>  3) It could apply to any misbehaving subscriber (maybe let's leave out
> MQTT for a bit).
>
>
> On Thu, Dec 3, 2015 at 11:31 AM, Sajini De Silva  wrote:
>
>> Hi Hasitha,
>>
>> We haven't implemented the MQTT UI yet. Therefore IMO it's better not to
>> implement this feature for MQTT for now. We can consider it when
>> implementing the MQTT UI in our major release. WDYT?
>>
>> Thank you,
>> Sajini
>>
>> On Thu, Dec 3, 2015 at 11:19 AM, Hasitha Hiranya 
>> wrote:
>>
>>> Hi,
>>>
>>> There is a feature request that Message Broker should support forcefully
>>> disconnecting a subscriber. This means the server should be able to
>>> disconnect misbehaving subscribers; the server has control over them.
>>>
>>> The steps to do this would be:
>>>
>>> 1. Find a way to disconnect the socket at the mina transport level.
>>> 2. Send a message to Andes core to delete (disconnect) the subscriber.
>>>
>>> At step two, the node that is disconnecting the socket will broadcast a
>>> Hazelcast message saying that this subscriber is closed.
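
As a minimal sketch of those two steps for the single-node (option 1) case, the
interfaces below are placeholders standing in for the real mina session and
Andes-core hooks; they are assumptions for illustration, not actual MB types.

// Sketch of the local, two-step forceful disconnect; the types are stand-ins.
public class LocalForcefulDisconnector {

    /** Transport-level handle; in MB this would wrap the mina session behind the connection. */
    public interface TransportSession {
        void closeNow();
    }

    /** Andes-core hook that removes the subscriber and triggers the cluster notification. */
    public interface AndesSubscriptionManager {
        void closeLocalSubscription(String subscriptionId);
    }

    private final AndesSubscriptionManager subscriptionManager;

    public LocalForcefulDisconnector(AndesSubscriptionManager subscriptionManager) {
        this.subscriptionManager = subscriptionManager;
    }

    public void disconnect(String subscriptionId, TransportSession session) {
        // Step 1: drop the socket so the client stops receiving immediately.
        session.closeNow();
        // Step 2: tell Andes core the subscriber is gone; Andes then broadcasts
        // the "subscription closed" notification to the other nodes.
        subscriptionManager.closeLocalSubscription(subscriptionId);
    }
}
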
>>>
>>> With the above implementation steps, a limitation is introduced: you can
>>> only disconnect subscribers that originated on that node. For example,
>>> say there is an MB cluster MB1, MB2, MB3, with sub1 connected to MB1 and
>>> sub2 connected to MB2. You need to go to MB1's web console to disconnect
>>> sub1 and to MB2's web console to disconnect sub2, so the cluster aspect
>>> is lost there.
>>>
>>> Thus there are two implementation approaches:
>>>
>>> 1. Forceful disconnection can be done only from the node the subscriber
>>> is connected to.
>>> 2. Forceful disconnection can be done from any node (this is a bit more
>>> complex and involves Hazelcast notifications).
>>>
>>> As our end goal for the implementation, would option 1 be adequate,
>>> or do we need to go for option 2 as well?
>>> Do we need to facilitate this for every subscription type
>>> (queue/topic/durable topic/shared durable topic)? What about MQTT
>>> subscribers?
>>>
>>> Also what are the permissions a user should have to perform this action?
>>> Are we going to introduce a new permission type?
>>>
>>> Thanks
>>> --
>>> *Hasitha Abeykoon*
>>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>>> *cell:* *+94 719363063*
>>> *blog: **abeykoon.blogspot.com* 
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Sajini De Silva
>> Software Engineer; WSO2 Inc.; http://wso2.com ,
>> Email: saj...@wso2.com
>> Blog: http://sajinid.blogspot.com/
>> Git hub profile: https://github.com/sajinidesilva
>>
>> Phone: +94 712797729
>>
>>
>
>
> --
> Ramith Jayasinghe
> Technical Lead
> WSO2 Inc., http://wso2.com
> lean.enterprise.middleware
>
> E: ram...@wso2.com
> P: +94 777542851
>
>


-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Gaps in our ML algorithms

2015-11-30 Thread Srinath Perera
Hi Nirmal,

I tried to think about what the gaps in our ML algorithms are. Basically,
there are three main things we want to do with ML (ignoring unsupervised and
semi-supervised methods for understanding data).

There are 3 types of data: numerical, mixed (categorical included), and time
series (the data is autocorrelated).

Note: Let's not worry about spatiotemporal data for now.

Following are my thoughts, and I have marked gaps with "?". We should plan
to have at least one technique in each category.


[Inline image 1: the table of techniques per category referred to above, with gaps marked "?" (image not preserved in the archive)]
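
Only the three data types are spelled out in the mail itself (the candidate
techniques are in the inline image), so the following is just a minimal sketch
of the kind of coverage check implied by "at least one technique in each
category"; the enum, the class name, and the empty technique map are
placeholders, not an actual list of our algorithms.

import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Placeholder sketch: flags any data-type category that has no technique yet.
public class MlCoverageCheck {

    enum DataType { NUMERICAL, MIXED_WITH_CATEGORICAL, TIME_SERIES }

    public static void main(String[] args) {
        // To be filled in from the matrix in the inline image; intentionally left empty here.
        Map<DataType, List<String>> techniquesByType = new EnumMap<>(DataType.class);

        for (DataType type : DataType.values()) {
            List<String> techniques = techniquesByType.get(type);
            if (techniques == null || techniques.isEmpty()) {
                System.out.println("? gap: no technique yet for " + type);
            }
        }
    }
}
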
Please comment.

Thanks
Srinath

-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://people.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture

