Re: [Architecture] WSO2 CEP/Siddhi Storm Integration

2013-12-10 Thread Srinath Perera
Hi Suho,

Basically, the proposal is to use @parallel=distributed{ queries } to group
queries into distributed groups and then use Storm to distribute the groups
across different machines. I think that should work.

--Srinath


On Mon, Dec 9, 2013 at 11:41 AM, Sriskandarajah Suhothayan s...@wso2.com wrote:

 I'm working on the Siddhi syntax for distributed processing.

 We can use the below execution plan for distributed processing.

 <?xml version="1.0" encoding="UTF-8"?>
 <executionPlan name="ATMStatsExecutionPlan" statistics="disable"
     trace="disable" xmlns="http://wso2.org/carbon/eventprocessor">
     <description>This execution plan is used to identify possible
         fraud transactions</description>
     <processMode>local|active-passive|distributed</processMode>
     <importedStreams>
         <stream as="atmStatsStream" name="atmStatsStream" version="1.0.0"/>
     </importedStreams>
     <queryExpressions>
         <![CDATA[

         @parallel=full
         from atmRowStream[cardType == 'Credit']
         insert into atmStatsStream

         @parallel=partition
         {
             partition by bankPartition atmStatsStream.cardProvider

             from every a1 = atmStatsStream[amountWithdrawed > 100]
                 -> b1 = atmStatsStream[amountWithdrawed > 1 and a1.cardNo == b1.cardNo]
             within 1 day
             select a1.cardNo as cardNo, a1.cardHolderName as cardHolderName,
                 b1.amountWithdrawed as amountWithdrawed, b1.location as location,
                 b1.cardHolderMobile as cardHolderMobile
             insert into possibleFraudStream
             partition by bankPartition

             from every a1 = atmStatsStream[amountWithdrawed > 100]
                 -> b1 = atmStatsStream[amountWithdrawed > 1 and a1.cardNo == b1.cardNo]
             within 1 day
             select a1.cardNo as cardNo, a1.cardHolderName as cardHolderName,
                 b1.amountWithdrawed as amountWithdrawed, b1.location as location,
                 b1.cardHolderMobile as cardHolderMobile
             insert into possibleFraudStream
             partition by bankPartition
         }

         ]]>
     </queryExpressions>
     <exportedStreams>
         <stream name="possibleFraudStream" valueOf="possibleFraudStream" version="1.0.0"/>
     </exportedStreams>
 </executionPlan>

 Here we'll have three modes of execution
 1. local
 2. active-passive
 3. distributed

 *Local mode*
 This is the one we have now.

 *Active-passive*
 Here there will be 2 nodes, one active and the other passive. There will
 be a handshake protocol between the active and the passive node; this will be
 used for state replication and for syncing when a node goes down and joins back.

 *Distributed*
 Here we use annotations (these are ignored in the other modes). The
 parallel annotation denotes the parallelism level, and it can be full
 for fully distributed, partition for distribution according to the
 partition, or single for no distribution.
 In the partition case all the queries need to be partitioned by the same
 partition. We can also use curly braces {} to denote grouping
 of parallelism, thereby forcing all the queries to fall on the same Siddhi
 instance.
 We can combine Storm's reliable messaging and snapshot persistence to
 achieve reliable processing, but this still needs more investigation.

 Currently we'll mainly focus on the active-passive case, as it will
 provide reliable and fault-tolerant message processing easily; at the
 same time we'll also work on the Storm integration for the distributed case.

 Thoughts?

 Suho





 On Wed, Nov 27, 2013 at 11:07 AM, Sanjiva Weerawarana sanj...@wso2.com wrote:

 +1 .. excellent job getting this off the ground! I'd love to see the
 numbers in a real distributed set up :).


 On Wed, Nov 27, 2013 at 1:47 PM, Srinath Perera srin...@wso2.com wrote:

 Hi All,

 I have written a Siddhi bolt that you can use to run Siddhi using Storm
 in a distributed setup.

 You can create SiddhiBolt(s) given any Siddhi queries as follows.

 SiddhiBolt siddhiBolt = new SiddhiBolt(
         new String[]{"define stream PlayStream1 ( sid string, ts long, x double, y double, z double, a double, v double);"},
         new String[]{"from PlayStream1#window.timeBatch(1 sec) select sid, avg(v) as avgV insert into AvgRunPlay;"},
         new String[]{"AvgRunPlay"});
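
 (Purely illustrative: the topology below also wires in a second bolt, siddhiBolt2, which is not defined in this mail. Assuming the same SiddhiBolt constructor as above, it could look roughly like this; the FastRunPlay threshold query is a made-up example, not from the original code.)

 // Hypothetical second bolt: flags runs whose average speed exceeds a threshold.
 SiddhiBolt siddhiBolt2 = new SiddhiBolt(
         new String[]{"define stream AvgRunPlay ( sid string, avgV double );"},
         new String[]{"from AvgRunPlay[avgV > 20.0] select sid, avgV insert into FastRunPlay;"},
         new String[]{"FastRunPlay"});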

 Then those bolts can be used within a Storm topology like any other bolt.
 However, the names of the components and streams used in the CEP queries
 should match.

 TopologyBuilder builder = new TopologyBuilder();
 builder.setSpout("PlayStream1", new FootballDataSpout(), 1);
 builder.setBolt("AvgRunPlay", siddhiBolt1, 1).shuffleGrouping("PlayStream1");

 builder.setBolt("FastRunPlay", siddhiBolt2, 1).shuffleGrouping("AvgRunPlay");
 builder.setBolt("LeafEacho", new EchoBolt(), 1).shuffleGrouping("FastRunPlay");

 I have done a quick performance test and got about 140K TPS in a local
 cluster. We need to test it in a distributed setup. Lasantha will integrate
 this with the CEP code base.
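
 (For reference, a minimal sketch of how the topology above could be run in Storm's in-process local mode for such a test. This assumes the pre-Apache backtype.storm API that was current at the time; the topology name and timings are placeholders.)

 import backtype.storm.Config;
 import backtype.storm.LocalCluster;

 // Continues from the TopologyBuilder snippet above; run inside a method that may throw Exception.
 Config conf = new Config();
 conf.setDebug(false);

 LocalCluster cluster = new LocalCluster();
 cluster.submitTopology("siddhi-play-analysis", conf, builder.createTopology());

 Thread.sleep(60000);   // let the topology process events for a while
 cluster.shutdown();    // tear down the in-process cluster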

 Some potential TODOs are
 1) Write two new bolts for 

Re: [Architecture] per-developer git repos for App Factory

2013-12-10 Thread Ajanthan Balachandran
On Tue, Dec 10, 2013 at 6:29 AM, Shiroshica Kulatilake sh...@wso2.com wrote:

 Hi,

 So,
 when an application is created a repo is created for that application - we
 do this now

Say the created repo is https://git.cloud.wso2.com/fooTenant/fooApp.

 when a developer is invited we create a repo for him/her - we need to
 clone the application repo ?

This time we will make a server-side copy of the app repository and expose
it as a new repo (forking), say
https://git.cloud.wso2.com/fooTenant/DevName/fooApp.

 when a developer wants to code he/she will do a git pull/fetch update and
 work on it - meaning we use this
 repo directly - or we clone this again?

He needs to clone the forked repo (
https://git.cloud.wso2.com/fooTenant/DevName/fooApp) and push changes to the same
repo.

 once done the developer will send a pull request to AF - we need to add
 this via eventing ?

Yes, there should be a human task (or some kind of task) fired for the app owner.

 App owner will review / merge and notify developer - need to add this - is
 this similar to github functionality ?

Forking a repo into a personal space is already available in Gitblit, but the
merging functionality is still in the development stage [0].


 Thank you,
 Shiro




 On Mon, Dec 9, 2013 at 10:44 PM, Chan duli...@wso2.com wrote:

 Hi guys,
 Quick question - is this pulling by the developer happening locally or is it
 cloud based?

 Cheers~


 On Mon, Dec 9, 2013 at 10:11 PM, Sanjiva Weerawarana sanj...@wso2.com wrote:

 I think the model is each developer has multiple repos .. the developer
 account will show all repos they have.


 On Mon, Dec 9, 2013 at 4:20 PM, Dimuthu Leelarathne 
 dimut...@wso2.com wrote:

 Hi,

 I think it should be as follows.

 When a person becomes a developer for the first time, create a repo for
 him. Whenever he is invited to a project, clone the project into his
 repo.

 So basically we will have to clone the complete project into his
 repo. This could be cloning a repo as a folder into the developer's repo.

 dimuthu





 On Mon, Dec 9, 2013 at 8:32 AM, Ajanthan Balachandran 
 ajant...@wso2.com wrote:

 Hi,
 In the Add Developer option, when the developer is invited, according to the
 diagram AF is creating a repo.
 Isn't it forking the existing repo?
 Thanks.


 On Sun, Dec 8, 2013 at 11:15 PM, Sanjiva Weerawarana sanj...@wso2.com
  wrote:

 Following up on the discussion we had earlier this week, here's the
 thing I wrote up a while ago ..
 [image: Inline image 1]

 Here's the link to edit / change:


 http://www.websequencediagrams.com/?lz=dGl0bGUgQXBwIEZhY3RvcnkgR2l0IFJlcG9zCgpvcHQgQ3JlYXRlIE5ldyBBcHAKICBBcHBPd25lciAtPiBBRjoAGQhuABcKRiAtPiBHaXRibGl0ABMNcmVwbwogABMIADwFAEYHOiBIZXJlJ3MgeW91ciBhcHAAJAgARwZKZW5raW5zOiBBZGQAZQVidWlsZCB0YXNrIGZvACYLZW5kAIExBkFkZCBEZXZlbG9wZXIAgScPAA8JOiBpbnZpdGF0aW9uIHRvIGpvaW4gcHJvamVjdAogADYKAIFjCFN1cmUgAIFJGXJlcG8AgQkFZGV2AIFYDgBmCwCBWwxwcml2ADEIAIFCKWRldgCBXA8AgR0KV3JpdGVzIENvZGUAgS8QAIMGCUdpdCBQdWxsAIFQEACCCgsALBQAgikLTG9jYWwgSURFAIMABgBLH3NoAINkDgCDQglCADYHAINVBwCDEgcgQ2xvdWQ6IERlcGxveSBkAIJ6CQBXFQCDPwsAgyMKdGVzdC9kZWJ1ZwCEBQpQdWxsIFJlcXVlcwCDRRUAGwVyABcJAIU9BgCFEwoAEwwgcmVjZWl2ZWQAhXwQAIVCCVJldmkAhWkFAEYJAIYoCwCGDwlNZXJnZQCFABpOb3RpZnkgYWNjZXB0AIVTBQs=vs2010

 Sanjiva.
 --
 Sanjiva Weerawarana, Ph.D.
 Founder, Chairman  CEO; WSO2, Inc.;  http://wso2.com/
 email: sanj...@wso2.com; office: +1 650 745 4499 x5700; cell: +94 77
 787 6880 | +1 650 265 8311
 blog: http://sanjiva.weerawarana.org/
 Lean . Enterprise . Middleware





 --
 ajanthan
 --
 Ajanthan Balachandiran
 Senior Software Engineer;
 Solutions Technologies Team ;WSO2, Inc.;  http://wso2.com/

 email: ajanthan@wso2.com; cell: +94775581497
 blog: http://bkayts.blogspot.com/


 Lean . Enterprise . Middleware





 --
 Dimuthu Leelarathne
 Architect  Product Lead of App Factory

 WSO2, Inc. (http://wso2.com)
 email: dimut...@wso2.com
 Mobile : 0773661935

 Lean . Enterprise . Middleware





 --
 Sanjiva Weerawarana, Ph.D.
 Founder, Chairman  CEO; WSO2, Inc.;  http://wso2.com/
 email: sanj...@wso2.com; office: +1 650 745 4499 x5700; cell: +94 77
 787 6880 | +1 650 265 8311
 blog: http://sanjiva.weerawarana.org/
 Lean . Enterprise . Middleware





 --
 Chan (Dulitha Wijewantha)
 Software Engineer - Mobile Development
  WSO2Mobile
 Lean.Enterprise.Mobileware
  * ~Email   duli...@wso2.com duli...@wso2mobile.com*
 *  

Re: [Architecture] WSO2 CEP/Siddhi Storm Integration

2013-12-10 Thread Lasantha Fernando
Hi all,

Storm Trident has a partitionBy() method that partitions a stream according to
fields of a tuple. There are also methods like shuffle() and global() to
repartition processing. Can we use these when creating distributed
partitions for CEP?
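
(For context, a rough sketch of what Trident's field-based repartitioning looks like, assuming the pre-Apache storm.trident API of that era; the spout and the DetectPossibleFraud function are hypothetical placeholders, not part of any existing code.)

import backtype.storm.tuple.Fields;
import storm.trident.TridentTopology;

// Sketch: partition a stream of ATM events by card provider so all events for
// the same provider are processed by the same partition/task.
TridentTopology topology = new TridentTopology();
topology.newStream("atmStatsSpout", atmStatsSpout)          // spout emitting ATM events (placeholder)
        .partitionBy(new Fields("cardProvider"))            // Trident field-based repartitioning
        .each(new Fields("cardNo", "amountWithdrawed"),
              new DetectPossibleFraud(),                    // hypothetical per-partition function
              new Fields("possibleFraud"));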

However, it seems using Trident will introduce some performance hit, since
it adds a level of complexity on top of Storm to guarantee
exactly-once semantics.

Thanks,
Lasantha



[Architecture] [Appfactory][Resources] Improve resource creation and Application Life cycle Management

2013-12-10 Thread Ramith Jayasinghe
Hi

We will be changing the behaviour of how resources/data sources are
created in each stage.

The current behaviour is that when a user (a Developer) with the relevant
permission creates a resource/datasource, it is created in all stages.
The relevant users then need to modify the values for each stage that they own.
For example: DevOps users change values in the Production stage, while QA
user(s) change them in the Testing stage.

The flaw in the above model is that users are implicitly changing
environments they don't have access to (for example, a Developer creates a
resource in the Production environment).

Now the change we are going to introduce is as follows:

 1. A user (who has permission) can create a resource/data source only in
an initial stage (an initial stage is defined as a stage not having a
previous stage, e.g. Development).
 2. When a user promotes an application version, AF will not deploy the new
artifact (currently it does).
 3. Once promoted, users owning the new stage need to explicitly deploy the
artifact in that stage. During this action AppFactory will check whether
new resources were introduced in the previous stage and copy them to the
current one. (Yes, for this to happen the current user needs read access to
the previous stage; in our opinion this is better than the scheme we have now.)

 4. Optionally, when demoting an application version AppFactory can
un-deploy it from that stage. This makes sure that the user needs to
deploy the application explicitly when the application version is promoted
(therefore ensuring AF copies any additional resources introduced, as
explained in step 3).


Thoughts?

-- Ramith Jayasinghe
Technical Lead
WSO2 Inc., http://wso2.com
lean.enterprise.middleware

E: ram...@wso2.com


Re: [Architecture] [Update] Moving the APIM Store to the Enterprise Store

2013-12-10 Thread Sameera Medagammaddegedara
Hello Everyone,

This is a small update on the current progress of the task as of 10/11/2013:

   - The earlier approach of moving the ES Store code to the APIM Store has
   been changed. The new approach involves gradually moving the APIM Store to
   the ES Store. This was deemed better as:
  - It would allow changes to the ES to be more easily propagated to
  the APIM Store and Publisher.
   - The working git repository has been changed to
   https://github.com/splinter/apim. The new repo is a copy of the
   Enterprise Store master branch, as opposed to the earlier repo which merely
   contained the APIM Store and Publisher code. This change is in line with the
   previous point.
   - The new Jaggery test framework has been integrated.
   - The first task being tackled is getting the API listing page
   implemented using the ES Store. [In progress]

*Problems*

   - There is a discrepancy between the storage paths used by the ES asset
   types and those used by the APIM RXTs (api, document and provider). The
   differences are:
  - ES path:
    /_system/governance/{ASSET_TYPE}/@{overview_provider}/@{overview_name}/@{overview_version}
  - APIM paths:
 - API path:
   /_system/governance/apimgt/applicationdata/provider/@{overview_provider}/@{overview_name}/@{overview_version}/api
 - Document path:
   /_system/governance/apimgt/applicationdata/provider/@{overview_apiBasePath}/documentation/@{overview_name}
 - Provider path:
   /_system/governance//providers/@{overview_version}/@{overview_name}
  - In order to accommodate the APIM paths, the ArtifactManager
  (modules/carbon/scripts/artifact.js:line104) will need to be changed.

*Notes:*

   - Identify the main tasks and create Redmine issues. A link [1] to a gdoc
   containing a high-level breakdown of the tasks is given in the references
   section.

*References*

[1] ES-APIM Store Task breakdown:
https://docs.google.com/a/wso2.com/document/d/1w8kjQ5GLgENC_GVKDiHXU4mT-8MmSCgkqMAeJiVo6O0/edit?usp=sharing

Thank You,

Sameera


On Mon, Dec 9, 2013 at 6:42 PM, Sameera Medagammaddegedara 
samee...@wso2.com wrote:

 This is a small update on the current status of the task;

 - All ES modules have been placed in the ref.modules/ folder in order
 to provide a cleaner separation between the existing APIM-Store modules and
 the ES modules. In order to do this I have added a script to resolve paths
 to modules and scripts (refer: modules/orchestrator/orchestrator.js). Once
 we finish moving the APIM-Store to ES we can move the ES modules to the modules
 root.
 - Added the app.js script which contains the initialization code for the
 store (this involved some refactoring).
- Added Caramel and the Store theme from ES.

 *Note:*

- I am planning to add the new Jaggery test framework to the APIM
Store app.


 The working branch in git for the task is given below:

 https://github.com/splinter/apim-apps/tree/add-caramel-configs


 Thank You,
 Sameera

 --
 Sameera Medagammaddegedara
 Software Engineer

 Contact:
 Email: samee...@wso2.com
 Mobile: + 94 077 255 3005




-- 
Sameera Medagammaddegedara
Software Engineer

Contact:
Email: samee...@wso2.com
Mobile: + 94 077 255 3005


[Architecture] [Dev] [ANN] WSO2 Business Process Server (WSO2 BPS) 3.1.0 Released

2013-12-10 Thread Nandika Jayawardana
WSO2 Business Process Server (WSO2 BPS) 3.1.0 Release Notes December 2013

WSO2 Business Process Server (BPS) is an easy-to-use open source business
process server that executes business processes written following the WS-BPEL
and WS-Human Task standards. WS-BPEL is the de facto standard for composing
multiple synchronous and asynchronous web services into collaborative and
transactional process flows, which increases the flexibility and agility of
your Service Oriented Architecture. WS-Human Task allows people activities
to be integrated with business processes. WSO2 BPS is powered by
Apache ODE (http://ode.apache.org) and is available under the Apache Software
License v2.0 (http://www.apache.org/licenses/LICENSE-2.0.html).
WSO2 BPS provides a complete web-based graphical console to deploy, manage
and monitor business processes and process instances.

WSO2 BPS is developed on top of the revolutionary Carbon platform
(Middleware a la carte), and is based on the OSGi framework to achieve
better modularity for your SOA. The Carbon platform contains lots of new
features and many other optional components that can be used to customize
or enhance the functionality provided by BPS to suit your SOA needs. In
addition to installing optional components, you can uninstall unwanted
features without any trouble.

WSO2 BPS is an open source product available under the Apache Software
License (v2.0) (http://www.apache.org/licenses/LICENSE-2.0.html). This
includes all of the extra integration and management functionality as well.
Key Features

   - Deploying Business Processes written in compliance with WS-BPEL 2.0
   Standard and BPEL4WS 1.1 standard.
   - Support for Human Interactions in BPEL Processes with WS-Human Task
   and BPEL4People.
   - Managing BPEL packages, processes and process instances.
   - BPEL Extensions and XPath extensions support
   - Instance recovery (only supports the 'Invoke' activity) through the
   management console
   - OpenJPA based Data Access Layer For BPEL and Human Tasks
   - WS-Security support for business processes.
   - Support for invoking secured(Using WS-Security) partner services.
   - Support for HumanTask Coordination
   - BPEL package hot update, which facilitates versioning of BPEL packages
   - BPEL deployment descriptor editor
   - E4X based data manipulation support for BPEL assignments
   - Configure an external database system as the BPEL engine's persistence
   storage
   - Caching support for business processes.
   - Throttling support for business processes.
   - Transport management.
   - Internationalized web based management console.
   - System monitoring.
   - Try-it for business processes.
   - SOAP Message Tracing.
   - End-point configuration mechanism based on WSO2 Unified Endpoints.
   - Customizable server - You can customize the BPS to fit into your exact
   requirements, by removing certain features or by adding new optional
   features.
   - Performance improvements in XPath evaluations
   - Clustering support for BPEL engine.
   - Process monitoring with WSO2 Business Activity Monitor
   - JMX monitoring support

New Features In This Release

   - WS-Human Task Coordination Support
   - Literal based and Expression based user assignment support for Human
   Tasks
   - Hazelcast based clustering improvements for ODE

Issues Fixed for this release

   - WSO2 BPS related components of the WSO2 Carbon Platform -
   https://wso2.org/jira/secure/IssueNavigator.jspa?mode=hiderequestId=11673

XML & WS-* Standards Support

   - BPEL4WS 1.1
   - WS-BPEL 2.0
   - WS-Human Task 1.1
   - BPEL4People 1.1
   - SOAP 1.1/1.2
   - WSDL 1.1
   - WSDL 2.0
   - MTOM, XOP & SOAP with Attachments
   - WS-Addressing
   - WS-Security 1.0/1.1
   - WS-Trust
   - WS-SecureConversation
   - WS-SecurityPolicy
   - WS-ReliableMessaging
   - WS-Policy
   - WS-PolicyAttachment
   - WS-MetadataExchange
   - WS-Transfer
   - XKMS

Open Source components included in WSO2 BPS/Java

   - Apache ODE (BPEL)
   - Apache Axis2 (SOAP)
   - Apache Axiom (High performance XML Object Model)
   - Apache Rampart/Apache WSS4J (WS-Security)
   - Apache Rahas(WS-SecureConversation)
   - Apache Sandesha2 (WS-ReliableMessaging)
   - Apache Batik
   - WS-Addressing implementation in Axis2
   - Apache Neethi (WS-Policy)
   - WS-SecurityPolicy implementation in Axis2
   - Apache XML Schema
   - Apache Derby (Database)
   - Apache OpenJPA
   - Embedded Apache Tomcat
   - Spring Framework

Apache Axis2 modules included with WSO2 BPS

   - Apache Rampart: Supporting WS-Security & WS-Trust
   - Apache Rahas: Supporting WS-SecureConversation
   - Apache Sandesha2: Supporting WS-Reliable Messaging
   - Mex: Supporting WS-MetaDataExchange
   - Throttle: For throttling requests
   - Statistics: For gathering & monitoring statistics
   - SOAP Tracer: For tracing SOAP requests & responses
   - XFer: Supporting WS-Transfer
   - XKMS: Supporting XML Key Management Specification

Known Issues

   - WS-Human Task implementation does not support sub 

Re: [Architecture] [Dev] Developer Studio 3.3.0 Alpha 3 Released!

2013-12-10 Thread Krishantha Samaraweera
Hi Harshana,

On Thu, Nov 21, 2013 at 2:37 PM, Harshana Martin harsh...@wso2.com wrote:

 Hi Samisa,


 On Thu, Nov 21, 2013 at 2:31 PM, Samisa Abeysinghe sam...@wso2.com wrote:

 Given there is no auto test help, to guard against slips, let's at least
 use a checklist of all mediators to be tested.


 Yes, that's a good idea. We did such a cross check when we did the DevS 3.0.0
 release, so we will reuse the same set to make sure there are no slip-ups.


 I was looking for auto tests because it is hard to test them all
 manually.

 Alternatively, we can see if we can use ESB auto tests in here, where we
 develop the artifacts for all mediators using DevS and test those artifacts
 with ESB auto test framework. That could work.


 Yes.. +1

 We can produce CAR files with all mediator scenarios and provide them to
 the ESB Testing Framework, which could show us errors in the configurations. Also,
 once we are done with it, the same CAR files can be used for future ESB
 testing as well.


When can we get CAR files to cover all mediators? We can add some base
tests to cover deployment, undeployment and re-deployment of CAR files and allow
the ESB team to complete the rest.

Thanks,
Krishantha.


 We will execute that plan for the release. Thanks for the valuable input
 Samisa.

 Best Regards,
 Harshana



 Thanks,
 Samisa...


 Samisa Abeysinghe

 Vice President Training

 WSO2 Inc.
 http://wso2.com



 On Thu, Nov 21, 2013 at 2:27 PM, Harshana Martin harsh...@wso2.com wrote:

 Hi Samisa,


 On Thu, Nov 21, 2013 at 6:50 AM, Samisa Abeysinghe sam...@wso2.com wrote:

 What is the plan to test all mediators for basic functionality for
 3.3.0?


 At the moment we are planning to test the mediators using the samples
 mainly and some known use cases.


 I understand that some mediators were badly broken in 3.2.0 release
 e.g. XACML.


 Although the Entitlement mediator was a specific case that we fixed, there
 could be some other mediators like that. So we are going to cover almost
 all mediators during the testing.


 Can we make use of the auto test framework to ensure that there are no
 absolute blockers in any mediator out of the full mediator lot?


 The test framework is not yet developed to support that level of
 requirement. We will try to improve it while testing 3.3.0, but most
 of the testing will happen manually for this release. We are planning to
 assign someone on a full-time basis to improve the test framework to support
 complete automated testing.

 Thanks and Regards,
 Harshana



 Thanks,
 Samisa...


 Samisa Abeysinghe

 Vice President Training

 WSO2 Inc.
 http://wso2.com



 On Mon, Nov 18, 2013 at 3:49 PM, Asanka Sanjeewa asan...@wso2.com wrote:

 Hi All,

 We have the WSO2 Developer Studio 3.3.0 Alpha 3 version ready to be
 downloaded at [1]. Installed Eclipse distributions are available at [2].

 This release includes the following new feature, improvements and bug
 fixes.

 New Feature

- [TOOLS-1855 https://wso2.org/jira/browse/TOOLS-1855] - Cloud
connector support

 Improvements

- [TOOLS-1870 https://wso2.org/jira/browse/TOOLS-1870] -
Enabling Jaggery Editor Auto-Complete even without typing a character
- [TOOLS-2085 https://wso2.org/jira/browse/TOOLS-2085] - There
is no way to rename a GReg artifact.
- [TOOLS-2106 https://wso2.org/jira/browse/TOOLS-2106] - Adding
perspective login change icon into toolbar
- [TOOLS-2107 https://wso2.org/jira/browse/TOOLS-2107] - Login
window should not pop-up when resetting the perspective and should 
 pop-up
for new perspective Only

 Bug Fixes

- [TOOLS-1337 https://wso2.org/jira/browse/TOOLS-1337] - There
is no way to add include block inside Script Mediator
- [TOOLS-1694 https://wso2.org/jira/browse/TOOLS-1694] - AF
perspective detais view not filling all the feilds
- [TOOLS-1719 https://wso2.org/jira/browse/TOOLS-1719] - [Dev
Studio-3.2] - Logging not consistent with WSO2 Standards
- [TOOLS-1722 https://wso2.org/jira/browse/TOOLS-1722] - Dev
Studio fails to create Registry resource without file extension
- [TOOLS-1731 https://wso2.org/jira/browse/TOOLS-1731] - Dev
Studio fails to rename a resource correctly.
- [TOOLS-1737 https://wso2.org/jira/browse/TOOLS-1737] - [Dev
Studio 3.2] End Point Connection Issues
- [TOOLS-1738 https://wso2.org/jira/browse/TOOLS-1738] - [Dev
Studio 3.2] - Null Argument error for Clone Mediator
- [TOOLS-1743 https://wso2.org/jira/browse/TOOLS-1743] - [Dev
Studio 3.2] - Exceptions Creating EndPoint (JMS)
- [TOOLS-1746 https://wso2.org/jira/browse/TOOLS-1746] - [Dev
Studio-3.2] - Argument not valid Exception in Opening with Registry 
 Info
Editor
- [TOOLS-1747 https://wso2.org/jira/browse/TOOLS-1747] - [Dev
Studio-3.2] -Edit and saving resources to registy not possible
- [TOOLS-1757 https://wso2.org/jira/browse/TOOLS-1757] - Create
new Axis2 Service Project - wrong directory
- [TOOLS-1945 

[Architecture] Connector:Google Drive

2013-12-10 Thread indika prasad
*Introduction*

Google Drive is a file storage and synchronization service provided by
Google, which gives users cloud storage, file sharing and collaborative
editing.
The Google Drive API is used to interact with Google Drive to perform the
operations allowed by the API.

Google Drive Connector Summary

•   Connector Name:  GoogleDrive
•   Version: 1.00
•   Technology:  Java

*Authentication*:

There are two types of accounts that can be used by an application:
service accounts and regular Google accounts. Service accounts are associated
with a service or a project; they do not belong to a user and can only be
accessed programmatically by the associated application. A regular account
belongs to a user. The GoogleDrive connector supports both types of
accounts, and the authentication flow depends on the account type as below
(see the sketch after the list).

 •  Service account authentication:
        The end user needs to provide the service account email address and
        private key to authenticate.
 •  Regular account authentication:
        The client ID, client secret, access token and refresh token are needed
        to authenticate.
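
(Purely for illustration, a minimal sketch of the service-account flow the connector would presumably wrap underneath, assuming the standard google-api-client / Google Drive v2 Java libraries of that time; the account email, key file, application name and file ID below are placeholders.)

import java.io.File;
import java.util.Collections;

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.DriveScopes;

public class DriveServiceAccountSample {
    public static void main(String[] args) throws Exception {
        NetHttpTransport transport = new NetHttpTransport();
        JacksonFactory jsonFactory = new JacksonFactory();

        // Service account authentication: email address + private key (P12 file).
        GoogleCredential credential = new GoogleCredential.Builder()
                .setTransport(transport)
                .setJsonFactory(jsonFactory)
                .setServiceAccountId("my-service-account@developer.gserviceaccount.com") // placeholder
                .setServiceAccountScopes(Collections.singleton(DriveScopes.DRIVE))
                .setServiceAccountPrivateKeyFromP12File(new File("key.p12"))             // placeholder
                .build();

        Drive drive = new Drive.Builder(transport, jsonFactory, credential)
                .setApplicationName("GoogleDriveConnectorSample")                        // placeholder
                .build();

        // Example operation corresponding to the connector's getFile method.
        System.out.println(drive.files().get("someFileId").execute().getTitle());        // placeholder ID
    }
}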

*Methods Summary*
Selected methods (17) for version 1.0

•   *init* - Config data for the connector.
•   *getFile* - Gets a file's metadata by ID.
•   *insertFile* - Inserts a new file.
•   *patchFile* - Updates file metadata. This method supports patch semantics.
•   *updateFile* - Updates file metadata and/or content.
•   *copyFile* - Creates a copy of the specified file.
•   *deleteFile* - Permanently deletes a file by ID. Skips the trash.
•   *ListFile* - Lists the user's files.
•   *trashFile* - Moves a file to the trash.
•   *untrashFile* - Restores a file from the trash.
•   *watchFile* - Starts watching for changes to a file.
•   *listChangesForUser* - Lists the changes for a user.
•   *insertPermissionToFile* - Inserts a permission for a file.
•   *listFilePermission* - Lists a file's permissions.
•   *deleteComment* - Deletes a comment.
•   *getCommentByID* - Gets a comment by ID.
•   *insertNewComment* - Creates a new comment on the given file.



Rest of the available methods (23) provided by the API

•   *touchFile* - Sets the file's updated time to the current server time.
•   *delete* - Removes a child from a folder.
•   *getChild* - Gets a specific child reference.
•   *insertFiletoFolder* - Inserts a file into a folder.
•   *listFolders* - Lists a folder's children. To list all children of the root folder, use the alias root for the folderId value.
•   *deleteParentFromFile* - Removes a parent from a file.
•   *getParentReference* - Gets a specific parent reference.
•   *insertParentFolderToFile* - Adds a parent folder for a file.
•   *listParents* - Lists a file's parents.
•   *deletePermissionFromFile* - Deletes a permission from a file.
•   *getPermissionID* - Gets a permission by ID.
•   *getIdForEmail* - Returns the permission ID for an email address.
•   *ListComments* - Lists a file's comments.
•   *deleteReply* - Deletes a reply.
•   *getReply* - Gets a reply.
•   *insertNewReply* - Creates a new reply to the given comment.
•   *ListReplies* - Lists all of the replies to a comment.
•   *deleteProperty* - Deletes a property.
•   *getProperty* - Gets a property by its key.
•   *insertNewProperty* - Adds a property to a file.
•   *ListProperty* - Lists a file's properties.
•   *DeleteChild* - Removes a child from a folder.
•   *GetChildRef* - Gets a specific child reference.

 Your comments are greatly appreciated.

Thanks
Indika Kularathne






Re: [Architecture] [Appfactory][Resources] Improve resource creation and Application Life cycle Management

2013-12-10 Thread Harsha Thirimanna
As I remember, there was another point in the discussion:
when promoting the artifact, we can do the deployment, resource and
database creation at the same time if the logged-in user has permission to
both stages. I am not sure we are doing that right now.


Re: [Architecture] [Appfactory][Resources] Improve resource creation and Application Life cycle Management

2013-12-10 Thread Ushani Balasooriya
Hi Ramith,

Just need some small clarifications on the questions below.

1) Is it the same when it comes to governance/life cycle management? As in,
if we promote an app version from Development to Testing, does it mean that
the app version will not be deployed in the next environment (e.g.
Testing)?
2) If so, will the user have to build and deploy the app version in the new
stage before they test?
3) Therefore, should the Repo and Build page be available to Test users and
DevOps as well?
4) So when a test user promotes the app version to Production, it does
not mean that it will be deployed in Production until DevOps really does it;
it will be just a stage change. Am I correct?

Regards,


On Tue, Dec 10, 2013 at 8:57 PM, Harsha Thirimanna hars...@wso2.com wrote:

 As I remember there was another one point in the discussion,
 When we promoting the artifact , we can do the deployment, resource and
 database creation at the same time if the logged user has permission to the
 both stages. I am not sure we are doing it right now.





-- 
*Ushani Balasooriya*
Software Engineer - QA;
WSO2 Inc; http://www.wso2.com/.
Mobile; +94772636796


Re: [Architecture] [Appfactory][Resources] Improve resource creation and Application Life cycle Management

2013-12-10 Thread Manjula Rathnayake
Hi Ushani,

See the comments inline,


On Wed, Dec 11, 2013 at 9:21 AM, Ushani Balasooriya ush...@wso2.com wrote:

 Hi Ramith,

 Just need small clarifications for the questions below.

 1) Is it the same when it comes to governance/life cycle management? As in
 if we promote an app version from Development to Testing, does it mean that
 app version will not be deployed in the proceeding environment? E.g.,
 Testing

The life cycle change is the same as before: the application gets promoted to the next
stage. But the application does not get auto-deployed in Testing until
QA (an authorized person in the QA stage) logs in and clicks the 'configure and deploy'
button.
Currently there is no configuration option given to the QA person; resources
are copied from the Development stage. The QA person can edit the values of the
resources as before. However, based on user experience, we might
give a wizard-like user interface to configure all the dependencies used by
this application. (This is not an M10 feature.)

 2) If so, user will have to build and deploy the app version in the new
 stage before they test?

No. In the Testing stage, no build is triggered.

 3) therefore the Repo and Build page should be available for Test users
 and Dev Ops as well?

No.

 4) So when a test user promote an the app version to Production, it does
 not mean that it will be deployed in Production untill Dev Ops really do it
 and it will be just a stage change, Am I correct?

Yes.


 Regards,


 On Tue, Dec 10, 2013 at 8:57 PM, Harsha Thirimanna hars...@wso2.com wrote:

 As I remember there was another one point in the discussion,
 When we promoting the artifact , we can do the deployment, resource and
 database creation at the same time if the logged user has permission to the
 both stages. I am not sure we are doing it right now.





 --
 *Ushani Balasooriya*
 Software Engineer - QA;
 WSO2 Inc; http://www.wso2.com/.
 Mobile; +94772636796




thank you.

-- 
Manjula Rathnayaka
Software Engineer
WSO2, Inc.
Mobile:+94 77 743 1987


Re: [Architecture] [Appfactory][Resources] Improve resource creation and Application Life cycle Management

2013-12-10 Thread Ramith Jayasinghe
Hi Harsha,
 My suggestion would be to keep the behaviour consistent (by not auto-deploying
and copying resources even if the user has permission to both
stages).
 Reasons:
  1. This might confuse users.
  2. In my view, having access to multiple stages is a rare situation.

regards
Ramith.

On Tue, Dec 10, 2013 at 8:57 PM, Harsha Thirimanna hars...@wso2.com wrote:

 As I remember there was another one point in the discussion,
 When we promoting the artifact , we can do the deployment, resource and
 database creation at the same time if the logged user has permission to the
 both stages. I am not sure we are doing it right now.





-- 
Ramith Jayasinghe
Technical Lead
WSO2 Inc., http://wso2.com
lean.enterprise.middleware

E: ram...@wso2.com
P: +94 776715671


Re: [Architecture] [Appfactory] BAM integration-Getting the data summerization done right

2013-12-10 Thread Gayan Dhanushka
Hi all,

As per a discussion that Dimuthu, Srinath and I had last week, I changed
the data files of the gadgets back to reading from the database, as it solves
some of the complications that we may encounter in the future.

Thanks
GayanD

Gayan Dhanuska
Software Engineer
http://wso2.com/
Lean Enterprise Middleware

Mobile
071 666 2327

Office
Tel   : 94 11 214 5345
Fax  : 94 11 214 5300

Twitter : https://twitter.com/gayanlggd


On Mon, Dec 2, 2013 at 12:01 PM, Gayan Dhanushka gay...@wso2.com wrote:

 Hi all,

 We started off the AF BAM integration by publishing events to BAM and then
 rendering gadgets that make a DB call to the MySQL database which contains the
 summarized data. Later, as per a discussion which was conducted, it was
 suggested that it is preferable to read the AF registry for some data which
 is captured through rxts (e.g. appcreation, appversion). But there are some
 concerns.

 1) These rxts do capture some data but not the whole event (e.g. the
 appcreation rxt doesn't capture the app creation timestamp).

 2) In order to capture that missing data, we still need to publish events which
 are already captured by the underlying rxts. So the summarization of this data
 is essential; therefore the Hive scripts need to run, meaning they cannot be
 removed.

 3) Reading the registry through the ArtifactManager in Jaggery requires
 heavy calculations when the number of apps is very large.

 4) The gadget will have a huge pain doing these calculations, while its real
 purpose is just rendering data.

 So I can hardly see how reading data from the registry can simplify things
 or reduce resource consumption. WDYT?

 Thanks
 GayanD


 Gayan Dhanuska
 Software Engineer
 http://wso2.com/
 Lean Enterprise Middleware

 Mobile
 071 666 2327

 Office
 Tel   : 94 11 214 5345
 Fax  : 94 11 214 5300

 Twitter : https://twitter.com/gayanlggd



Re: [Architecture] Connector:Google Drive

2013-12-10 Thread Chanaka Fernando
Hi Indika,

Since Google Drive is primarily about files and folders, it would be better to
include some folder-related methods in your supported list. My suggestion is to
add the following methods to the list of supported methods.

•   *touchFile* - Sets the file's updated time to the current server time.
•   *getChild* - Gets a specific child reference.
•   *insertFiletoFolder* - Inserts a file into a folder.
•   *listFolders* - Lists a folder's children. To list all children of the root folder, use the alias root for the folderId value.

WDYT?

Thanks,
Chanaka

