Re: [Architecture] How to forcefully remove an entry from app factory registry path cache?

2014-12-02 Thread Manjula Rathnayake
Hi Shazni and Pirinthapan,

thanks for the clarification.

On Tue, Dec 2, 2014 at 1:28 PM, Shazni Nazeer sha...@wso2.com wrote:

 Hi Manjula,

 Yes. The cacheId that you specify is not the '*connectionId*' that we
 create in the method. When a resource is added to the cache, we take the
 '*connectionId*', as implemented, to create the cache key. Therefore,
 while retrieving from the cache we should use the same approach.

 Shazni Nazeer

 Senior Software Engineer

 Mob : +94 37331
 LinkedIn : http://lk.linkedin.com/in/shazninazeer
 Blog : http://shazninazeer.blogspot.com

 On Tue, Dec 2, 2014 at 12:20 PM, Pirinthapan Mahendran 
 pirintha...@wso2.com wrote:

 Hi Manjula,

 As I understand it, cacheKey is a RegistryCacheKey object, which is
 different from the cacheId. So we don't need to read the cacheId.

 Thanks.



 Mahendran Pirinthapan
 Software Engineer | WSO2 Inc.
 Mobile +94772378732.

 On Tue, Dec 2, 2014 at 12:01 PM, Manjula Rathnayake manju...@wso2.com
 wrote:

 Hi Shazni,

 I checked the code for the removeCache method and found that cacheKey is
 calculated as below.

 String connectionId = (dataBaseConfiguration.getUserName() != null ?
         dataBaseConfiguration.getUserName().split("@")[0] :
         dataBaseConfiguration.getUserName())
         + "@" + dataBaseConfiguration.getDbUrl();
 cacheKey = RegistryUtils.buildRegistryCacheKey(connectionId, tenantId, path);
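In isolation, that derivation behaves as below; this is a minimal, self-contained sketch (the class and method names are mine, only the null-check/split/concatenation logic is taken from the quoted code):

```java
public class ConnectionIdSketch {

    // Mirrors the quoted kernel logic: keep only the part of the user name
    // before the first '@' (when a user name exists), then append '@' plus
    // the database URL.
    static String buildConnectionId(String userName, String dbUrl) {
        String user = (userName != null) ? userName.split("@")[0] : userName;
        return user + "@" + dbUrl;
    }

    public static void main(String[] args) {
        // Both calls print "root@jdbc:mysql://host:3306/db"
        System.out.println(buildConnectionId("root", "jdbc:mysql://host:3306/db"));
        System.out.println(buildConnectionId("root@carbon.super", "jdbc:mysql://host:3306/db"));
    }
}
```

Note that a tenant-qualified user name such as root@carbon.super collapses to the same user@jdbc-url form that the cacheId element in registry.xml uses.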

 But in registry.xml we have the below element too:

 <cacheId>root@jdbc:mysql://mysql1.appfactory.private.wso2.com:3306/dbGovernanceCloud</cacheId>

 Shouldn't we read the cacheId element first, and calculate as above only if
 the cacheId element is not defined? Or did I misunderstand the configuration?

 thank you.

 On Wed, Nov 12, 2014 at 2:23 PM, Shazni Nazeer sha...@wso2.com wrote:

 Hi,

 Given that we know the registry path of the resource whose cache entry is
 to be deleted, and have an instance of the registry, we can manually delete
 the cache entry with a method like removeCache in the file below. However,
 it's not a clean or correct way of manipulating the registry cache.


 https://github.com/wso2-dev/carbon-governance/blob/master/components/governance/org.wso2.carbon.governance.custom.lifecycles.checklist/src/main/java/org/wso2/carbon/governance/custom/lifecycles/checklist/util/LifecycleBeanPopulator.java
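The eviction idea behind such a removeCache method can be sketched self-contained; here a plain in-memory map stands in for the real registry cache (which is keyed by a RegistryCacheKey built from connectionId, tenantId and path), and all names below are illustrative rather than the actual Carbon API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheEvictionSketch {

    // Toy stand-in for the registry path cache.
    static final Map<String, String> pathCache = new ConcurrentHashMap<>();

    // Illustrative key: the real code combines the same three parts via
    // RegistryUtils.buildRegistryCacheKey(connectionId, tenantId, path).
    static String cacheKey(String connectionId, int tenantId, String path) {
        return connectionId + "|" + tenantId + "|" + path;
    }

    // Evict a single registry path from the cache, mirroring removeCache.
    static void removeCache(String connectionId, int tenantId, String path) {
        pathCache.remove(cacheKey(connectionId, tenantId, path));
    }

    public static void main(String[] args) {
        String key = cacheKey("root@jdbc:mysql://host/db", -1234, "/_system/governance/app1");
        pathCache.put(key, "cached resource");
        removeCache("root@jdbc:mysql://host/db", -1234, "/_system/governance/app1");
        System.out.println(pathCache.containsKey(key)); // prints false
    }
}
```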

 Shazni Nazeer

 Senior Software Engineer

 Mob : +94 37331
 LinkedIn : http://lk.linkedin.com/in/shazninazeer
 Blog : http://shazninazeer.blogspot.com

 On Wed, Nov 12, 2014 at 1:48 PM, Dimuthu Leelarathne dimut...@wso2.com
  wrote:

 Hi Pulasthi,

 So when we are doing global invalidation, what is the method we are
 going to use to invalidate the cache within the JVM? :) Or are you going
 to do it by magic?

 thanks,
 dimuthu

 On Wed, Nov 12, 2014 at 1:36 PM, Pulasthi Supun pulas...@wso2.com
 wrote:

 Hi Dimuthu,



 On Tue, Nov 11, 2014 at 4:17 PM, Dimuthu Leelarathne 
 dimut...@wso2.com wrote:

 Hi,



 On Tue, Nov 11, 2014 at 3:43 PM, Pulasthi Supun pulas...@wso2.com
 wrote:

 Hi Dimuthu,

 On Tue, Nov 11, 2014 at 2:05 PM, Dimuthu Leelarathne 
 dimut...@wso2.com wrote:

 Hi Pulasthi,

 Yes. We do not need global invalidation (although it would solve the
 problem), but the request is to sync the AF Registry cache with the DB. We
 are in the same JVM, and we need a method/way to tell the registry to
 remove this particular path from the Registry path cache.


 Such a method would need to be accessed through something like the remote
 registry, right? The Registry API does not provide such a method to remove
 entries from the Registry cache.


 It should not be remote. An OSGi level method would be fine. Is
 there a way to patch the registry that we use?


 I talked with Azeez regarding this. He also agrees that providing such a
 method to manipulate the cache is wrong. We need to think of some other
 solution for this. I am not sure if the global cache invalidation work has
 been completed or can be backported into 4.2.0.
 @Amal is the work on that complete?

 Regards,
 Pulasthi


 thanks,
 dimuthu



 Regards,
 Pulasthi


 thanks,
 dimuthu


 On Tue, Nov 11, 2014 at 12:16 PM, Pulasthi Supun 
 pulas...@wso2.com wrote:

 Hi All,


 From what I understand, the AF and SM are in different domains; that is
 why distributed caching will not be able to handle this scenario, right?
 Global cluster cache invalidation has been done with the use of a pub-sub
 mechanism (discussed on architecture@ under 'Global cluster cache
 invalidation code review Notes'), but this will only be available in the
 next release AFAIK.

 Regards,
 Pulasthi

 On Tue, Nov 11, 2014 at 10:00 AM, Amalka Subasinghe 
 ama...@wso2.com wrote:


 Hi,

 The scenario is, we have mounted the SM's registry to the App Factory
 registry to remove the remote call for reading resources, but the write
 calls still happen via SM. See the image below.

 [image not included in the archive]

 The problem is, once we do a write call to the SM's registry, the
 App Factory registry cache won't get 

Re: [Architecture] [POC] Performance evaluation of Hive vs Shark

2014-12-02 Thread Niranda Perera
Hi David,

Sorry to re-initiate this thread, but may I know if you have done any
benchmarking of the Datastax Spark Cassandra connector against the Stratio
Deep Spark-Cassandra integration? I would love to take a look at it.

I recently checked the deep-spark GitHub repo and noticed that there has
been no activity since Oct 29th. May I know what your future plans are for
this particular project?

Cheers

On Tue, Aug 26, 2014 at 9:12 PM, David Morales dmora...@stratio.com wrote:

 Yes, it is already included in our benchmarks.

 It could be a nice idea to share our findings; let me talk about it here.
 Meanwhile, you can ask us any questions via my mail or this thread; we
 are glad to help you.


 Best regards.


 2014-08-24 15:49 GMT+02:00 Niranda Perera nira...@wso2.com:

 Hi David,

 Thank you for your detailed reply.

 It was great to hear about Stratio-Deep and I must say, it looks very
 interesting. Storage handlers for databases such as Cassandra, MongoDB etc.
 would be very helpful. We will definitely look into Stratio-Deep.

 I came across the Datastax Spark-Cassandra connector (
 https://github.com/datastax/spark-cassandra-connector ). Have you done
 any comparison between your implementation and Datastax's connector?

 And, yes, please do share the performance results with us once it's ready.

 On a different note, is there any way for us to interact with Stratio dev
 community, in the form of dev mail lists etc, so that we could mutually
 share our findings?

 Best regards



 On Fri, Aug 22, 2014 at 2:07 PM, David Morales dmora...@stratio.com
 wrote:

 Hi there,

 *1. About the size of deployments.*

 It depends on your use case... especially when you combine Spark with a
 datastore. We usually deploy Spark with Cassandra or MongoDB, instead of
 using HDFS, for example.

 Spark will be faster if you put the data in memory, so if you need a lot
 of speed (interactive queries, for example), you should have enough memory.


 *2. About storage handlers.*

 We have developed the first tight integration between Cassandra and
 Spark, called Stratio Deep, announced in the first spark summit. You can
 check Stratio Deep out here: https://github.com/Stratio/stratio-deep (open,
 apache2 license).

 *Deep is a thin integration layer between Apache Spark and several NoSQL
 datastores. We currently support Apache Cassandra and MongoDB, but in the
 near future we will add support for several other datastores.*

 Datastax announced its own driver for Spark at the last Spark Summit,
 but we have been working on our solution for almost a year.

 Furthermore, we are working to extend this solution in order to
 work with other databases as well... MongoDB integration is complete right
 now and ElasticSearch will be ready in a few weeks.

 And that is not all, we have also developed an integration with
 Cassandra and Lucene for indexing data (open source, apache2).

 *Stratio Cassandra is a fork of Apache Cassandra
 http://cassandra.apache.org/ where index functionality has been extended
 to provide near real time search such as ElasticSearch or Solr,
 including full text search
 http://en.wikipedia.org/wiki/Full_text_search capabilities and free
 multivariable search. It is achieved through an Apache Lucene
 http://lucene.apache.org/ based implementation of Cassandra secondary
 indexes, where each node of the cluster indexes its own data.*


 We will publish some benchmarks in two weeks, so I will share our
 results here if you are interested.


 If you are more interested in distributed file systems, you should take
 a look at Tachyon: http://tachyon-project.org/index.html


 *3. Spark - Hive compatibility*

 Spark will support anything with the Hadoop InputFormat interface.


 *4. Performance*

 We are working a lot with Cassandra and MongoDB and the performance is
 quite nice. We are finishing right now some benchmarks comparing Hadoop +
 HDFS vs Spark + HDFS vs Spark + Cassandra (using Stratio Deep and even our
 fork of Cassandra).

 Let me share these results with you when they are ready, ok?




 Regards.

 2014-08-22 7:53 GMT+02:00 Niranda Perera nira...@wso2.com:

 Hi Srinath,
 Yes, I am working on deploying it on a multi-node cluster with the DEBS
 dataset. I will keep architecture@ posted on the progress.


 Hi David,
 Thank you very much for the detailed insight you've provided.
 Few quick questions,
 1. Do you have experience in using storage handlers in Spark?
 2. Would a storage handler used in Hive be directly compatible with
 Spark?
 3. How do you grade the performance of Spark with other databases such
 as Cassandra, HBase, H2, etc.?

 Thank you very much again for your interest. Look forward to hearing
 from you.

 Regards


 On Thu, Aug 21, 2014 at 7:02 PM, Srinath Perera srin...@wso2.com
 wrote:

 Niranda, we need to test Spark in multi-node mode before making a
 decision. Spark is very fast, I think there is no doubt about that. We
 need to make sure it is stable.

 David, thanks for a 

[Architecture] WSO2 Message Broker 3.0.0 Milestone 2 Released !

2014-12-02 Thread Hasitha Amal De Silva
The WSO2 Message Broker team is pleased to announce the 2nd milestone
release of WSO2 Message Broker (MB) 3.0.0.

WSO2 Message Broker (WSO2 MB) 3.0.0 is designed to be a fast, lightweight
and user friendly open source distributed message brokering system under
the Apache Software License v2.0
http://www.apache.org/licenses/LICENSE-2.0.html. It primarily allows
system administrators and developers to easily configure JMS queues and
topics which could be used for message routing, message stores and message
processors.

WSO2 MB is compliant with Advanced Message Queuing Protocol Version 0-9-1,
Java Message Service Specification version 1.1 and MQTT protocol version
3.1. WSO2 MB 3.0.0 is developed on top of the revolutionary WSO2 Carbon
platform http://wso2.org/projects/carbon (Middleware a' la carte), an
OSGi based framework that provides seamless modularity to your SOA via
modularization. This release also contains many other new features and a
range of optional components (add-ons) that can be installed to customize
the behavior of the MB. Further, any existing features of the MB which are
not required in your environment can be easily removed using the underlying
provisioning framework of Carbon. In a nutshell, WSO2 MB can be fully
customized and tailored to meet your exact SOA needs.
You can download this distribution from [1].
Release notes for this milestone can be found at [2].
Known issues are listed at [3].

New Features introduced in this release :

1) Support for MQTT Messaging Protocol : as per specification version 3.1

2) New configuration model : The previous configurations were tightly
coupled with the Qpid model. The new model separates configurations specific
to WSO2 MB and provides a much cleaner, consistent property breakdown.
How You Can Contribute

Mailing Lists

Join our mailing list and correspond with the developers directly.

   - Developer List : d...@wso2.org | Subscribe
   dev-requ...@wso2.org?subject=subscribe | Mail Archive
   http://wso2.org/mailarchive/carbon-dev/
   - User List : StackOverflow.com
   http://stackoverflow.com/questions/tagged/wso2

Reporting Issues

WSO2 encourages you to report issues and your enhancement requests for the
WSO2 MB using the public JIRA https://wso2.org/jira/browse/MB.

You can also watch how they are resolved, and comment on the progress.

[1]
https://svn.wso2.org/repos/wso2/scratch/MB/3.0.0/M2/wso2mb-3.0.0-SNAPSHOT-m2.zip

[2]
https://wso2.org/jira/secure/ReleaseNote.jspa?projectId=10200&version=11592

[3]
https://wso2.org/jira/browse/MB#selectedTab=com.atlassian.jira.plugin.system.project%3Aissues-panel


*~ The WSO2 MB Team ~*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Proposed ESB connector scenario - Integrating Canvas (LMS) with Eventbrite, MailChimp, GoogleCalendar, Facebook, GoToTraining and Disqus.

2014-12-02 Thread Rasika Hettige
Hi All,

Since the Eventbrite API has been changed recently, the scenarios will also
be modified accordingly:

*Course Creation and Marketing/Training Management*
1.  A course can be created in Canvas as an offline process.
2.  This course can be retrieved from Canvas and created as a paid event
with the required ticket details via the EventBrite API, after which the
created event should be published through the EventBrite API.
3.  For the published events, a campaign will be created and sent via the
MailChimp API for marketing purposes. As a prerequisite, MailChimp should
have a list of existing subscribers.

Note: Once the mail has been sent from MailChimp, users can access the
event in EventBrite through that mail, pay online, and purchase tickets for
the event in EventBrite. This is an offline process.

1.  The attendees can be retrieved from the EventBrite API;
     a.  They can be added as users to the course in the Canvas API.
     b.  They can also be added to the MailChimp subscribers list, if they
are not in the list already.
2.  The users added to Canvas will receive an email invitation, using which
they can register themselves for the course. The enrolled users can be
retrieved from Canvas, and a calendar can be created and shared with them
via the Google Calendar API. (In order to perform this step, students must
have given a Gmail ID.)


The following step under Course Creation and Marketing is removed, as the
Facebook API no longer supports such functionality:

4.  The students who have specified their Facebook ID while registering
will be invited to join the private group in Facebook.

To cater for the above changes, the following methods have been added.

•   createEvent - Create an Event.
•   createTicketClass - Create a Ticket Class to associate with the event.
•   publishEvent - Publish a created event.

Thanks and Regards
Rasika




--
View this message in context: 
http://wso2-oxygen-tank.10903.n7.nabble.com/Proposed-ESB-connector-scenario-Integrating-Canvas-LMS-with-Eventbrite-MailChimp-GoogleCalendar-Face-tp106067p108184.html
Sent from the WSO2 Architecture mailing list archive at Nabble.com.


Re: [Architecture] ESB connector scenario - Integrating Eventbrite with GoToWebinar, Facebook, ConstantContact, Gmail, Nexmo and SurveyMonkey

2014-12-02 Thread Rasika Hettige
Hi All,

Since the Eventbrite API has been changed recently, the following steps will
be revised accordingly:

*Create Events and Tickets*


· Create webinars in GoToWebinar, as an offline process outside the ESB.

· Retrieve selected webinars from the GoToWebinar API and create
events via the Eventbrite API.

· Create tickets via the Eventbrite API for the above created events.

· Publish the events via the Eventbrite API.

Thanks & Regards
Rasika




--
View this message in context: 
http://wso2-oxygen-tank.10903.n7.nabble.com/ESB-connector-scenario-Integrating-Eventbrite-with-GoToWebinar-Facebook-ConstantContact-Gmail-Nexmo-y-tp108150p108188.html
Sent from the WSO2 Architecture mailing list archive at Nabble.com.


[Architecture] Associating multiple lifecycles to a registry resource

2014-12-02 Thread Sagara Gunathunga
The current G-Reg admin console is designed to associate only one Lifecycle
with a registry resource at any given time, but it seems we have valid use
cases which require associating more than one Lifecycle with a registry
resource.

E.g. - ES + G-Reg integration

- ES has an approval process which defines and manages the lifecycle of an
asset within the 'context of store'.

- The same asset/resource (e.g. a REST Service) has a governance lifecycle
within the 'context of G-Reg' (e.g. dev, Q/A, product status).

Right now both of the above Lifecycles have been implemented using SCXML,
and the problem is that it's not possible to associate more than one
Lifecycle with a registry resource. During the last week we had a meeting
and identified that supporting the association of multiple Lifecycles is the
best way to go forward.

Further, in order to realize this multiple-Lifecycles concept properly, we
should think of associating more than one Lifecycle as associating multiple
'contexts' with a resource, where under each context the resource can have
independent/dependent lifecycles. Further, with this change the 'state' of a
resource should be qualified with a given context; in other words, the
question 'what is the state of resource A' should be raised as 'what is the
state of resource A under context X'.

As an example, consider 'Publish a Q/A stage service into the store':

- Under the project or governance context - service state is 'Q/A'
- Under the Store context - service is 'Published'


@Senka, I would like to know whether there is any specific reason that we
haven't implemented this support in the past? If there is no such barrier we
can proceed further.

 Thanks !
-- 
Sagara Gunathunga

Senior Technical Lead; WSO2, Inc.;  http://wso2.com
V.P Apache Web Services;http://ws.apache.org/
Linkedin; http://www.linkedin.com/in/ssagara
Blog ;  http://ssagara.blogspot.com


Re: [Architecture] Associating multiple lifecycles to a registry resource

2014-12-02 Thread Senaka Fernando
Hi Sagara,

No, there is no real barrier. Even with SCXML this is possible. SCXML is
just a standard for states and transitions. We create an instance of a
state engine using a set of resource properties. If you want multiple
lifecycles, and want to retain the same model, it is a matter of using
multiple properties. If you group these together, they could end up being
a Context that you define. But when we say multiple, we need to be careful
about whether it is 1 or 2 or 3 or X. That's what makes things complicated.

Having said that, in the past we had something called an aspect, which was
later improved to lifecycle (i.e. lifecycle extends aspect), but then
lifecycle was not built as an extension point and the aspect interface
itself was useless. So we ended up with just one default lifecycle
implementation and a few extensions based on that, and there was no real
need and/or support for multiple lifecycles. This is why this was never
implemented in the past. But with the API-M and ES use cases we had the
need, though it was hard to generalize since different products had their
own versions. It took a while for everybody to reach common ground and I
believe that we've got there now.

Coincidentally, I happened to write a blog post on the need for multiple
lifecycles a few days back [1].

[1]
http://senakafdo.blogspot.co.uk/2014/11/state-of-development-vs-state-of.html
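A minimal sketch of what this message describes — one SCXML lifecycle per resource property, optionally grouped into a context — might look like the following resource properties (the key names below are illustrative, not the actual registry property keys):

```properties
# One state property per lifecycle attached to the same resource
registry.lifecycle.ServiceGovernanceLifeCycle.state = Q/A
registry.lifecycle.StoreLifeCycle.state             = Published

# Hypothetical grouping of each lifecycle under a 'context'
registry.lifecycle.ServiceGovernanceLifeCycle.context = governance
registry.lifecycle.StoreLifeCycle.context             = store
```

With this shape, asking "what is the state of resource A" becomes a lookup qualified by lifecycle (or context), matching the thread's point that state must be context-scoped.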

Thanks,
Senaka.

On Tue, Dec 2, 2014 at 4:35 PM, Sagara Gunathunga sag...@wso2.com wrote:






-- 


*Senaka Fernando*
Solutions Architect; WSO2 Inc.; http://wso2.com
Member; Apache Software Foundation; http://apache.org
E-mail: senaka AT wso2.com
P: +1 408 754 7388; ext: 51736
M: +44 782 741 1966
Linked-In: http://linkedin.com/in/senakafernando
Lean . Enterprise . Middleware


Re: [Architecture] Associating multiple lifecycles to a registry resource

2014-12-02 Thread Sagara Gunathunga
On Tue, Dec 2, 2014 at 10:34 PM, Senaka Fernando sen...@wso2.com wrote:

 Hi Sagara,

 No there is no real-barrier. Even with SCXML this is possible. SCXML is
 just a standard for states and transitions. We create an instance of a
 state engine using a set of resource properties. If you want multiple
 lifecycles, and want to retain the same model, it is a matter of using
 multiple properties. If you group these together, these could end-up being
 a Context that you define. But, when we say multiple we need to be careful
 of whether it is 1 or 2 or 3 or X. That's what makes things complicated.

 Having said that, in the past, we had something called aspect, which was
 later improved to lifecycle (i.e. lifecycle extends aspect), but then
 lifecycle was not build as an extension point and the aspect interface
 itself was useless. So, we ended with just one default lifecycle
 implementation and few extensions based on that, and there was no real need
 and/or support for multiple lifecycles. This is why this was never
 implemented in the past. But, with API-M and ES use-cases we had the need
 but then it was hard to generalize since different products had their own
 versions. It took a while for everybody to reach common ground and I
 believe that we've got there now.


Thanks for sharing your insights; we are more or less on common ground,
where everybody agrees that inventing a lifecycle implementation per product
is not the right way to solve problems.


 Coincidentally I happened to write a blog post on the need of multiple
 lifecycles, few days back, [1].

 [1]
 http://senakafdo.blogspot.co.uk/2014/11/state-of-development-vs-state-of.html
 .


Great :) Will look into your post.

Thanks !




-- 
Sagara Gunathunga

Senior Technical Lead; WSO2, Inc.;  http://wso2.com
V.P Apache Web Services;http://ws.apache.org/
Linkedin; http://www.linkedin.com/in/ssagara
Blog ;  http://ssagara.blogspot.com


Re: [Architecture] [POC] Performance evaluation of Hive vs Shark

2014-12-02 Thread David Morales
Hi!

Please, check the develop branch if you want to see a more realistic view
of our development path. Last commit was about two hours ago :)

Stratio Deep is one of our core modules, so there is a core team at Stratio
fully devoted to Spark + NoSQL integration. In recent months, for
example, we have added MongoDB, ElasticSearch and Aerospike to Stratio
Deep, so you can talk to these databases from Spark just like you do with
HDFS.

Furthermore, we are working on more backends, such as neo4j or couchBase,
for example.


About our benchmarks, you can check out some results in this link:
http://www.stratio.com/deep-vs-datastax/

Please keep in mind that Spark integration with a datastore can be done
in two ways: HCI or native. We are now working on improving the native
integration because it's considerably more performant. Along these lines,
we are working on some other tests with even more impressive results.


Here you can find a technical overview of all our platform.


http://www.slideshare.net/Stratio/stratio-platform-overview-v41

Regards

2014-12-02 11:14 GMT+01:00 Niranda Perera nira...@wso2.com:

 Hi David,

 Sorry to re-initiate this thread. But may I know if you have done any
 benchmarking on Datastax Spark cassandra connector and Stratio Deep-spark
 cassandra integration? Would love to take a look at it.

 I recently checked deep-spark github repo and noticed that there is no
 activity since Oct 29th. May I know what your future plans on this
 particular project?

 Cheers

 On Tue, Aug 26, 2014 at 9:12 PM, David Morales dmora...@stratio.com
 wrote:

 Yes, it is already included in our benchmarks.

 It could be a nice idea to share our findings, let me talk about it here.
 Meanwhile, you can ask us any question by using my mail or this thread, we
 are glad to help you.


 Best regards.


 2014-08-24 15:49 GMT+02:00 Niranda Perera nira...@wso2.com:

 Hi David,

 Thank you for your detailed reply.

 It was great to hear about Stratio-Deep and I must say, it looks very
 interesting. Storage handlers for databases such Cassandra, MongoDB etc
 would be very helpful. We will definitely look up on Stratio-Deep.

 I came across with the Datastax Spark-Cassandra connector (
 https://github.com/datastax/spark-cassandra-connector ). Have you done
 any comparison with your implementation and Datastax's connector?

 And, yes, please do share the performance results with us once it's
 ready.

 On a different note, is there any way for us to interact with Stratio
 dev community, in the form of dev mail lists etc, so that we could mutually
 share our findings?

 Best regards



 On Fri, Aug 22, 2014 at 2:07 PM, David Morales dmora...@stratio.com
 wrote:

 Hi there,

 *1. About the size of deployments.*

 It depends on your use case... specially when you combine spark with a
 datastore. We use to deploy spark with cassandra or mongodb, instead of
 using HDFS for example.

 Spark will be faster if you put the data in memory, so if you need a
 lot of speed (interactive queries, for example), you should have enough
 memory.


 *2. About storage handlers.*

 We have developed the first tight integration between Cassandra and
 Spark, called Stratio Deep, announced in the first spark summit. You can
 check Stratio Deep out here: https://github.com/Stratio/stratio-deep (open,
 apache2 license).

 *Deep is a thin integration layer between Apache Spark and several
 NoSQL datastores. We actually support Apache Cassandra and MongoDB, but in
 the near future we will add support for sever other datastores.*

 Datastax have announce its own driver for spark in the last spark
 summit, but we have been working in our solution for almost a year.

 Furthermore, we are working to extend this solution in order to
 work also with other databases... MongoDB integration is completed right
 now and ElasticSearch will be ready in a few weeks.

 And that is not all, we have also developed an integration with
 Cassandra and Lucene for indexing data (open source, apache2).

 *Stratio Cassandra is a fork of Apache Cassandra
 http://cassandra.apache.org/ where index functionality has been extended
 to provide near real time search such as ElasticSearch or Solr,
 including full text search
 http://en.wikipedia.org/wiki/Full_text_search capabilities and free
 multivariable search. It is achieved through an Apache Lucene
 http://lucene.apache.org/ based implementation of Cassandra secondary
 indexes, where each node of the cluster indexes its own data.*


 We will publish some benchmarks in two weeks, so I will share our
 results here if you are interested.


 If you are more interested in distributed file systems, you should take
 a look at Tachyon: http://tachyon-project.org/index.html


 *3. Spark - Hive compatibility*

 Spark will support anything with the Hadoop InputFormat interface.


 *4. Performance*

 We are working a lot with Cassandra and MongoDB and the performance is
 quite nice. We are finishing right now some benchmarks comparing 

Re: [Architecture] Moving deployment section from configuration xml and Facilitate AppFactory users to deploy apps with their custom app types if its not already available

2014-12-02 Thread Aiyadurai Rajeevan
Hi Dimuthu/All,

In addition to this mail conversation, we discussed this in an internal
forum. Here is the update of that discussion.

As of today we are using the appfactory.xml file for the runtime
configurations; the fragment below shows the configuration properties.

<ApplicationType name="*">

    <ClassName>org.wso2.carbon.appfactory.jenkins.deploy.JenkinsArtifactDeployer</ClassName>

    <Endpoint>https://sc.s2.AF_HOST:9463/services/</Endpoint>

    <RepositoryProvider>
        <Property name="Class">org.wso2.carbon.appfactory.s4.integration.GITBlitBasedGITRepositoryProvider</Property>
        <Property name="BaseURL">https://gitblit.s2.wso2.com:8444/</Property>
        <Property name="URLPattern">{@stage}/as</Property>
        <Property name="AdminUserName">admin</Property>
        <Property name="AdminPassword">admin</Property>
    </RepositoryProvider>

    <Properties>
        <Property name="alias">asdev</Property>
        <Property name="cartridgeType">asdev</Property>
        <Property name="deploymentPolicy">af-deployment</Property>
        <Property name="autoscalePolicy">economy</Property>
        <Property name="repoURL"></Property>
        <Property name="dataCartridgeType"></Property>
        <Property name="dataCartridgeAlias"></Property>
        <Property name="subscribeOnDeployment">false</Property>
    </Properties>

</ApplicationType>


*Proposed solution*

*Part 1:* In the above xml, the content enclosed within the
*RepositoryProvider* element is used for the PaaS artifact storage
configuration. Hence, as suggested, we can keep this in the
*org.wso2.carbon.appfactory.jenkins.AppfactoryPluginManager.xml* file.

*Part 2:* The content enclosed within the *Properties* tag is used for
the subscription. Hence, below is the solution we are proposing, so that
it would be more user friendly.

There can be multi-tenant and single-tenant subscribers; let's focus on
the multi-tenant scenario here.

   *Step 1*: Create tenant

   *Step 2*: Tenant admin login

   *Step 3*: Go to the subscriber manager. This would be a GUI which lets
the user subscribe to the needed cartridge type, environment (Dev, Test,
Prod), deploymentPolicy and autoscalePolicy. The GUI shall look like below.




Here we can populate the cartridge type, deploymentPolicy and autoscalePolicy
details from the Stratos service.

So the user can select the needed details in the above GUI and click
subscribe. That will invoke a call to the Stratos service for the cartridge
allocation and create the repo URL which will be used to commit the code in
s2Git. Altogether there would be three URLs for the 3 environments.
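The per-environment repo URL construction described above could be sketched roughly as follows. This is a hypothetical illustration based on the BaseURL and URLPattern values shown earlier; the class, method, and exact URL layout are assumptions, not the actual App Factory implementation:

```java
// Hypothetical sketch: deriving the three per-environment repository URLs
// from the configured BaseURL and URLPattern ({@stage}/as). Names here are
// illustrative only.
public class RepoUrlBuilder {

    static String buildRepoUrl(String baseUrl, String urlPattern, String stage, String appKey) {
        // Substitute the {@stage} placeholder, then append the application key
        // as the git repository name (assumed layout).
        String path = urlPattern.replace("{@stage}", stage);
        return baseUrl + path + "/" + appKey + ".git";
    }

    public static void main(String[] args) {
        String base = "https://gitblit.s2.wso2.com:8444/";
        // One URL per environment, matching "three URLs for the 3 environments".
        for (String stage : new String[]{"dev", "test", "prod"}) {
            System.out.println(buildRepoUrl(base, "{@stage}/as", stage, "myapp"));
        }
    }
}
```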


Appreciate your views on this approach, please.

Thanks  Regards,
S.A.Rajeevan
Software Engineer WSO2 Inc
E-Mail: rajeev...@wso2.com | Mobile : +94776411636

On Tue, Dec 2, 2014 at 12:27 PM, Dimuthu Leelarathne dimut...@wso2.com
wrote:

 Hi Danushka,

 Please see my comments below.

 On Tue, Dec 2, 2014 at 12:01 PM, Danushka Fernando danush...@wso2.com
 wrote:

 HI Dimuthu
 Please find my comments inline


 On Tue, Dec 2, 2014 at 8:45 AM, Dimuthu Leelarathne dimut...@wso2.com
 wrote:

 Hi Rajeevan,

 Please see my comments below.

 On Mon, Dec 1, 2014 at 10:46 PM, Aiyadurai Rajeevan rajeev...@wso2.com
 wrote:

 Hi All,

 We are trying to implement a feature on AF which enables the user to
 deploy their customized app types. Currently this configuration is
 available in *appfactory.xml* under the *Deployer* tag; the content would
 be as [1], and likewise we have one for each app type. Hence this isn't
 editable by the users, and they may not deploy their own app types. Moving
 this [1] out of *appfactory.xml* into a configurable file would enable the
 users to customize it to their need.

 When analyzing [1] we found that the below content is related to the
 PaaS artifact storage configuration and is common to all app types.

 <RepositoryProvider>
     <Property name="Class">org.wso2.carbon.appfactory.s4.integration.GITBlitBasedGITRepositoryProvider</Property>
     <Property name="BaseURL">https://gitblit.s2.wso2.com:8444/</Property>
     <Property name="URLPattern">{@stage}/as</Property>
     <Property name="AdminUserName">admin</Property>
     <Property name="AdminPassword">admin</Property>
 </RepositoryProvider>


 What is the other xml file you are talking about? Let's call it foo.xml.
 The only other place that requires this information, other than App
 Factory, is Jenkins. So rather than putting these in a separate xml file, I
 would put them in the appfactory-plugin.xml file. This is because Jenkins
 provides an inherent way to read the appfactory-plugin.xml file, and it
 doesn't give an inherent way to read a foo.xml file.

 So at this stage the foo.xml file is only used in App Factory. You may
 merge it with appfactory.xml. The next question you may ask is about
 configuration duplication. I say puppet is the answer.

 If we have code to read foo.xml we 

Re: [Architecture] Moving deployment section from configuration xml and Facilitate AppFactory users to deploy apps with their custom app types if its not already available

2014-12-02 Thread Dimuthu Leelarathne
HI Rajeevan,

No GUI please. We are changing the whole user story here.

thanks,
dimuthu


On Wed, Dec 3, 2014 at 10:54 AM, Aiyadurai Rajeevan rajeev...@wso2.com
wrote:

 Hi Dimuthu/All,

 In addition to this mail conversation, we discussed this in an internal
 forum. Here is the update of that discussion.

 As of today we are using the appfactory.xml file for the runtime
 configurations; the fragment below shows the configuration properties.

 <ApplicationType name="*">

     <ClassName>org.wso2.carbon.appfactory.jenkins.deploy.JenkinsArtifactDeployer</ClassName>

     <Endpoint>https://sc.s2.AF_HOST:9463/services/</Endpoint>

     <RepositoryProvider>
         <Property name="Class">org.wso2.carbon.appfactory.s4.integration.GITBlitBasedGITRepositoryProvider</Property>
         <Property name="BaseURL">https://gitblit.s2.wso2.com:8444/</Property>
         <Property name="URLPattern">{@stage}/as</Property>
         <Property name="AdminUserName">admin</Property>
         <Property name="AdminPassword">admin</Property>
     </RepositoryProvider>

     <Properties>
         <Property name="alias">asdev</Property>
         <Property name="cartridgeType">asdev</Property>
         <Property name="deploymentPolicy">af-deployment</Property>
         <Property name="autoscalePolicy">economy</Property>
         <Property name="repoURL"></Property>
         <Property name="dataCartridgeType"></Property>
         <Property name="dataCartridgeAlias"></Property>
         <Property name="subscribeOnDeployment">false</Property>
     </Properties>

 </ApplicationType>


 *Proposed solution*

 *Part 1:* In the above xml, the content enclosed within the
 *RepositoryProvider* element is used for the PaaS artifact storage
 configuration. Hence, as suggested, we can keep this in the
 *org.wso2.carbon.appfactory.jenkins.AppfactoryPluginManager.xml* file.

 *Part 2:* The content enclosed within the *Properties* tag is used for
 the subscription. Hence, below is the solution we are proposing, so that
 it would be more user friendly.

 There can be multi-tenant and single-tenant subscribers; let's focus on
 the multi-tenant scenario here.

    *Step 1*: Create tenant

    *Step 2*: Tenant admin login

    *Step 3*: Go to the subscriber manager. This would be a GUI which lets
 the user subscribe to the needed cartridge type, environment (Dev, Test,
 Prod), deploymentPolicy and autoscalePolicy. The GUI shall look like
 below.




 Here we can populate the cartridge type, deploymentPolicy and
 autoscalePolicy details from the Stratos service.

 So the user can select the needed details in the above GUI and click
 subscribe. That will invoke a call to the Stratos service for the cartridge
 allocation and create the repo URL which will be used to commit the code in
 s2Git. Altogether there would be three URLs for the 3 environments.


 Appreciate your views in this approach please.

 Thanks  Regards,
 S.A.Rajeevan
 Software Engineer WSO2 Inc
 E-Mail: rajeev...@wso2.com | Mobile : +94776411636

 On Tue, Dec 2, 2014 at 12:27 PM, Dimuthu Leelarathne dimut...@wso2.com
 wrote:

 Hi Danushka,

 Please see my comments below.

 On Tue, Dec 2, 2014 at 12:01 PM, Danushka Fernando danush...@wso2.com
 wrote:

 HI Dimuthu
 Please find my comments inline


 On Tue, Dec 2, 2014 at 8:45 AM, Dimuthu Leelarathne dimut...@wso2.com
 wrote:

 Hi Rajeevan,

 Please see my comments below.

 On Mon, Dec 1, 2014 at 10:46 PM, Aiyadurai Rajeevan rajeev...@wso2.com
  wrote:

 Hi All,

 We are trying to implement a feature on AF which enables the user to
 deploy their customized app types. Currently this configuration is
 available in *appfactory.xml* under the *Deployer* tag; the content
 would be as [1], and likewise we have one for each app type. Hence this
 isn't editable by the users, and they may not deploy their own app types.
 Moving this [1] out of *appfactory.xml* into a configurable
 file would enable the users to customize it to their need.

 When analyzing [1] we found that the below content is related to the
 PaaS artifact storage configuration and is common to all app types.

 <RepositoryProvider>
     <Property name="Class">org.wso2.carbon.appfactory.s4.integration.GITBlitBasedGITRepositoryProvider</Property>
     <Property name="BaseURL">https://gitblit.s2.wso2.com:8444/</Property>
     <Property name="URLPattern">{@stage}/as</Property>
     <Property name="AdminUserName">admin</Property>
     <Property name="AdminPassword">admin</Property>
 </RepositoryProvider>


 What is the other xml file you are talking about? Let's call it foo.xml.
 The only other place that requires this information, other than App
 Factory, is Jenkins. So rather than putting these in a separate xml file, I
 would put them in the appfactory-plugin.xml file. This is because Jenkins
 provides an inherent way to read the appfactory-plugin.xml file, and it
 doesn't give an inherent way to read foo.xml 

Re: [Architecture] Invitation: CDM Hackathon @ Mon Dec 1, 2014 1:30pm - 5:30pm (Sameera Perera)

2014-12-02 Thread Manoj Gunawardena
Hi All,

Few Corrections.

This is December 2nd meeting minutes. Also need to add two things,

12. In the device table:
 need to change the 'byod' field name to 'ownership'.

13. The current policy definition is just a set of actions that do not
contain any applicable conditions; it was suggested to rename it as
'profile'.

Thanks

On Wed, Dec 3, 2014 at 11:06 AM, Manoj Gunawardena man...@wso2.com wrote:

 Hi All,

 *Dec 03rd meeting minutes.*

 *Attendees* : Sameera Perera, Kasun Dananjaya, Prabath Abeysekera, Asok
 Perera, Afkham Azeez, Geeth Munasinghe, Inosh Perera, Sumedha Rubasinghe, 
 Srinath
 Perera, Manoj Gunawardena


 1. Develop components in a separate repo; but for the moment they are
 developed as a module inside CDM, to be moved to a separate repo later.

 2. Develop a common interface in the core to cater for all core
 functionalities. Develop plugins for each device type.

 Ex: Android, iOS, Windows or other device types

 3. For iOS and Android, develop a common enrollment data set and a common
 method. For Windows or other specific devices, develop specific services.

 4. Policy handling discussion - Not completed. Decided to keep existing
 functionality for 1st release.

 5. Renaming 'policy' to 'profile' was suggested.

 6. The IoT device enrollment mechanism was discussed. The device should
 develop its own agent to communicate with CDM; in such cases the
 authentication mechanism should be implemented using an OAuth custom
 grant type.

 7. Sumedha will do a demonstration on IoT device connectivity with CDM.

 8. For mobile agents, the existing authentication flow should change. It
was decided to change the authentication mechanism to use a service provider.

 9. The existing agent download mechanism has security flaws. It was decided
 to change it to display a login page and authenticate the user at download
 time.

 10. The team is working on listing the interfaces and creating the repo,
 modules and structure.
 The new CDM repo: https://github.com/geethkokila/product-cdm/


 11. A new concept, event, should be added for dynamic policies.


 Kindly append if I have missed anything here

 Thanks
 Manoj





 On Tue, Dec 2, 2014 at 11:05 AM, Sameera Perera samee...@wso2.com wrote:

 Appending

 Day 1 Notes:

 *Goals*

1. Identify a CDM Core that's device/platform agnostic
2. Properly modularize the product (continuing the discussion in the
thread [1])
   3. Evaluate how CDM can support dynamic policies and policy
   merging, which are frequently requested features

 *Terminology*

   - Feature (more accurately Device Management Feature): This is an
   operation that the CDM can instruct the device to perform. An MDM example
   of this would be 'Put ringer in silent mode'. On a smart thermostat, this
   could be 'Reduce air conditioning'.
   - Policy: A policy is a collection of features. The CDM Core
   maintains the policy in a canonical format. Platform bundles may translate
   the policy into a device-dependent format.

 Both terms (feature and policy) are misnomers and may be better termed
 operations and profiles. However, we decided to stick to these terms as
 they are standard across the MDM industry.


- Platform Bundle: Refer [1]


 *CDM Core*

- Instructs the platform bundle to apply a policy to the device
- The core determines which policy needs to be applied to the device
through a Trigger. EMM 1.0 used 3 types of triggers: Platform, User and
Role. CDM will introduce Location and Time.
   - When the device calls back to the CDM, it validates whether the
   device is violating its policy

 The Core will contain policy management and policy enforcement components
 as well as a policy editor.
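The canonical-policy idea above can be sketched as follows. This is a hypothetical illustration (the real CDM interfaces were still being listed at the time of this discussion); PolicySketch, PlatformBundle, and the payload format are all assumptions:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the abstraction described above: the core keeps a
// canonical policy (a collection of feature codes) and each platform bundle
// translates it into a device-dependent payload.
public class PolicySketch {

    interface PlatformBundle {
        // Canonical policy in -> device-dependent representation out.
        String toDevicePayload(List<String> featureCodes);
    }

    // Illustrative Android bundle: renders the canonical features in its own format.
    static class AndroidBundle implements PlatformBundle {
        public String toDevicePayload(List<String> featureCodes) {
            return "android:" + String.join(",", featureCodes);
        }
    }

    public static void main(String[] args) {
        PlatformBundle bundle = new AndroidBundle();
        // A policy is just a collection of features (operations) held by the core.
        System.out.println(bundle.toDevicePayload(Arrays.asList("ring-silent", "lock-screen")));
        // prints: android:ring-silent,lock-screen
    }
}
```

The point of the design: the core never needs to know the device-dependent format; a bundle for a thermostat or smart TV would implement the same interface.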

 *MDM Module*

   - An MDM Module will sit on top of the core to enable feature parity
   with EMM 1.1. It will be written specifically with the knowledge that
   platform bundles for Android and iOS are available.
   - The MDM will define a common set of features that are available
   across all phones.

 By introducing this layer of abstraction we are able to keep managing
 mobile devices across mobile platforms as we've done in EMM, without
 complicating the device-agnostic capabilities of the core. Similar modules
 can be built by 3rd parties to manage other device categories such as
 thermostats, smart TVs etc. from different vendors/platforms.

 *Other notes*

- We have demoted the following concepts to second class citizens and
pulled them out of the core
   1. OS Platform, version
   2. Device Platform, version
   3. Roles

 1 and 2 only matter to the extent that they help us define a set of
 available features. Bundles will be responsible for managing this set based
 on these factors. The core will only be aware of the set.
 Roles were used in the EMM only to select the policy to apply on a user's
 device. We have moved this responsibility to the Trigger (or handler)
 chain. The core will only be aware of the chain, not what logic is applied
 to generate the policy.


Re: [Architecture] Moving deployment section from configuration xml and Facilitate AppFactory users to deploy apps with their custom app types if its not already available

2014-12-02 Thread Aiyadurai Rajeevan
Hi Dimuthu,

Thanks for the suggestion. So, as a conclusion, I will go ahead with the
implementation, having a runtime.xml for all of the below properties and
populating a map from there.

<Runtime>

    <Runtime>appserver</Runtime>

    <ClassName>org.wso2.carbon.appfactory.jenkins.deploy.JenkinsArtifactDeployer</ClassName>

    <Endpoint>https://sc.s2.AF_HOST:9463/services/</Endpoint>

    <RepositoryProvider>
        <ProviderClass>org.wso2.carbon.appfactory.s4.integration.GITBlitBasedGITRepositoryProvider</ProviderClass>
        <BaseURL>https://gitblit.s2.wso2.com:8444/</BaseURL>
        <URLPattern>{@stage}/as</URLPattern>
        <AdminUserName>admin</AdminUserName>
        <AdminPassword>admin</AdminPassword>
    </RepositoryProvider>

    <AliasPrefix>as</AliasPrefix>
    <CartridgeTypePrefix>as</CartridgeTypePrefix>
    <DeploymentPolicy>af-deployment</DeploymentPolicy>
    <AutoscalePolicy>economy</AutoscalePolicy>
    <RepoURL></RepoURL>
    <DataCartridgeType></DataCartridgeType>
    <DataCartridgeAlias></DataCartridgeAlias>
    <SubscribeOnDeployment>false</SubscribeOnDeployment>

</Runtime>
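Populating a map from such a runtime.xml could be sketched as below. The RuntimeConfigReader class and the flattened "Parent.Child" key scheme are assumptions for illustration, not the actual App Factory code; only the element names come from the proposal above:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch: read the proposed runtime.xml into a flat property map.
public class RuntimeConfigReader {

    static Map<String, String> read(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            Map<String, String> props = new LinkedHashMap<>();
            collect(doc.getDocumentElement(), "", props);
            return props;
        } catch (Exception e) {
            throw new RuntimeException("Failed to parse runtime configuration", e);
        }
    }

    // Flatten leaf elements into "Parent.Child" keys,
    // e.g. "RepositoryProvider.BaseURL" -> "https://gitblit.s2.wso2.com:8444/".
    static void collect(Element el, String prefix, Map<String, String> out) {
        NodeList children = el.getChildNodes();
        boolean hasElementChild = false;
        for (int i = 0; i < children.getLength(); i++) {
            Node n = children.item(i);
            if (n.getNodeType() == Node.ELEMENT_NODE) {
                hasElementChild = true;
                String key = prefix.isEmpty() ? n.getNodeName() : prefix + "." + n.getNodeName();
                collect((Element) n, key, out);
            }
        }
        if (!hasElementChild) {
            out.put(prefix, el.getTextContent().trim());
        }
    }

    public static void main(String[] args) {
        String xml = "<Runtime><Runtime>appserver</Runtime>"
                + "<RepositoryProvider><BaseURL>https://gitblit.s2.wso2.com:8444/</BaseURL></RepositoryProvider>"
                + "</Runtime>";
        System.out.println(read(xml));
    }
}
```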



Thanks  Regards,
S.A.Rajeevan
Software Engineer WSO2 Inc
E-Mail: rajeev...@wso2.com | Mobile : +94776411636

On Wed, Dec 3, 2014 at 11:03 AM, Dimuthu Leelarathne dimut...@wso2.com
wrote:

 HI Rajeevan,

 No GUI please. We are changing the whole user story here.

 thanks,
 dimuthu


 On Wed, Dec 3, 2014 at 10:54 AM, Aiyadurai Rajeevan rajeev...@wso2.com
 wrote:

 Hi Dimuthu/All,

 In addition to this mail conversation, we discussed this in an internal
 forum. Here is the update of that discussion.

 As of today we are using the appfactory.xml file for the runtime
 configurations; the fragment below shows the configuration properties.

 <ApplicationType name="*">

     <ClassName>org.wso2.carbon.appfactory.jenkins.deploy.JenkinsArtifactDeployer</ClassName>

     <Endpoint>https://sc.s2.AF_HOST:9463/services/</Endpoint>

     <RepositoryProvider>
         <Property name="Class">org.wso2.carbon.appfactory.s4.integration.GITBlitBasedGITRepositoryProvider</Property>
         <Property name="BaseURL">https://gitblit.s2.wso2.com:8444/</Property>
         <Property name="URLPattern">{@stage}/as</Property>
         <Property name="AdminUserName">admin</Property>
         <Property name="AdminPassword">admin</Property>
     </RepositoryProvider>

     <Properties>
         <Property name="alias">asdev</Property>
         <Property name="cartridgeType">asdev</Property>
         <Property name="deploymentPolicy">af-deployment</Property>
         <Property name="autoscalePolicy">economy</Property>
         <Property name="repoURL"></Property>
         <Property name="dataCartridgeType"></Property>
         <Property name="dataCartridgeAlias"></Property>
         <Property name="subscribeOnDeployment">false</Property>
     </Properties>

 </ApplicationType>


 *Proposed solution*

 *Part 1:* In the above xml, the content enclosed within the
 *RepositoryProvider* element is used for the PaaS artifact storage
 configuration. Hence, as suggested, we can keep this in the
 *org.wso2.carbon.appfactory.jenkins.AppfactoryPluginManager.xml* file.

 *Part 2:* The content enclosed within the *Properties* tag is used
 for the subscription. Hence, below is the solution we are proposing, so
 that it would be more user friendly.

 There can be multi-tenant and single-tenant subscribers; let's
 focus on the multi-tenant scenario here.

    *Step 1*: Create tenant

    *Step 2*: Tenant admin login

    *Step 3*: Go to the subscriber manager. This would be a GUI which lets
 the user subscribe to the needed cartridge type, environment (Dev, Test,
 Prod), deploymentPolicy and autoscalePolicy. The GUI shall look like
 below.




 Here we can populate the cartridge type, deploymentPolicy and
 autoscalePolicy details from the Stratos service.

 So the user can select the needed details in the above GUI and click
 subscribe. That will invoke a call to the Stratos service for the cartridge
 allocation and create the repo URL which will be used to commit the code in
 s2Git. Altogether there would be three URLs for the 3 environments.


 Appreciate your views in this approach please.

 Thanks  Regards,
 S.A.Rajeevan
 Software Engineer WSO2 Inc
 E-Mail: rajeev...@wso2.com | Mobile : +94776411636

 On Tue, Dec 2, 2014 at 12:27 PM, Dimuthu Leelarathne dimut...@wso2.com
 wrote:

 Hi Danushka,

 Please see my comments below.

 On Tue, Dec 2, 2014 at 12:01 PM, Danushka Fernando danush...@wso2.com
 wrote:

 HI Dimuthu
 Please find my comments inline


 On Tue, Dec 2, 2014 at 8:45 AM, Dimuthu Leelarathne dimut...@wso2.com
 wrote:

 Hi Rajeevan,

 Please see my comments below.

 On Mon, Dec 1, 2014 at 10:46 PM, Aiyadurai Rajeevan 
 rajeev...@wso2.com wrote:

 Hi All,

 We are trying to implement a feature on AF which enables the user to
 deploy their customized app types, Currently this configuration is
 available in *appfactory.xml *under *Deployer* tag the content
 would be as [1], likewise we have for each app types. Hence this 

Re: [Architecture] Associating multiple lifecycles to a registry resource

2014-12-02 Thread Isuruwan Herath
Hi Senaka/Sagara,

IIUC, in the current registry API too there is no limitation on adding more
than one aspect.

A point to consider in this implementation, IMO, is: should we allow any
dependency between the lifecycles attached in each context? As Senaka
mentioned, if we allow attaching 1..n lifecycles, which could possibly do
different manipulations on the resource in state transitions, this could be
an issue. One option is to let the different lifecycles behave independently
and handle the conflicts at the implementation level.
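The "independent lifecycles per context" option could be modelled roughly as below. The names are illustrative only, not the Registry API: each resource keeps a state per context, and a transition in one context never touches another.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of per-context lifecycle state (not the Registry API).
public class MultiLifecycleResource {

    private final Map<String, String> stateByContext = new HashMap<>();

    void attachLifecycle(String context, String initialState) {
        stateByContext.put(context, initialState);
    }

    void transition(String context, String newState) {
        if (!stateByContext.containsKey(context)) {
            throw new IllegalArgumentException("No lifecycle attached for context: " + context);
        }
        stateByContext.put(context, newState);  // independent of other contexts
    }

    // "What is the state of resource A under context X?"
    String stateOf(String context) {
        return stateByContext.get(context);
    }

    public static void main(String[] args) {
        MultiLifecycleResource service = new MultiLifecycleResource();
        service.attachLifecycle("governance", "Development");
        service.attachLifecycle("store", "Created");
        service.transition("governance", "QA");      // promoted to Q/A in governance
        service.transition("store", "Published");    // published in the store, independently
        System.out.println(service.stateOf("governance") + " / " + service.stateOf("store"));
        // prints: QA / Published
    }
}
```

Conflict handling between contexts would then live outside this class, at the implementation level, as suggested above.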

Thanks!
Isuruwan

On Tue, Dec 2, 2014 at 11:04 PM, Sagara Gunathunga sag...@wso2.com wrote:



 On Tue, Dec 2, 2014 at 10:34 PM, Senaka Fernando sen...@wso2.com wrote:

 Hi Sagara,

 No, there is no real barrier. Even with SCXML this is possible. SCXML is
 just a standard for states and transitions. We create an instance of a
 state engine using a set of resource properties. If you want multiple
 lifecycles and want to retain the same model, it is a matter of using
 multiple properties. If you group these together, they could end up being
 a Context that you define. But when we say multiple, we need to be careful
 about whether it is 1 or 2 or 3 or X. That's what makes things complicated.

 Having said that, in the past we had something called an aspect, which was
 later improved to lifecycle (i.e. lifecycle extends aspect), but lifecycle
 was not built as an extension point and the aspect interface itself was
 useless. So we ended up with just one default lifecycle implementation and
 a few extensions based on that, and there was no real need and/or support
 for multiple lifecycles. This is why this was never implemented in the
 past. But with the API-M and ES use cases we had the need, but it was hard
 to generalize since different products had their own versions. It took a
 while for everybody to reach common ground, and I believe that we've got
 there now.


 Thanks for sharing your insights. We are more or less on common ground,
 where everybody agrees that inventing a lifecycle implementation per
 product is not the right way to solve problems.


 Coincidentally I happened to write a blog post on the need of multiple
 lifecycles, few days back, [1].

 [1]
 http://senakafdo.blogspot.co.uk/2014/11/state-of-development-vs-state-of.html
 .


 Great :) Will look into your post.

 Thanks !


 Thanks,
 Senaka.

 On Tue, Dec 2, 2014 at 4:35 PM, Sagara Gunathunga sag...@wso2.com
 wrote:


 The current G-Reg admin console is designed to associate only one
 lifecycle with a registry resource at any given time, but it seems we have
 valid use cases which require associating more than one lifecycle with a
 registry resource.

 E.g  - ES + G-Reg integration

 - ES has an approval process which defines and manages the lifecycle of an
 asset within the 'context of the store'.

 - The same asset/resource (e.g. a REST service) has a governance lifecycle
 within the 'context of G-Reg' (e.g. dev, Q/A, production status).

 Right now both of the above lifecycles have been implemented using SCXML,
 and the problem is that it's not possible to associate more than one
 lifecycle with a registry resource. During the last week we had a meeting
 and identified that supporting the association of multiple lifecycles is
 the best way to go forward.

 Further, in order to realize this multiple-lifecycles concept properly, we
 should think of associating more than one lifecycle as associating multiple
 'contexts' to a resource, and under each context the resource can have
 independent/dependent lifecycles. Further, with this change the 'state' of
 a resource should be qualified with a given context; in other words, the
 question 'what is the state of resource A' should be raised as 'what is
 the state of resource A under context X'.

 As an example, consider 'Publish a Q/A-stage service into the store':

 - Under the project or governance context - the service state is 'Q/A'
 - Under the store context - the service is 'Published'


 @Senaka, I would like to know: is there any specific reason that we
 haven't implemented this support in the past? If there is no such barrier
 we can proceed further.

  Thanks !
 --
 Sagara Gunathunga

 Senior Technical Lead; WSO2, Inc.;  http://wso2.com
 V.P Apache Web Services;http://ws.apache.org/
 Linkedin; http://www.linkedin.com/in/ssagara
 Blog ;  http://ssagara.blogspot.com




 --


 *Senaka Fernando*
 Solutions Architect; WSO2 Inc.; http://wso2.com
 Member; Apache Software Foundation; http://apache.org
 E-mail: senaka AT wso2.com
 P: +1 408 754 7388; ext: 51736
 M: +44 782 741 1966
 Linked-In: http://linkedin.com/in/senakafernando
 Lean . Enterprise . Middleware




 --
 Sagara Gunathunga

 Senior Technical Lead; WSO2, Inc.;  http://wso2.com
 V.P Apache Web Services;http://ws.apache.org/
 Linkedin;