Re: [Dev] [EMM] Exception when installing ios p2-repository in EMM

2015-12-01 Thread Sashika Wijesinghe
Hi Dilshan,

This issue is still there even after restarting the EMM server.

Regards,


On Wed, Dec 2, 2015 at 12:21 AM, Dilshan Edirisuriya 
wrote:

> Hi Sashika,
>
> Did this issue go away once you restarted? There was a similar issue at
> installation time, but I believe we have already fixed that.
>
> Regards,
>
> Dilshan
>
> On Tue, Dec 1, 2015 at 6:43 PM, Sashika Wijesinghe 
> wrote:
>
>> Hi All,
>>
>> I want to configure iOS for MDM. I followed the steps below to configure iOS.
>>
>>    - Configure the general server configurations as mentioned in doc [1]
>>    - Start the EMM server and add the ios-agent.ipa file to the
>> '/repository/deployment/server/jaggeryapps/mdm/units/asset-download-agent-ios/public/asset'
>>    path
>>    - Install the p2 repository as mentioned in doc [2]
>>
>> [1]
>> https://docs.wso2.com/display/EMM200/General+iOS+Server+Configurations
>> [2] https://docs.wso2.com/display/EMM200/Installing+the+P2+Repository
>>
>> The exception below was observed in the terminal after installing the p2
>> repository. May I know whether I missed any mandatory configurations?
>>
>> log4j:WARN No appenders could be found for logger
>> (org.apache.cxf.common.logging.LogUtils).
>> log4j:WARN Please initialize the log4j system properly.
>> [2015-12-01 18:10:13,701] ERROR
>> {org.apache.catalina.core.ApplicationContext} -  StandardWrapper.Throwable
>> org.springframework.beans.factory.BeanCreationException: Error creating
>> bean with name 'enrollmentService': Cannot resolve reference to bean
>> 'enrollmentServiceBean' while setting bean property 'serviceBeans' with key
>> [0]; nested exception is
>> org.springframework.beans.factory.BeanCreationException: Error creating
>> bean with name 'enrollmentServiceBean' defined in URL
>> [jndi:/localhost/ios-enrollment/WEB-INF/cxf-servlet.xml]: Instantiation of
>> bean failed; nested exception is java.lang.NoClassDefFoundError:
>> org/wso2/carbon/device/mgt/ios/core/exception/IOSEnrollmentException
>> at
>> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328)
>> at
>> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106)
>> at
>> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveManagedList(BeanDefinitionValueResolver.java:353)
>> at
>> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:153)
>> at
>> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1327)
>> at
>> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1085)
>> at
>> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:516)
>> at
>> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:455)
>> at
>> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
>> at
>> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
>> at
>> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
>> at
>> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192)
>> at
>> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
>> at
>> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
>> at
>> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
>> at
>> org.apache.cxf.transport.servlet.CXFServlet.createSpringContext(CXFServlet.java:151)
>> at org.apache.cxf.transport.servlet.CXFServlet.loadBus(CXFServlet.java:74)
>> at
>> org.apache.cxf.transport.servlet.CXFNonSpringServlet.init(CXFNonSpringServlet.java:76)
>> at
>> org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1284)
>> at
>> org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1197)
>> at
>> org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1087)
>> at
>> org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5262)
>> at
>> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5550)
>> at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
>> at
>> 
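A `java.lang.NoClassDefFoundError` like the one above usually means the OSGi bundle that should export the missing package was not provisioned when the p2 repository was installed. The sketch below is illustrative only: the class path is copied from the trace, but the strip-two-segments heuristic and the plugins path are assumptions, not documented WSO2 behaviour.

```shell
# Derive the likely providing bundle from the missing class in the trace,
# then (on a real server) grep for it in the plugins directory.
# The class below is copied from the log; the bundle-name heuristic
# (drop the last two path segments) is an assumption for illustration.
missing_class="org/wso2/carbon/device/mgt/ios/core/exception/IOSEnrollmentException"

bundle="$(echo "$missing_class" | sed 's#/[^/]*/[^/]*$##; s#/#.#g')"
echo "$bundle"   # -> org.wso2.carbon.device.mgt.ios.core

# On the EMM node one would then check (CARBON_HOME path is an assumption):
# ls "$CARBON_HOME/repository/components/plugins/" | grep "$bundle"
```

If the grep finds nothing, re-installing the p2 repository (or, if the OSGi console is enabled, checking the bundle's state there) would be the next step.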

Re: [Dev] [EMM] Exception when installing ios p2-repository in EMM

2015-12-01 Thread Inosh Perera
Hi Sashika,

Yes. There are changes.

Regards,
Inosh

On Wed, Dec 2, 2015 at 8:32 AM, Sashika Wijesinghe  wrote:

> Hi Inosh,
>
> I'm using the same p2-repository that I used for the pack given before
> the alpha release. Are there any changes to the p2 repository?
>
> Regards,
>

Re: [Dev] [EMM] Exception when installing ios p2-repository in EMM

2015-12-01 Thread Inosh Perera
Hi Sashika,

Could you please tell us whether you have recently pulled and built the
proprietary plugin and taken the p2-repo from that build?

Regards,
Inosh

On Wed, Dec 2, 2015 at 8:12 AM, Sashika Wijesinghe  wrote:

> Hi Dilshan,
>
> This issue is still there even after restarting the EMM server.
>
> Regards,
>

Re: [Dev] [EMM] Exception when installing ios p2-repository in EMM

2015-12-01 Thread Sashika Wijesinghe
Hi Inosh,

I'm using the same p2-repository that I used for the pack given before the
alpha release. Are there any changes to the p2 repository?

Regards,

On Wed, Dec 2, 2015 at 8:15 AM, Inosh Perera  wrote:

> Hi Sashika,
>
> Could you please tell if you have taken a pull and built the proprietary
> plugin and taken the p2-repo from that recently?
>
> Regards,
> Inosh
>

Re: [Dev] [APIM] 1.10.0 - 401 Error when publishing an API

2015-12-01 Thread Pubudu Priyashan
Hi Nuwan,

Thanks for your assistance in debugging this issue today. Now that we have
identified it as a bug, the fix will be tracked and tested under ticket [1].

[1] https://wso2.org/jira/browse/APIMANAGER-4290

Cheers,
Pubudu.

Pubudu D.P
Senior Software Engineer - QA Team | WSO2 inc.
Mobile : +94775464547

On Tue, Dec 1, 2015 at 9:57 AM, Nuwan Dias  wrote:

> Hi Pubudu,
>
> The reason for the publishing failure is an authentication failure: the
> Publisher fails to authenticate with the Gateway when accessing its admin
> services. Could the admin passwords have been changed inconsistently?
>
> The exception on the Gateway is not related to the error on the Publisher;
> it is an issue related to SVN dep-sync. I'm not sure what is causing it.
>
> Although these occurred when you tried to move from MySQL to Oracle, none
> of the errors are DB-specific; there aren't any SQL-related error logs in
> the traces you've attached. My guess is that they were introduced during
> the migration process from MySQL to Oracle.
>
> If we cannot get this figured out soon, can we sit together and have a
> look? It would make things faster.
>
> Thanks,
> NuwanD.
>
> On Mon, Nov 30, 2015 at 11:45 PM, Pubudu Priyashan 
> wrote:
>
>> Also, the gateway worker nodes produced the attached exception after the
>> server was started.
>>
>> Pubudu D.P
>> Senior Software Engineer - QA Team | WSO2 inc.
>> Mobile : +94775464547
>>
>> On Mon, Nov 30, 2015 at 11:35 PM, Pubudu Priyashan 
>> wrote:
>>
>>> Hi API-M team,
>>>
>>> We've been experiencing the attached exception on the publisher node while
>>> trying to point the dep-sync SVN location to a new folder path. We got rid
>>> of the same issue in the MySQL setup, but for Oracle 12c we are unable to
>>> fix it with the same set of steps. Please find the steps we followed below.
>>>
>>> 1. Drop DBs and re-create the same DBs (apim, um, reg, stats)
>>> 2. Delete H2 dbs from all nodes ($home/repository/databases)
>>> 3. Delete the $home/repository/deployment/server folder and replace it
>>> with the one from a fresh pack
>>> 4. Create a new SVN folder and point to it in carbon.xml on the gateway
>>> manager and both gateway worker nodes
>>> 5. Make sure there are no .svn folders within the file structure of the
>>> gateway manager and gateway workers
>>> 6. Start all the nodes with -Dsetup
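The reset sequence above can be condensed into a script. This is a hedged sketch: CARBON_HOME, the install path, and the exact folder names are assumptions about a Carbon 4.x based pack, and DRY_RUN keeps the destructive steps from actually running as written.

```shell
# Condensed sketch of steps 2-6. The -Dsetup flag is the standard Carbon
# way to recreate the configured RDBMS tables; paths are assumptions.
CARBON_HOME="${CARBON_HOME:-/opt/wso2am-1.10.0}"
DRY_RUN=true

run() { if [ "$DRY_RUN" = true ]; then echo "DRY RUN: $*"; else "$@"; fi; }

# Step 2: delete the local H2 databases on every node
run rm -rf "$CARBON_HOME/repository/database"
# Step 3: delete deployment/server (then replace it from a fresh pack)
run rm -rf "$CARBON_HOME/repository/deployment/server"
# Step 6: start with -Dsetup so the RDBMS tables are recreated
run "$CARBON_HOME/bin/wso2server.sh" -Dsetup
```

Flipping DRY_RUN to false (after taking backups) would execute the sequence on a node.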
>>>
>>> This sequence worked for MySQL, but in the Oracle 12c setup the attached
>>> exception keeps appearing on the publisher node when trying to publish an
>>> API from the Publisher to the Store. The UI shows the attached error
>>> message. Could you please point out whether we are missing something, or
>>> could this be an issue related to the Oracle DB?
>>>
>>> Cheers,
>>> Pubudu D.P
>>> Senior Software Engineer - QA Team | WSO2 inc.
>>> Mobile : +94775464547
>>>
>>
>>
>
>
> --
> Nuwan Dias
>
> Technical Lead - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Axis2-transports-1.1.1-wso2v1 Released

2015-12-01 Thread Jagath Sisirakumara Ariyarathne
Hi,

Axis2-transports-1.1.1-wso2v1 Released.

<dependency>
    <groupId>org.apache.axis2.transport</groupId>
    <artifactId>axis2-transports</artifactId>
    <version>1.1.1-wso2v1</version>
</dependency>

Thanks.
-- 
Jagath Ariyarathne
Technical Lead
WSO2 Inc.  http://wso2.com/
Email: jaga...@wso2.com
Mob  : +94 77 386 7048


Re: [Dev] Publishing carbon logs to DAS

2015-12-01 Thread Sinthuja Ragendran
Hi,

The way forward for log monitoring is the Log Analytics solution that we are
working on. The old log publishing method is broken and cannot be used with
the latest Carbon release products, because it is tightly coupled with
Cassandra, Hadoop, etc., and hence cannot be used with the current DAS. Once
the LA solution's log publisher (based on the Logstash publisher) is
available, it can be used with existing WSO2 products as well.

Thanks,
Sinthuja.


On Wed, Dec 2, 2015 at 11:29 AM, Sriskandarajah Suhothayan 
wrote:

> Hi DAS team
>
> The current log publishing is broken.
> What's the recommended log publishing approach going forward?
>
> Suho
>
> On Wed, Dec 2, 2015 at 11:27 AM, Imesh Gunaratne  wrote:
>
>> Hi Suho/Anjana,
>>
>> I noticed that we are working on a feature called Log Analyzer. Is this
>> for centralized logging?
>> If not what's the approach we are taking for $subject with DAS?
>>
>> Thanks
>>
>> On Wed, Dec 2, 2015 at 11:16 AM, Anuruddha Liyanarachchi <
>> anurudd...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> I am trying to publish carbon logs to DAS and I am facing the following
>>> problems.
>>>
>>> *In carbon 4.2.0 products (APIM 1.9.1) :*
>>> Stream definitions are created per day [1]; therefore I can't use a
>>> common event receiver to persist the data.
>>>
>>>
>>> *In carbon 4.4.0 products (ESB 4.9.0) :*
>>> Throws class not found error [2].
>>>
>>> Is there a way to solve these issues ?
>>>
>>>
>>> [1] log.0.AM.2015.12.02:1.0.0
>>>     log.0.AM.2015.12.01:1.0.0
>>>
>>> [2]
>>> log4j:ERROR Could not instantiate class
>>> [org.wso2.carbon.logging.service.appender.LogEventAppender].
>>> java.lang.ClassNotFoundException:
>>> org.wso2.carbon.logging.service.appender.LogEventAppender cannot be found
>>> by org.wso2.carbon.logging_4.4.1
>>> at
>>> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:455)
>>> at
>>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>>> at
>>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>>> at
>>> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>>> at java.lang.Class.forName0(Native Method)
>>> at java.lang.Class.forName(Class.java:191)
>>> at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
>>> at
>>> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
>>> at
>>> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
>>> at
>>> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
>>> at
>>> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>>> at
>>> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>>> at
>>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>>> at
>>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>>> at
>>> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>>> at org.apache.log4j.LogManager.(LogManager.java:127)
>>> at
>>> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
>>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
>>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
>>> at com.atomikos.logging.Slf4jLogger.(Slf4jLogger.java:8)
>>> at
>>> com.atomikos.logging.Slf4JLoggerFactoryDelegate.createLogger(Slf4JLoggerFactoryDelegate.java:7)
>>> at com.atomikos.logging.LoggerFactory.createLogger(LoggerFactory.java:12)
>>> at com.atomikos.logging.LoggerFactory.(LoggerFactory.java:52)
>>> at
>>> com.atomikos.transactions.internal.AtomikosActivator.(AtomikosActivator.java:47)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> at java.lang.Class.newInstance(Class.java:379)
>>> at
>>> org.eclipse.osgi.framework.internal.core.AbstractBundle.loadBundleActivator(AbstractBundle.java:167)
>>> at
>>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:679)
>>> at
>>> org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
>>> at
>>> org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
>>> at
>>> 
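For the per-day stream issue in [1], the stream IDs differ only in a date segment. A small illustration (the parsing below is my own, not a DAS feature) of deriving the common stream "family" that a single shared receiver would have to be keyed on:

```shell
# The two stream IDs are copied from [1] in the thread; stripping the
# ".YYYY.MM.DD" segment shows they collapse to one logical stream.
streams="log.0.AM.2015.12.02:1.0.0
log.0.AM.2015.12.01:1.0.0"

family="$(printf '%s\n' "$streams" \
  | sed 's#\.[0-9]\{4\}\.[0-9]\{2\}\.[0-9]\{2\}:#:#' | sort -u)"
echo "$family"   # -> log.0.AM:1.0.0
```

Since the daily definitions all reduce to one family, a receiver would have to be created per day (or the publisher changed to emit a single stream), which is exactly the limitation Anuruddha describes.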

Re: [Dev] Publishing carbon logs to DAS

2015-12-01 Thread Sriskandarajah Suhothayan
Since the Log Analytics solution will take some time to arrive, can we
release the log publishing part of it ASAP, so that others can publish
logs to DAS.

Suho

On Wed, Dec 2, 2015 at 12:08 PM, Malith Dhanushka  wrote:

> Yes. Log analyzer which is being written on top of DAS platform will be
> based on log stash http publisher.
>
> Thanks
>
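On the [2] side, the `ClassNotFoundException` indicates that log4j.properties still references `org.wso2.carbon.logging.service.appender.LogEventAppender` while the `org.wso2.carbon.logging_4.4.1` bundle no longer provides that class. A hedged sketch of the wiring involved (the `LOGEVENT` appender name and the layout are conventional placeholders; only the appender class comes from the error):

```properties
# repository/conf/log4j.properties (fragment) - illustrative, not verbatim
log4j.rootLogger=INFO, CARBON_CONSOLE, LOGEVENT

# The appender class referenced in [2]; if no installed bundle provides it,
# removing LOGEVENT from the rootLogger line above silences the startup error.
log4j.appender.LOGEVENT=org.wso2.carbon.logging.service.appender.LogEventAppender
log4j.appender.LOGEVENT.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGEVENT.layout.ConversionPattern=[%d] %5p - %x %m%n
```

Either removing the `LOGEVENT` appender from the root logger or installing a feature that actually exports the class should clear the error until the new LA publisher is available.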

Re: [Dev] Publishing carbon logs to DAS

2015-12-01 Thread Malith Dhanushka
Yes. The Log Analyzer, which is being written on top of the DAS platform,
will be based on the Logstash HTTP publisher.

Thanks

On Wed, Dec 2, 2015 at 11:58 AM, Sinthuja Ragendran 
wrote:

> Hi,
>
> The way forward for log monitoring is the Log Analytics solution that we
> are working on. The old log publishing method is broken and cannot be
> used with the latest Carbon release products, because it is tightly
> coupled with Cassandra, Hadoop, etc., and hence we can't use it with the
> current DAS. However, once the LA solution/log publisher (based on the
> Logstash publisher) is available, it can be used with existing WSO2
> products as well.
>
> Thanks,
> Sinthuja.
>
>
> On Wed, Dec 2, 2015 at 11:29 AM, Sriskandarajah Suhothayan 
> wrote:
>
>> Hi DAS team
>>
>> The current log publishing is broken.
>> What's the recommended log publishing approach going forward?
>>
>> Suho
>>
>> On Wed, Dec 2, 2015 at 11:27 AM, Imesh Gunaratne  wrote:
>>
>>> Hi Suho/Anjana,
>>>
>>> I noticed that we are working on a feature called Log Analyzer. Is this
>>> for centralized logging?
>>> If not what's the approach we are taking for $subject with DAS?
>>>
>>> Thanks
>>>
>>> On Wed, Dec 2, 2015 at 11:16 AM, Anuruddha Liyanarachchi <
>>> anurudd...@wso2.com> wrote:
>>>
 Hi,

 I am trying to publish carbon logs to DAS and I am facing following
 problems.

 *In carbon 4.2.0 products (APIM 1.9.1) :*
 Stream definitions are created per day [1], so I can't use
 a common event receiver to persist the data.


 *In carbon 4.4.0 products (ESB 4.9.0) :*
 Throws class not found error [2].

 Is there a way to solve these issues ?


 [1]log.0.AM.2015.12.02:1.0.0
 
 log.0.AM.2015.12.01:1.0.0
 
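To illustrate the per-day stream problem in [1]: the date is baked into the stream ID, so a receiver bound to one stream name misses the next day's stream. The helper below is a hypothetical sketch of normalizing such IDs to one logical name; it is not part of the DAS API.

```java
// Hypothetical illustration: stream IDs such as "log.0.AM.2015.12.02:1.0.0"
// embed the date, so this helper strips the trailing date and version to
// recover a single logical stream name. Not a real DAS/Carbon API.
public class StreamId {

    // "log.0.AM.2015.12.02:1.0.0" -> "log.0.AM"
    static String logicalName(String streamId) {
        String name = streamId.split(":", 2)[0];                     // drop the ":version" suffix
        return name.replaceAll("\\.\\d{4}\\.\\d{2}\\.\\d{2}$", "");  // drop the ".yyyy.MM.dd" date
    }

    // "log.0.AM.2015.12.01:1.0.0" -> "1.0.0"
    static String version(String streamId) {
        String[] parts = streamId.split(":", 2);
        return parts.length == 2 ? parts[1] : "";
    }

    public static void main(String[] args) {
        System.out.println(logicalName("log.0.AM.2015.12.02:1.0.0")); // log.0.AM
        System.out.println(version("log.0.AM.2015.12.01:1.0.0"));     // 1.0.0
    }
}
```

With a mapping like this, events from every daily stream could be funneled into one receiver/table, which is the behavior the per-day definitions currently prevent.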

 [2]
 log4j:ERROR Could not instantiate class
 [org.wso2.carbon.logging.service.appender.LogEventAppender].
 java.lang.ClassNotFoundException:
 org.wso2.carbon.logging.service.appender.LogEventAppender cannot be found
 by org.wso2.carbon.logging_4.4.1
 at
 org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:455)
 at
 org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
 at
 org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
 at
 org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:191)
 at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
 at
 org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
 at
 org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
 at
 org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
 at
 org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
 at
 org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
 at
 org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
 at
 org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
 at
 org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
 at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
 at
 org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
 at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
 at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
 at com.atomikos.logging.Slf4jLogger.<init>(Slf4jLogger.java:8)
 at
 com.atomikos.logging.Slf4JLoggerFactoryDelegate.createLogger(Slf4JLoggerFactoryDelegate.java:7)
 at
 com.atomikos.logging.LoggerFactory.createLogger(LoggerFactory.java:12)
 at com.atomikos.logging.LoggerFactory.<clinit>(LoggerFactory.java:52)
 at
 com.atomikos.transactions.internal.AtomikosActivator.<init>(AtomikosActivator.java:47)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at java.lang.Class.newInstance(Class.java:379)
 at
 org.eclipse.osgi.framework.internal.core.AbstractBundle.loadBundleActivator(AbstractBundle.java:167)
 at
 

Re: [Dev] Publishing carbon logs to DAS

2015-12-01 Thread Imesh Gunaratne
Hi Suho/Anjana,

I noticed that we are working on a feature called Log Analyzer. Is this for
centralized logging?
If not, what's the approach we are taking for $subject with DAS?

Thanks

On Wed, Dec 2, 2015 at 11:16 AM, Anuruddha Liyanarachchi <
anurudd...@wso2.com> wrote:

> Hi,
>
> I am trying to publish carbon logs to DAS and I am facing following
> problems.
>
> *In carbon 4.2.0 products (APIM 1.9.1) :*
> For each day stream definitions are created [1], therefore I can't use a
> common event receiver to persist data.
>
>
> *In carbon 4.4.0 products (ESB 4.9.0) :*
> Throws class not found error [2].
>
> Is there a way to solve these issues ?
>
>
> [1]log.0.AM.2015.12.02:1.0.0
> 
> log.0.AM.2015.12.01:1.0.0
> 
>
> [2]
> log4j:ERROR Could not instantiate class
> [org.wso2.carbon.logging.service.appender.LogEventAppender].
> java.lang.ClassNotFoundException:
> org.wso2.carbon.logging.service.appender.LogEventAppender cannot be found
> by org.wso2.carbon.logging_4.4.1
> at
> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:455)
> at
> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
> at
> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
> at
> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:191)
> at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
> at
> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
> at
> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
> at
> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
> at
> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
> at
> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
> at
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
> at
> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
> at
> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
> at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
> at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
> at com.atomikos.logging.Slf4jLogger.<init>(Slf4jLogger.java:8)
> at
> com.atomikos.logging.Slf4JLoggerFactoryDelegate.createLogger(Slf4JLoggerFactoryDelegate.java:7)
> at com.atomikos.logging.LoggerFactory.createLogger(LoggerFactory.java:12)
> at com.atomikos.logging.LoggerFactory.<clinit>(LoggerFactory.java:52)
> at
> com.atomikos.transactions.internal.AtomikosActivator.<init>(AtomikosActivator.java:47)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at java.lang.Class.newInstance(Class.java:379)
> at
> org.eclipse.osgi.framework.internal.core.AbstractBundle.loadBundleActivator(AbstractBundle.java:167)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:679)
> at
> org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
> at
> org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
> at
> org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
> at
> org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
> at
> org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
> at
> org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
> at
> org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
> at
> org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
> at
> org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
> at
> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
> at
> org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
> log4j:ERROR Could not instantiate appender named "LOGEVENT".
>
> --
> *Thanks and Regards,*
> Anuruddha Lanka Liyanarachchi

Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Anuruddha Premalal
Hi Anjana,

As per the initial design, we have two payload parameters (loggroup,
logstream) for each and every log. So it is mandatory for the user to
configure these additional parameters in Logstash (Logstash supports
additional parameters). We are storing this data in a separate data table
for UI-related functionality (file upload); and yes, we can cache them as
you've mentioned.
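As a rough sketch of the caching idea above (hypothetical, not the DAS/LAS API): keep the set of columns already known to be in the table schema, and only trigger a schema update when an incoming event introduces a field we haven't seen before.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of caching the known schema columns so the schema
// only needs updating when a log source sends a new arbitrary field.
public class SchemaCache {

    private final Set<String> indexedColumns = new LinkedHashSet<>();

    /** Returns true if the event introduced new columns (schema update needed). */
    boolean mergeFields(Set<String> eventFields) {
        return indexedColumns.addAll(eventFields);
    }

    Set<String> columns() { return indexedColumns; }

    public static void main(String[] args) {
        SchemaCache cache = new SchemaCache();
        System.out.println(cache.mergeFields(Set.of("loggroup", "logstream"))); // true: new columns
        System.out.println(cache.mergeFields(Set.of("loggroup")));              // false: already known
        System.out.println(cache.columns().size());                             // 2
    }
}
```

In the common case the cache answers false and no schema call is made, which avoids reloading and resetting the table schema for every event.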



On Wed, Dec 2, 2015 at 11:09 AM, Madhawa Gunasekara 
wrote:

> Hi All,
>
> I think we can get some information by uploading sample logs from the
> agent. Then we can analyze that sample log and find the exact fields that
> can appear in the logs, and configure the agent according to the
> findings. From a sample log file we can also analyze rare and frequent
> logs, and so on. This feature is available in Splunk.
>
> Thanks,
> Madhawa
>
> On Wed, Dec 2, 2015 at 10:17 AM, Sachith Withana  wrote:
>
>> Now that we are using logstash out of the box, without the DASConnector,
>> it won't do that.
>>
>> Logstash would just start publishing, and with the current design,
>> AFAIK the schema setting would be handled by the LAS server.
>>
>> BTW for that requirement, can we provide a way to allow indexing all the
>> columns?
>>
>> On Wed, Dec 2, 2015 at 10:11 AM, Anjana Fernando  wrote:
>>
>>> Hi Sachith,
>>>
>>> Doesn't the agent have the knowledge of the log types/categories and
>>> their field information when it is initializing? .. as in, as I understood,
>>> we give what fields needs to be sent out in the configurations, isn't that
>>> the case? ..
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Wed, Dec 2, 2015 at 10:01 AM, Sachith Withana 
>>> wrote:
>>>
 Hi All,

 There might be a slight issue. We wouldn't know the arbitrary fields
 before the log agent starts publishing, since the agent only publishes and
 we don't have control over which fields would be sent ( unless we configure
 all the agents ourselves). So we would have to check, for each event,
 whether there are new fields beyond those already in the schema. This is
 undesirable.

 And as Anjana pointed out we don't have a way to specify to index all
 the arbitrary values unless we set the schema accordingly.

 Is it possible to specify in the schema to index everything?

 On Wed, Dec 2, 2015 at 9:38 AM, Anjana Fernando 
 wrote:

> Hi Malith,
>
> The functionality you're requesting is very specific, and from the DAS
> side it doesn't make sense to implement something this rarely used in a
> generic way. It is in any case not how the log analyzer should use it.
> The different log sources will know their fields before they send out
> data, so the fields don't have to be checked every time an event is
> published. A log source would first tell the log analyzer backend API
> which new fields that specific log source will be sending; with that
> earlier message, the backend service will set the global table's schema
> properly, and then the remote log agent can send out log records to be
> processed by the server.
>
> Cheers,
> Anjana.
>
> On Tue, Dec 1, 2015 at 6:44 PM, Malith Dhanushka 
> wrote:
>
>> Hi Anjana,
>>
>> Yes. Requirement is for the internal log related REST API which is
>> being written using osgi services. In the perspective of log analysis 
>> data,
>> we have one master table to persist all the log events from different log
>> sources. The way log data comes in to log REST API is as arbitrary 
>> fields.
>> So different log sources have different set of arbitrary fields which 
>> leads
>> log REST API to change the schema of master table every time it receives
>> log events from a new/updated log source. That's what i meant inaccurate
>> which can be solved much cleaner way by having that flag to index or not 
>> to
>> index arbitrary fields for a particular stream.
>>
>> Thanks,
>> Malith
>>
>> On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando 
>> wrote:
>>
>>> Hi Malith,
>>>
>>> No, it cannot be done like that. The way indexing works is that it
>>> looks up the schema for a table and does the indexing according to
>>> that, so the table schema must be set beforehand. It is not a dynamic
>>> thing that can be set when arbitrary fields are sent to the receiver,
>>> and the receiver cannot load the current schema and reset it for each
>>> event; even though we could cache that information and do some
>>> operations, that gets complicated. So the idea is that it is the
>>> client's responsibility to
>>> set the target table's schema properly beforehand, which may or may not

Re: [Dev] US Election 2016 Tweet Analyze System

2015-12-01 Thread Yasara Dissanayake
Hi,

Left corner:
The top 3 election candidates are displayed at the top, in large letters,
and the other candidates are displayed in the left sidebar. We can visit
any candidate's page; this is the Trump page.

Middle:
Community graph:
Trump's community graph is displayed in the middle. The nodes represent
Twitter accounts, the color of a node indicates the candidate, the color
shade indicates the number of tweets produced by that account, and the
size of the node indicates the retweet count that account gets. Dinali is
working on this.

Retweet list:
A list of top tweets is displayed below the community graph, based on a
rank that is directly proportional to the retweet count and inversely
proportional to the lifetime of the tweet.

Right corner:

The result of the positive and negative sentiment analysis is displayed
below the photograph of the owner of that page. We intend to display the
result using a graph. Yudhanjaya is working on this.

Below that, we display the owner's (here, Trump's) unique hashtags, based
on the popular tweets (note that these hashtags are updated over time).

A doughnut chart is used to display the current winning percentage of that
candidate compared to the other candidates. We currently have only a rough
draft, to be implemented after the machine learning part.

@Nirmal, thank you. We are still integrating our parts, and the database
is not completed yet. Thank you for the correction; it should be sentiment
analysis. The hashtags are based on the popular tweets at that time, and
they change with the list of tweets we select as most popular. Sentiment
analysis is not integrated into the dashboard yet; the percentage for
Hillary Clinton is a hard-coded value, and we'll correct it and upload the
revised version soon. Thank you for the comments.

regards,
Yasara
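The ranking rule described above (directly proportional to retweet count, inversely proportional to tweet lifetime) could be sketched as below. The +1 smoothing term is an assumption added here to avoid division by zero for brand-new tweets; the actual formula used in the project may differ.

```java
// Hedged sketch of the tweet ranking rule: rank grows with retweets and
// decays with the tweet's age. The +1 hour smoothing is an assumption.
public class TweetRank {

    static double rank(long retweets, double ageHours) {
        return retweets / (ageHours + 1.0);
    }

    public static void main(String[] args) {
        System.out.println(rank(500, 4));  // 100.0: recent, heavily retweeted
        System.out.println(rank(500, 24)); // 20.0: same retweets, older tweet
    }
}
```

Sorting the candidate's tweets by this score in descending order would produce the retweet list shown below the community graph.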

On Wed, Dec 2, 2015 at 10:09 AM, Nirmal Fernando  wrote:

> Hi Yasara,
>
> Please explain the UI (as the UI is at very early stages, it's not easy to
> grasp stuff) :-)
>
> On Wed, Dec 2, 2015 at 9:47 AM, Yasara Dissanayake 
> wrote:
>
>> Hi,
>>
>> This is the snap shots of the final Integration of  the website.
>>
>> Please leave your comments.
>>
>> regards.
>>
>> On Tue, Dec 1, 2015 at 1:34 PM, Yudhanjaya Wijeratne > > wrote:
>>
>>> +1 :)
>>>
>>> On Tue, Dec 1, 2015 at 1:10 PM, Srinath Perera  wrote:
>>>
 I might WFH. Shall we meet Thursday 11am?

 On Tue, Dec 1, 2015 at 12:20 PM, Yudhanjaya Wijeratne <
 yudhanj...@wso2.com> wrote:

> Hi Srinath,
>
> +1 to all. I think sentiment analysis will take the form of a x-y
> graph charting the ups and downs. Shall I come to Trace tomorrow morning?
>
> Thanks,
> Yudha
>
> On Tue, Dec 1, 2015 at 11:21 AM, Srinath Perera 
> wrote:
>
>> Hi Yudhanjaya,
>>
>> Yasara and Dinali have the basics for twitter graph and most
>> important tweets in place. We need to design the story around this. ( I 
>> am
>> ccing Dakshika so we can get UX feedback from him).
>>
>> Dakshika, we are trying to build a website to analyze the US election
>> data for twitter.
>>
>> IMO we have not figured out the story yet, although we have
>> individual pieces. Following are my comments.
>>
>>
>>1. Looking at Twitter graph, I feel showing number of tweet each
>>user did does not tell anything useful.
>>2. I feel twitter graph should include all tweeps, not tweeter
>>graph for one candidate. ( we can do color coding to show what are 
>> tweeps
>>for focus candidate)
>>3. I agree users want to know about one candidate. But I think we
>>need to show the data in contrast. Shall we show each candidate's 
>> data in
>>contrast to the first. ( For first we contrast with second)
>>
>> We also need to do sentimental analysis one and figure out where it
>> fit in.
>>
>> When you will be in Trace? We should meet and discuss.
>>
>> Thanks
>> Srinath
>>
>>
>>
>> On Fri, Nov 20, 2015 at 8:34 AM, Yudhanjaya Wijeratne <
>> yudhanj...@wso2.com> wrote:
>>
>>> Srinath,
>>>
>>> +1. RT's would show influence best.
>>>
>>>
>>>
>>> On Fri, Nov 20, 2015 at 8:32 AM, Srinath Perera 
>>> wrote:
>>>
 Hi Yudhanjaya,

 On Thu, Nov 19, 2015 at 3:40 PM, Yudhanjaya Wijeratne <
 yudhanj...@wso2.com> wrote:

> Hi Srinath,
>
> Regarding Dinali's graph, we had a chat and realized that using
> the width of the edge makes the graph harder to read as smaller 
> connections
> are hidden. What if we did it this way:
>
> *Distance between nodes = 1 / RTs between nodes*
>

 Actually we do not need to do anything. Force base layouts we use
 will put 

Re: [Dev] Publishing carbon logs to DAS

2015-12-01 Thread Sinthuja Ragendran
On Wed, Dec 2, 2015 at 12:10 PM, Sriskandarajah Suhothayan 
wrote:

> Since the Log Analytics solution will take some time to arrive, can we
> release the log publishing part of the Log Analytics solution ASAP, so
> that others can publish logs to DAS?
>

Just publishing the logs to DAS is not going to add much value without a
good dashboard, etc., is it?
AFAIR, we decided to ship a fully featured log analytics solution rather
than release partial pieces.

Thanks,
Sinthuja.


> Suho
>
> On Wed, Dec 2, 2015 at 12:08 PM, Malith Dhanushka  wrote:
>
>> Yes. The Log Analyzer, which is being written on top of the DAS
>> platform, will be based on the Logstash HTTP publisher.
>>
>> Thanks
>>
>> On Wed, Dec 2, 2015 at 11:58 AM, Sinthuja Ragendran 
>> wrote:
>>
>>> Hi,
>>>
>>> The way forward for log monitoring is the Log Analytics solution that
>>> we are working on. The old log publishing method is broken and cannot
>>> be used with the latest Carbon release products, because it is tightly
>>> coupled with Cassandra, Hadoop, etc., and hence we can't use it with
>>> the current DAS. However, once the LA solution/log publisher (based on
>>> the Logstash publisher) is available, it can be used with existing
>>> WSO2 products as well.
>>>
>>> Thanks,
>>> Sinthuja.
>>>
>>>
>>> On Wed, Dec 2, 2015 at 11:29 AM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
 Hi DAS team

 The current log publishing is broken.
 What's the recommended log publishing approach going forward?

 Suho

 On Wed, Dec 2, 2015 at 11:27 AM, Imesh Gunaratne 
 wrote:

> Hi Suho/Anjana,
>
> I noticed that we are working on a feature called Log Analyzer. Is
> this for centralized logging?
> If not what's the approach we are taking for $subject with DAS?
>
> Thanks
>
> On Wed, Dec 2, 2015 at 11:16 AM, Anuruddha Liyanarachchi <
> anurudd...@wso2.com> wrote:
>
>> Hi,
>>
>> I am trying to publish carbon logs to DAS and I am facing following
>> problems.
>>
>> *In carbon 4.2.0 products (APIM 1.9.1) :*
>> For each day stream definitions are created [1], therefore I can't
>> use a common event receiver to persist data.
>>
>>
>> *In carbon 4.4.0 products (ESB 4.9.0) :*
>> Throws class not found error [2].
>>
>> Is there a way to solve these issues ?
>>
>>
>> [1]log.0.AM.2015.12.02:1.0.0
>> 
>> log.0.AM.2015.12.01:1.0.0
>> 
>>
>> [2]
>> log4j:ERROR Could not instantiate class
>> [org.wso2.carbon.logging.service.appender.LogEventAppender].
>> java.lang.ClassNotFoundException:
>> org.wso2.carbon.logging.service.appender.LogEventAppender cannot be found
>> by org.wso2.carbon.logging_4.4.1
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:455)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>> at
>> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>> at java.lang.Class.forName0(Native Method)
>> at java.lang.Class.forName(Class.java:191)
>> at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
>> at
>> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
>> at
>> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
>> at
>> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
>> at
>> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>> at
>> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>> at
>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>> at
>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>> at
>> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>> at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>> at
>> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
>> at com.atomikos.logging.Slf4jLogger.<init>(Slf4jLogger.java:8)
>> at
>> com.atomikos.logging.Slf4JLoggerFactoryDelegate.createLogger(Slf4JLoggerFactoryDelegate.java:7)
>> at
>> 

Re: [Dev] WSO2 Committers += Ruwan Abeykoon

2015-12-01 Thread Rasika Perera
Congratulations, Ruwan!!!

On Tue, Dec 1, 2015 at 2:52 PM, Malintha Adikari  wrote:

> Congratulations Ruwan.
>
> On Tue, Dec 1, 2015 at 1:30 PM, Dinusha Senanayaka 
> wrote:
>
>> Hi All,
>>
>> It is my pleasure to welcome Ruwan Abeykoon as a WSO2 Committer.  Ruwan,
>> congratulations and keep up the good work.
>>
>> Regards,
>> Dinusha.
>>
>> --
>> Dinusha Dilrukshi
>> Associate Technical Lead
>> WSO2 Inc.: http://wso2.com/
>> Mobile: +94725255071
>> Blog: http://dinushasblog.blogspot.com/
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> *Malintha Adikari*
> Software Engineer
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> Mobile: +94 71 2312958
> Blog:http://malinthas.blogspot.com
> Page:   http://about.me/malintha
>
>
>


-- 
With Regards,

*Rasika Perera*
Software Engineer
M: +94 71 680 9060 E: rasi...@wso2.com
LinkedIn: http://lk.linkedin.com/in/rasika90

WSO2 Inc. www.wso2.com
lean.enterprise.middleware


Re: [Dev] US Election 2016 Tweet Analyze System

2015-12-01 Thread Dakshika Jayathilaka
Hi Srinath,

It seems I missed this thread. Anyway, shall we meet to build a good story
and design concept?

Regards,

*Dakshika Jayathilaka*
PMC Member & Committer of Apache Stratos
Senior Software Engineer
WSO2, Inc.
lean.enterprise.middleware
0771100911

On Wed, Dec 2, 2015 at 11:49 AM, Yasara Dissanayake  wrote:

> Hi,
>
> Left corner:
> The top 3 election candidates are displayed at the top, in large letters,
> and the other candidates are displayed in the left sidebar. We can visit
> any candidate's page; this is the Trump page.
>
> Middle:
> Community graph:
> Trump's community graph is displayed in the middle. The nodes represent
> Twitter accounts, the color of a node indicates the candidate, the color
> shade indicates the number of tweets produced by that account, and the
> size of the node indicates the retweet count that account gets. Dinali is
> working on this.
>
> Retweet list:
> A list of top tweets is displayed below the community graph, based on a
> rank that is directly proportional to the retweet count and inversely
> proportional to the lifetime of the tweet.
>
> Right corner:
>
> The result of the positive and negative sentiment analysis is displayed
> below the photograph of the owner of that page. We intend to display the
> result using a graph. Yudhanjaya is working on this.
>
> Below that, we display the owner's (here, Trump's) unique hashtags, based
> on the popular tweets (note that these hashtags are updated over time).
>
> A doughnut chart is used to display the current winning percentage of
> that candidate compared to the other candidates. We currently have only a
> rough draft, to be implemented after the machine learning part.
>
> @Nirmal, thank you. We are still integrating our parts, and the database
> is not completed yet. Thank you for the correction; it should be
> sentiment analysis. The hashtags are based on the popular tweets at that
> time, and they change with the list of tweets we select as most popular.
> Sentiment analysis is not integrated into the dashboard yet; the
> percentage for Hillary Clinton is a hard-coded value, and we'll correct
> it and upload the revised version soon. Thank you for the comments.
>
> regards,
> Yasara
>
> On Wed, Dec 2, 2015 at 10:09 AM, Nirmal Fernando  wrote:
>
>> Hi Yasara,
>>
>> Please explain the UI (as the UI is at very early stages, it's not easy
>> to grasp stuff) :-)
>>
>> On Wed, Dec 2, 2015 at 9:47 AM, Yasara Dissanayake 
>> wrote:
>>
>>> Hi,
>>>
>>> This is the snap shots of the final Integration of  the website.
>>>
>>> Please leave your comments.
>>>
>>> regards.
>>>
>>> On Tue, Dec 1, 2015 at 1:34 PM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 +1 :)

 On Tue, Dec 1, 2015 at 1:10 PM, Srinath Perera 
 wrote:

> I might WFH. Shall we meet Thursday 11am?
>
> On Tue, Dec 1, 2015 at 12:20 PM, Yudhanjaya Wijeratne <
> yudhanj...@wso2.com> wrote:
>
>> Hi Srinath,
>>
>> +1 to all. I think sentiment analysis will take the form of a x-y
>> graph charting the ups and downs. Shall I come to Trace tomorrow morning?
>>
>> Thanks,
>> Yudha
>>
>> On Tue, Dec 1, 2015 at 11:21 AM, Srinath Perera 
>> wrote:
>>
>>> Hi Yudhanjaya,
>>>
>>> Yasara and Dinali have the basics for twitter graph and most
>>> important tweets in place. We need to design the story around this. ( I 
>>> am
>>> ccing Dakshika so we can get UX feedback from him).
>>>
>>> Dakshika, we are trying to build a website to analyze the US
>>> election data for twitter.
>>>
>>> IMO we have not figured out the story yet, although we have
>>> individual pieces. Following are my comments.
>>>
>>>
>>>1. Looking at Twitter graph, I feel showing number of tweet each
>>>user did does not tell anything useful.
>>>2. I feel twitter graph should include all tweeps, not tweeter
>>>graph for one candidate. ( we can do color coding to show what are 
>>> tweeps
>>>for focus candidate)
>>>3. I agree users want to know about one candidate. But I think
>>>we need to show the data in contrast. Shall we show each candidate's 
>>> data
>>>in contrast to the first. ( For first we contrast with second)
>>>
>>> We also need to do sentimental analysis one and figure out where it
>>> fit in.
>>>
>>> When you will be in Trace? We should meet and discuss.
>>>
>>> Thanks
>>> Srinath
>>>
>>>
>>>
>>> On Fri, Nov 20, 2015 at 8:34 AM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 Srinath,

 +1. RT's would show influence best.



 On Fri, Nov 20, 2015 at 8:32 AM, Srinath Perera 
 wrote:

> Hi Yudhanjaya,
>
> On Thu, Nov 19, 2015 at 3:40 PM, 

Re: [Dev] Publishing carbon logs to DAS

2015-12-01 Thread Sriskandarajah Suhothayan
Hi DAS team

The current log publishing is broken.
What's the recommended log publishing approach going forward?

Suho

On Wed, Dec 2, 2015 at 11:27 AM, Imesh Gunaratne  wrote:

> Hi Suho/Anjana,
>
> I noticed that we are working on a feature called Log Analyzer. Is this
> for centralized logging?
> If not what's the approach we are taking for $subject with DAS?
>
> Thanks
>
> On Wed, Dec 2, 2015 at 11:16 AM, Anuruddha Liyanarachchi <
> anurudd...@wso2.com> wrote:
>
>> Hi,
>>
>> I am trying to publish carbon logs to DAS and I am facing following
>> problems.
>>
>> *In carbon 4.2.0 products (APIM 1.9.1) :*
>> For each day stream definitions are created [1], therefore I can't use a
>> common event receiver to persist data.
>>
>>
>> *In carbon 4.4.0 products (ESB 4.9.0) :*
>> Throws class not found error [2].
>>
>> Is there a way to solve these issues ?
>>
>>
>> [1]log.0.AM.2015.12.02:1.0.0
>> 
>> log.0.AM.2015.12.01:1.0.0
>> 
>>
>> [2]
>> log4j:ERROR Could not instantiate class
>> [org.wso2.carbon.logging.service.appender.LogEventAppender].
>> java.lang.ClassNotFoundException:
>> org.wso2.carbon.logging.service.appender.LogEventAppender cannot be found
>> by org.wso2.carbon.logging_4.4.1
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:455)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>> at
>> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>> at java.lang.Class.forName0(Native Method)
>> at java.lang.Class.forName(Class.java:191)
>> at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
>> at
>> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
>> at
>> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
>> at
>> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
>> at
>> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>> at
>> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>> at
>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>> at
>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>> at
>> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>> at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
>> at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
>> at com.atomikos.logging.Slf4jLogger.<init>(Slf4jLogger.java:8)
>> at
>> com.atomikos.logging.Slf4JLoggerFactoryDelegate.createLogger(Slf4JLoggerFactoryDelegate.java:7)
>> at com.atomikos.logging.LoggerFactory.createLogger(LoggerFactory.java:12)
>> at com.atomikos.logging.LoggerFactory.<clinit>(LoggerFactory.java:52)
>> at
>> com.atomikos.transactions.internal.AtomikosActivator.<init>(AtomikosActivator.java:47)
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>> at java.lang.Class.newInstance(Class.java:379)
>> at
>> org.eclipse.osgi.framework.internal.core.AbstractBundle.loadBundleActivator(AbstractBundle.java:167)
>> at
>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:679)
>> at
>> org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
>> at
>> org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
>> at
>> org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
>> at
>> org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
>> at
>> org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
>> at
>> org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
>> at
>> org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
>> at
>> org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
>> at
>> org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
>> at

Re: [Dev] [EMM] Doubt on Server Url to be taken from carbon.xml HostName value or some other configuration.

2015-12-01 Thread Chamara Ariyarathne
I disagree with that. Using the carbon.xml HostName value is not just for
constructing a URL in an email; that was only one requirement.

There are further issues caused by not using the carbon.xml HostName
value in the product.
https://wso2.org/jira/browse/EMM-1017

On Tue, Dec 1, 2015 at 9:09 PM, Dulitha Wijewantha  wrote:

>
>
> On Thu, Nov 26, 2015 at 8:33 AM, Afkham Azeez  wrote:
>
>> I think your requirement is to send a URL to the client in an email.
>> The best option is to define the entire URL as a config element and use
>> that without complicating things so much.
>>
>
> ​+1 for this.
> ​
>
>
>>
>>
>
>> On Thu, Nov 26, 2015 at 6:53 PM, Geeth Munasinghe  wrote:
>>
>>>
>>>
>>> On Thu, Nov 26, 2015 at 11:56 AM, Sameera Jayasoma 
>>> wrote:
>>>
 At the moment carbon.xml contains the proxy host and proxy context path of the
 worker cluster, but the proxy port of the worker cluster is missing. Therefore
 we need to add this to carbon.xml.

 Suggestion is to put following properties under the "Ports" element.

 80
 443

 WDYT?

>>>
>>> +1
>>>
>>> If both the worker and manager nodes are exposed globally, we are able
>>> to get the host name from carbon.xml and the proxy port from
>>> catalina-server.xml.
>>>
>>> But there is a deployment scenario where the proxy port cannot be taken
>>> from catalina-server.xml.
>>>
>>> Our use case: the EMM administrator adds users and sends emails with
>>> instructions to enroll the mobile device. We use the manager node to add
>>> the user and send the email, but devices will be enrolled to the worker
>>> node, so the email sent by the manager node contains the URL of the worker
>>> nodes; that means it has the proxy hostname and the proxy port of the
>>> worker. In a setup where the manager node is not exposed to the outside
>>> world and only worker nodes are exposed globally through the LB, the proxy
>>> port is not configured on the manager node. The manager node can be
>>> accessed only from the internal network, which is a valid use case for
>>> many companies where security is a major concern. In this case we are not
>>> able to get the proxy port of the worker nodes from the manager node.
>>>
>>> I think above parameters would fix our problem. I have created a jira
>>> [1] for this.
>>>
>>> [1] https://wso2.org/jira/browse/CARBON-15659
>>>
>>> Thanks
>>> Geeth
>>>
>>>
 Thanks,
 Sameera.

 On Tue, Nov 24, 2015 at 10:34 AM, Sameera Jayasoma 
 wrote:

> +1. We should use carbon.xml at all costs; otherwise we are adding
> unnecessary overhead in configuring the products. You can see how we
> generate other URLs. We do have a few util methods; please reuse the util
> methods.
>
> When you calculate the URL, you need to consider following parameters.
>
> hostname
> proxy port or port
> proxy path etc
>
> Thanks,
> Sameera.
>
> On Tue, Nov 24, 2015 at 8:17 AM, Selvaratnam Uthaiyashankar <
> shan...@wso2.com> wrote:
>
>> I agree with Chamara. We have a way to configure the public hostname
>> (HostName, MgtHostName in carbon.xml) and port (proxy port in
>> tomcat/catalina-server.xml). This is what is used in generating service
>> endpoints, WSDL URLs, etc. when a server is fronted with an LB. I don't see
>> any need for EMM to have a new configuration.
>>
>> On Tue, Nov 24, 2015 at 12:41 AM, Geeth Munasinghe 
>> wrote:
>>
>>>
>>>
>>> On Tue, Nov 24, 2015 at 12:12 AM, Chamara Ariyarathne <
>>> chama...@wso2.com> wrote:
>>>
 Hi Milan. Thanks for the information. We will try this tomorrow.
 But our purpose is to replace this whole URL with a configured host name.

 However Geeth, I think the EMM team having to introduce a new
 config to hold the globally exposed server URL deviates from the purpose of
 having the HostName and MgtHostname properties in carbon.xml.

>>>
>>> Chamara,
>>> I think I disagree with you on that point. I don't think the carbon
>>> hostname or mgt host name can be used for globally exposing the server
>>> URL. AFAIK there is no place to put the port number in carbon.xml, and
>>> there is no point in having just a host name without the port number. The
>>> carbon.xml host name will be the server IP address or the host name of the
>>> server on which the product is running, as clearly mentioned in the
>>> document [1].
>>>
>>> As another reference, AFAIK in ESB we use WSDLPrefix [2] in order
>>> to change the address endpoint of generated WSDLs to the LB's address
>>> when the ESB is fronted by an LB.
>>>
>>> So I think introducing a new config to put the LB host name and port
>>> is valid.
>>>
>>> [1] https://docs.wso2.com/display/Carbon440/Configuring+carbon.xml
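
For illustration, a minimal sketch of how such an externally visible URL could be assembled from a configured host name, worker proxy port, and context path, as discussed above. This is not the actual Carbon utility API; the class, method, and example values are made up for the sketch:

```java
public class PublicUrlBuilder {

    // Build the externally visible URL from the values the thread proposes
    // to keep in configuration (public host name plus a worker proxy port).
    static String buildUrl(String scheme, String hostName, int proxyPort, String context) {
        StringBuilder sb = new StringBuilder(scheme).append("://").append(hostName);
        // Omit default ports so generated links stay clean.
        boolean defaultPort = ("http".equals(scheme) && proxyPort == 80)
                || ("https".equals(scheme) && proxyPort == 443);
        if (!defaultPort) {
            sb.append(':').append(proxyPort);
        }
        return sb.append(context).toString();
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("https", "emm.example.com", 443, "/emm"));
        System.out.println(buildUrl("https", "emm.example.com", 8243, "/emm"));
    }
}
```

With the proxy port added to carbon.xml as proposed, the email-generation code would only need these three configured values to construct enrollment links.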

[Dev] A script to invoke JMX operations on Metrics Manager MBean in a WSO2 server

2015-12-01 Thread Isuru Perera
Hi,

I wrote a script to enable/disable metrics. The requirement came from
Chathurike, who wanted to disable Metrics during the warm-up period when
doing performance tests.

The script is available at [1] and I wrote a blog post about it [2]. The
script works only on Java 8.

Thanks!

Best Regards,

[1]
https://gist.github.com/chrishantha/2fd43ba4ada79d2f6bdc#file-metrics_jmx_operations-js
[2] http://isuru-perera.blogspot.com/2015/12/running-java-in-script.html

-- 
Isuru Perera
Associate Technical Lead | WSO2, Inc. | http://wso2.com/
Lean . Enterprise . Middleware

about.me/chrishantha
Contact: +IsuruPereraWSO2 
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
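
As background, invoking an operation on an MBean over JMX from plain Java follows the pattern below. The MBean object name and operation name here are assumptions for illustration only, not the actual names exposed by the WSO2 Metrics component; the script in [1] does the equivalent from Nashorn JavaScript:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MetricsJmxClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical MBean name; the real name registered by the server may differ.
        ObjectName mbean = new ObjectName("org.wso2.carbon:type=MetricManager");
        System.out.println(mbean.getCanonicalName());

        // Only attempt a remote call if a JMX service URL is supplied,
        // e.g. service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi
        if (args.length == 1) {
            JMXServiceURL url = new JMXServiceURL(args[0]);
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                // Invoke a no-argument operation; "disableMetrics" is an assumed name.
                conn.invoke(mbean, "disableMetrics", new Object[0], new String[0]);
            }
        }
    }
}
```

The same pattern works for any MBean operation; only the object name, operation name, and argument arrays change.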


Re: [Dev] WSO2 Committers += Pubudu Gunatilaka

2015-12-01 Thread Lahiru Sandaruwan
Congratz Pubudu!

On Mon, Nov 30, 2015 at 10:47 PM, Imesh Gunaratne  wrote:

> Hi Devs,
>
> It's my pleasure to announce Pubudu as a WSO2 committer. Pubudu has made
> great contributions to Apache Stratos & WSO2 Private PaaS. As a recognition
> of his work he has been voted as a WSO2 committer.
>
> Pubudu, welcome aboard! Keep up the good work!
>
> Thanks
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.gunaratne.org
> Lean . Enterprise . Middleware
>
>
>
>


-- 
--
Lahiru Sandaruwan
Committer and PMC member, Apache Stratos,
Senior Software Engineer,
WSO2 Inc., http://wso2.com
lean.enterprise.middleware

phone: +94773325954
email: lahi...@wso2.com blog: http://lahiruwrites.blogspot.com/
linked-in: http://lk.linkedin.com/pub/lahiru-sandaruwan/16/153/146


Re: [Dev] Publishing carbon logs to DAS

2015-12-01 Thread Anuruddha Premalal
Hi Suho,

In the log analysis solution we are using an HTTP publisher (out of the box
from Logstash) to publish data to a REST endpoint. Rather than coupling the
publishing agent to a log4j appender, this would be a much cleaner approach;
there are a couple of other options too, Fluentd for example.

If users need to develop their own log analytics solutions, they should be
able to use their own publishers and use DAS features for analytics. WDYT?
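
For reference, a Logstash `http` output along the lines described might look like this; the endpoint URL is a placeholder, not an actual DAS endpoint:

```text
output {
  http {
    url => "https://das.example.com:9443/analytics/logs"  # placeholder endpoint
    http_method => "post"
    format => "json"
  }
}
```

This keeps the publishing side entirely outside the server's log4j configuration, so any agent that can POST JSON can feed the same REST endpoint.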


On Wed, Dec 2, 2015 at 12:10 PM, Sriskandarajah Suhothayan 
wrote:

> Since the Log Analytics solution will take some time to arrive,
> can we release the log publishing part of it ASAP
> so that others can publish logs to DAS?
>
> Suho
>
> On Wed, Dec 2, 2015 at 12:08 PM, Malith Dhanushka  wrote:
>
>> Yes. The log analyzer, which is being written on top of the DAS platform,
>> will be based on the Logstash HTTP publisher.
>>
>> Thanks
>>
>> On Wed, Dec 2, 2015 at 11:58 AM, Sinthuja Ragendran 
>> wrote:
>>
>>> Hi,
>>>
>>> The way forward on log monitoring is the Log Analytics solution
>>> that we are working on. The old log publishing method is broken
>>> and cannot be used with the latest Carbon release products, because it is
>>> tightly coupled with Cassandra, Hadoop, etc., and hence we can't use it
>>> with the current DAS. However, once the LA solution/log publisher (based
>>> on the Logstash publisher) is available, it can be used with existing WSO2
>>> products as well.
>>>
>>> Thanks,
>>> Sinthuja.
>>>
>>>
>>> On Wed, Dec 2, 2015 at 11:29 AM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
 Hi DAS team

 The current log publishing is broken.
 What's the recommended log publishing approach going forward?

 Suho

 On Wed, Dec 2, 2015 at 11:27 AM, Imesh Gunaratne 
 wrote:

> Hi Suho/Anjana,
>
> I noticed that we are working on a feature called Log Analyzer. Is
> this for centralized logging?
> If not what's the approach we are taking for $subject with DAS?
>
> Thanks
>
> On Wed, Dec 2, 2015 at 11:16 AM, Anuruddha Liyanarachchi <
> anurudd...@wso2.com> wrote:
>
>> Hi,
>>
>> I am trying to publish carbon logs to DAS and I am facing following
>> problems.
>>
>> *In carbon 4.2.0 products (APIM 1.9.1) :*
>> Stream definitions are created per day [1]; therefore I can't
>> use a common event receiver to persist the data.
>>
>>
>> *In carbon 4.4.0 products (ESB 4.9.0) :*
>> Throws class not found error [2].
>>
>> Is there a way to solve these issues ?
>>
>>
>> [1]log.0.AM.2015.12.02:1.0.0
>> 
>> log.0.AM.2015.12.01:1.0.0
>> 
>>
>> [2]
>> log4j:ERROR Could not instantiate class
>> [org.wso2.carbon.logging.service.appender.LogEventAppender].
>> java.lang.ClassNotFoundException:
>> org.wso2.carbon.logging.service.appender.LogEventAppender cannot be found
>> by org.wso2.carbon.logging_4.4.1
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:455)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>> at
>> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>> at java.lang.Class.forName0(Native Method)
>> at java.lang.Class.forName(Class.java:191)
>> at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
>> at
>> org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
>> at
>> org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
>> at
>> org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
>> at
>> org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
>> at
>> org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
>> at
>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
>> at
>> org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
>> at
>> org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
>> at org.apache.log4j.LogManager.(LogManager.java:127)
>> at
>> org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
>> at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
>> at 

Re: [Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Malmee Weerasinghe
@Akila: No, we haven't implemented that feature to convert cartridge
subscription artifacts to application sign-ups and domain mapping
subscriptions.

We still need to check the tool on an actual PPaaS 4.0.0 setup before locating
it in the repo, as we implemented it by mocking the endpoints. We will send a
PR once we are done.

Thanks






On Tue, Dec 1, 2015 at 8:54 PM, Imesh Gunaratne  wrote:

>
> On Tue, Dec 1, 2015 at 7:38 PM, Gayan Gunarathne  wrote:
>
>>
>> On Tue, Dec 1, 2015 at 6:46 PM, Akila Ravihansa Perera <
>> raviha...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> +1 for the proposed folder structure.
>>>
>>> @Gayan: Currently that tool only exports existing cartridge
>>> subscriptions and domain mappings. It doesn't do any conversion or
>>> migration (although it is called migration-tool);
>>>
>>
>> In that case, if we put this under tools/migration it will be misleading
>> for the end user.
>>
>
> Yes, that's why we need to rename it. +1 for calling it
> subscription-exporter.
>
> Thanks
>
>>
>>
>>> which is why it should be renamed to subscription-manager. I actually
>>> prefer the name "subscription-exporter". We had to create an external tool
>>> since regular API didn't have any methods to expose those information. We
>>> should include those API methods to the regular API in the new PPaaS
>>> release.
>>>
>>> @Malmee: I've restructured the folder structure in [1]. You can create a
>>> new folder named "artifact-converter" at tools/migration/ to host your
>>> tool. Please send a PR with those changes.
>>>
>>> On a side note; does your tool support converting cartridge subscription
>>> artifacts to application signups and domain mapping subscriptions?
>>>
>>> [1]
>>> https://github.com/wso2/product-private-paas/tree/master/tools/migration
>>>
>>> Thanks.
>>>
>>> On Tue, Dec 1, 2015 at 6:23 PM, Imesh Gunaratne  wrote:
>>>


 On Tue, Dec 1, 2015 at 5:23 PM, Gayan Gunarathne 
 wrote:

>
> On Tue, Dec 1, 2015 at 4:46 PM, Imesh Gunaratne 
> wrote:
>
>> Shall we have something like below:
>>
>> └── tools
>> └── migration
>> ├── artifact-converter
>> └── subcription-manager
>>
>
> I think it is a subscription-converter, not subscription-manager. It
> will convert the 4.0.0 subscriptions to 4.1.0. So shall we call it
> subscription-converter?
>

 AFAIK it does not convert subscriptions; it just exports them.

 Thanks

>
>> @Akila: We might need to rename the existing subscription management
>> tool.
>>
>> Thanks
>>
>> On Tue, Dec 1, 2015 at 3:37 PM, Malmee Weerasinghe 
>> wrote:
>>
>>> Hi Akila,
>>> We need to locate the Artifact Converter Tool which converts PPaaS
>>> 4.0.0 artifacts to PPaaS 4.1.0, in Product-Private-PaaS Repo.
>>>
>>> As the Artifact Converter Tool and the paas-migration/4.0.0 tool have
>>> quite similar functionality, can we create a new folder in 'tools', move
>>> the paas-migration/4.0.0 tool into it, and locate it together with the
>>> Artifact Converter Tool? Do you have any suggestions?
>>>
>>> Thank you
>>> --
>>> Malmee Weerasinghe
>>> WSO2 Intern
>>> mobile : (+94)* 71 7601905* |   email :   
>>> mal...@wso2.com
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.gunaratne.org
>> Lean . Enterprise . Middleware
>>
>>
>>
>>
>
>
> --
>
> Gayan Gunarathne
> Technical Lead, WSO2 Inc. (http://wso2.com)
> Committer & PMC Member, Apache Stratos
> email : gay...@wso2.com  | mobile : +94 775030545 <%2B94%20766819985>
>
>
>



 --
 *Imesh Gunaratne*
 Senior Technical Lead
 WSO2 Inc: http://wso2.com
 T: +94 11 214 5345 M: +94 77 374 2057
 W: http://imesh.gunaratne.org
 Lean . Enterprise . Middleware




>>>
>>>
>>> --
>>> Akila Ravihansa Perera
>>> WSO2 Inc.;  http://wso2.com/
>>>
>>> Blog: http://ravihansa3000.blogspot.com
>>>
>>
>>
>>
>> --
>>
>> Gayan Gunarathne
>> Technical Lead, WSO2 Inc. (http://wso2.com)
>> Committer & PMC Member, Apache Stratos
>> email : gay...@wso2.com  | mobile : +94 775030545 <%2B94%20766819985>
>>
>>
>>
>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.gunaratne.org
> Lean . Enterprise . Middleware
>
>
> 

Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Anjana Fernando
Hi Malith,

The functionality you're requesting is very specific, and from the DAS
side it doesn't make sense to implement it in a generic way that is not
usually needed. It is in any case not how the log analyzer should use it.
The different log sources will know their fields before they send out
data; the fields don't have to be checked every time an event is published.
A log source would first instruct the log analyzer backend API about the
new fields it will be sending; with that message the backend service will
set the global table's schema properly, and then the remote log agent will
send out log records to be processed by the server.

Cheers,
Anjana.

On Tue, Dec 1, 2015 at 6:44 PM, Malith Dhanushka  wrote:

> Hi Anjana,
>
> Yes. The requirement is for the internal log-related REST API which is being
> written using OSGi services. From the perspective of log analysis data, we
> have one master table to persist all the log events from different log
> sources. Log data comes in to the log REST API as arbitrary fields, so
> different log sources have different sets of arbitrary fields, which forces
> the log REST API to change the schema of the master table every time it
> receives log events from a new or updated log source. That's what I meant
> by inaccurate, and it can be solved in a much cleaner way by having a flag
> to index or not index arbitrary fields for a particular stream.
>
> Thanks,
> Malith
>
> On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando  wrote:
>
>> Hi Malith,
>>
>> No, it cannot be done like that. How the indexing and all happens is, it
>> looks up the table schema for a table and do the indexing according to
>> that. So the table schema must be set before hand. It is not a dynamic
>> thing that can be set, when arbitrary fields are sent to the receiver, and
>> it cannot always load the current schema and set it always for each event,
>> even though we can cache that information and do some operations, but that
>> gets complicated. So the idea is, it is the responsibility of the client to
>> set the target table's schema properly before hand, which may or may not
>> include arbitrary fields, and then send the data.
>>
>> Also, if this requirement is for the log analytics solution work, as
>> we've discussed before, there should be a whole new remote API for that,
>> and that API can do these operations inside the server, using the OSGi
>> services, and not the original DAS REST API. So those operations will
>> happen automatically while keeping the remote log related API clean.
>>
>> Cheers,
>> Anjana.
>>
>> On Tue, Dec 1, 2015 at 5:13 PM, Malith Dhanushka  wrote:
>>
>>> Hi Folks,
>>>
>>> Currently, indexing arbitrary fields is achieved by dynamically
>>> updating the analytics table schema through the analytics REST API. This
>>> is not an accurate solution for a frequently updated schema. So the ideal
>>> solution would be to have a flag in the data bridge event sink
>>> configuration to enable/disable indexing for all arbitrary fields.
>>>
>>> WDYT?
>>>
>>> Thanks,
>>> Malith
>>> --
>>> Malith Dhanushka
>>> Senior Software Engineer - Data Technologies
>>> *WSO2, Inc. : wso2.com *
>>> *Mobile*  : +94 716 506 693
>>>
>>
>>
>>
>> --
>> *Anjana Fernando*
>> Senior Technical Lead
>> WSO2 Inc. | http://wso2.com
>> lean . enterprise . middleware
>>
>
>
>
> --
> Malith Dhanushka
> Senior Software Engineer - Data Technologies
> *WSO2, Inc. : wso2.com *
> *Mobile*  : +94 716 506 693
>



-- 
*Anjana Fernando*
Senior Technical Lead
WSO2 Inc. | http://wso2.com
lean . enterprise . middleware


Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Sachith Withana
Hi All,

There might be a slight issue. We wouldn't know the arbitrary fields before
the log agent starts publishing, since the agent only publishes and we
don't have control over which fields will be sent (unless we configure
all the agents ourselves). So we would have to check, for each event, whether
there are new fields apart from those already in the schema. This is
undesirable.

And as Anjana pointed out, we don't have a way to specify indexing of all the
arbitrary values unless we set the schema accordingly.

Is it possible to specify in the schema to index everything?

On Wed, Dec 2, 2015 at 9:38 AM, Anjana Fernando  wrote:

> Hi Malith,
>
> The functionality which you're requesting is very specific, and from DAS
> side, it doesn't make sense to implement this in a generic way, which is
> not used usually. And it is anyway not the way, the log analyzer should use
> it. The different log sources, will know their fields before they send out
> data, it doesn't have to be checked every time an event is published. A log
> source would instruct the log analyzer backend API, the new fields, this
> specific log source will be sending, and with the earlier message, the
> backend service will set the global table's schema properly, and then the
> remote log agent will be sending out log records to be processed by the
> server.
>
> Cheers,
> Anjana.
>
> On Tue, Dec 1, 2015 at 6:44 PM, Malith Dhanushka  wrote:
>
>> Hi Anjana,
>>
>> Yes. Requirement is for the internal log related REST API which is being
>> written using osgi services. In the perspective of log analysis data, we
>> have one master table to persist all the log events from different log
>> sources. The way log data comes in to log REST API is as arbitrary fields.
>> So different log sources have different set of arbitrary fields which leads
>> log REST API to change the schema of master table every time it receives
>> log events from a new/updated log source. That's what i meant inaccurate
>> which can be solved much cleaner way by having that flag to index or not to
>> index arbitrary fields for a particular stream.
>>
>> Thanks,
>> Malith
>>
>> On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando  wrote:
>>
>>> Hi Malith,
>>>
>>> No, it cannot be done like that. How the indexing and all happens is, it
>>> looks up the table schema for a table and do the indexing according to
>>> that. So the table schema must be set before hand. It is not a dynamic
>>> thing that can be set, when arbitrary fields are sent to the receiver, and
>>> it cannot always load the current schema and set it always for each event,
>>> even though we can cache that information and do some operations, but that
>>> gets complicated. So the idea is, it is the responsibility of the client to
>>> set the target table's schema properly before hand, which may or may not
>>> include arbitrary fields, and then send the data.
>>>
>>> Also, if this requirement is for the log analytics solution work, as
>>> we've discussed before, there should be a whole new remote API for that,
>>> and that API can do these operations inside the server, using the OSGi
>>> services, and not the original DAS REST API. So those operations will
>>> happen automatically while keeping the remote log related API clean.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Tue, Dec 1, 2015 at 5:13 PM, Malith Dhanushka 
>>> wrote:
>>>
 Hi Folks,

 Currently indexing arbitrary fields is being achieved by dynamically
 updating analytics table schema through analytics REST API. This is not an
 accurate solution for a frequently updating schema. So the ideal solution
 would be to have a flag in data bridge event sink configuration to
 enable/disable indexing for all arbitrary fields.

 WDYT?

 Thanks,
 Malith
 --
 Malith Dhanushka
 Senior Software Engineer - Data Technologies
 *WSO2, Inc. : wso2.com *
 *Mobile*  : +94 716 506 693

>>>
>>>
>>>
>>> --
>>> *Anjana Fernando*
>>> Senior Technical Lead
>>> WSO2 Inc. | http://wso2.com
>>> lean . enterprise . middleware
>>>
>>
>>
>>
>> --
>> Malith Dhanushka
>> Senior Software Engineer - Data Technologies
>> *WSO2, Inc. : wso2.com *
>> *Mobile*  : +94 716 506 693
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 
Sachith Withana
Software Engineer; WSO2 Inc.; http://wso2.com
E-mail: sachith AT wso2.com
M: +94715518127
Linked-In: https://lk.linkedin.com/in/sachithwithana


Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Anjana Fernando
Hi Sachith,

Doesn't the agent have knowledge of the log types/categories and their
field information when it is initializing? As I understood it, we specify
which fields need to be sent out in the configuration; isn't that the case?

Cheers,
Anjana.

On Wed, Dec 2, 2015 at 10:01 AM, Sachith Withana  wrote:

> Hi All,
>
> There might be a slight issue. We wouldn't know the arbitrary fields
> before the log agent starts publishing, since the agent only publishes and
> we don't have control over which fields would be sent ( unless we configure
> all the agents ourselves). So we would have to check for each event, if
> there are new fields apart from that are there in the schema. This is
> undesirable.
>
> And as Anjana pointed out we don't have a way to specify to index all the
> arbitrary values unless we set the schema accordingly.
>
> Is it possible to specify in the schema to index everything?
>
> On Wed, Dec 2, 2015 at 9:38 AM, Anjana Fernando  wrote:
>
>> Hi Malith,
>>
>> The functionality which you're requesting is very specific, and from DAS
>> side, it doesn't make sense to implement this in a generic way, which is
>> not used usually. And it is anyway not the way, the log analyzer should use
>> it. The different log sources, will know their fields before they send out
>> data, it doesn't have to be checked every time an event is published. A log
>> source would instruct the log analyzer backend API, the new fields, this
>> specific log source will be sending, and with the earlier message, the
>> backend service will set the global table's schema properly, and then the
>> remote log agent will be sending out log records to be processed by the
>> server.
>>
>> Cheers,
>> Anjana.
>>
>> On Tue, Dec 1, 2015 at 6:44 PM, Malith Dhanushka  wrote:
>>
>>> Hi Anjana,
>>>
>>> Yes. Requirement is for the internal log related REST API which is being
>>> written using osgi services. In the perspective of log analysis data, we
>>> have one master table to persist all the log events from different log
>>> sources. The way log data comes in to log REST API is as arbitrary fields.
>>> So different log sources have different set of arbitrary fields which leads
>>> log REST API to change the schema of master table every time it receives
>>> log events from a new/updated log source. That's what i meant inaccurate
>>> which can be solved much cleaner way by having that flag to index or not to
>>> index arbitrary fields for a particular stream.
>>>
>>> Thanks,
>>> Malith
>>>
>>> On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando  wrote:
>>>
 Hi Malith,

 No, it cannot be done like that. How the indexing and all happens is,
 it looks up the table schema for a table and do the indexing according to
 that. So the table schema must be set before hand. It is not a dynamic
 thing that can be set, when arbitrary fields are sent to the receiver, and
 it cannot always load the current schema and set it always for each event,
 even though we can cache that information and do some operations, but that
 gets complicated. So the idea is, it is the responsibility of the client to
 set the target table's schema properly before hand, which may or may not
 include arbitrary fields, and then send the data.

 Also, if this requirement is for the log analytics solution work, as
 we've discussed before, there should be a whole new remote API for that,
 and that API can do these operations inside the server, using the OSGi
 services, and not the original DAS REST API. So those operations will
 happen automatically while keeping the remote log related API clean.

 Cheers,
 Anjana.

 On Tue, Dec 1, 2015 at 5:13 PM, Malith Dhanushka 
 wrote:

> Hi Folks,
>
> Currently indexing arbitrary fields is being achieved by dynamically
> updating analytics table schema through analytics REST API. This is not an
> accurate solution for a frequently updating schema. So the ideal solution
> would be to have a flag in data bridge event sink configuration to
> enable/disable indexing for all arbitrary fields.
>
> WDYT?
>
> Thanks,
> Malith
> --
> Malith Dhanushka
> Senior Software Engineer - Data Technologies
> *WSO2, Inc. : wso2.com *
> *Mobile*  : +94 716 506 693
>



 --
 *Anjana Fernando*
 Senior Technical Lead
 WSO2 Inc. | http://wso2.com
 lean . enterprise . middleware

>>>
>>>
>>>
>>> --
>>> Malith Dhanushka
>>> Senior Software Engineer - Data Technologies
>>> *WSO2, Inc. : wso2.com *
>>> *Mobile*  : +94 716 506 693
>>>
>>
>>
>>
>> --
>> *Anjana Fernando*
>> Senior Technical Lead
>> WSO2 Inc. | http://wso2.com
>> lean . enterprise . middleware
>>
>
>
>
> --
> Sachith Withana
> Software Engineer; WSO2 Inc.; 

Re: [Dev] US Election 2016 Tweet Analyze System

2015-12-01 Thread Nirmal Fernando
That's helpful, thanks. A few questions:

1. Is the community graph for the selected candidate?
2. Why is there only one tweet in "most popular tweets"?
3. Positive/negative is sentiment analysis, right, not semantic? What does
"test" mean in that table?
4. If this page is for a selected candidate, why do we have the hash tags of
each candidate?
5. The page is about Trump, but the prediction *percentage* is for Hillary
Clinton :-)

On Wed, Dec 2, 2015 at 10:27 AM, Dinali Dabarera  wrote:

> Hi,
>
> I hope this will give you a clear picture of GUI.
>
> In the community graph,
>
>
>
> On Wed, Dec 2, 2015 at 10:09 AM, Nirmal Fernando  wrote:
>
>> Hi Yasara,
>>
>> Please explain the UI (as the UI is at very early stages, it's not easy
>> to grasp stuff) :-)
>>
>> On Wed, Dec 2, 2015 at 9:47 AM, Yasara Dissanayake 
>> wrote:
>>
>>> Hi,
>>>
>>> This is the snap shots of the final Integration of  the website.
>>>
>>> Please leave your comments.
>>>
>>> regards.
>>>
>>> On Tue, Dec 1, 2015 at 1:34 PM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 +1 :)

 On Tue, Dec 1, 2015 at 1:10 PM, Srinath Perera 
 wrote:

> I might WFH. Shall we meet Thursday 11am?
>
> On Tue, Dec 1, 2015 at 12:20 PM, Yudhanjaya Wijeratne <
> yudhanj...@wso2.com> wrote:
>
>> Hi Srinath,
>>
>> +1 to all. I think sentiment analysis will take the form of an x-y
>> graph charting the ups and downs. Shall I come to Trace tomorrow morning?
>>
>> Thanks,
>> Yudha
>>
>> On Tue, Dec 1, 2015 at 11:21 AM, Srinath Perera 
>> wrote:
>>
>>> Hi Yudhanjaya,
>>>
>>> Yasara and Dinali have the basics for twitter graph and most
>>> important tweets in place. We need to design the story around this. ( I 
>>> am
>>> ccing Dakshika so we can get UX feedback from him).
>>>
>>> Dakshika, we are trying to build a website to analyze the US
>>> election data for twitter.
>>>
>>> IMO we have not figured out the story yet, although we have
>>> individual pieces. Following are my comments.
>>>
>>>
>>> 1. Looking at the Twitter graph, I feel showing the number of tweets
>>> each user made does not tell us anything useful.
>>> 2. I feel the Twitter graph should include all tweeps, not the Twitter
>>> graph for one candidate. (We can use color coding to show which tweeps
>>> belong to the focus candidate.)
>>> 3. I agree users want to know about one candidate. But I think we need
>>> to show the data in contrast. Shall we show each candidate's data in
>>> contrast to the first? (For the first we contrast with the second.)
>>>
>>> We also need to do the sentiment analysis piece and figure out where it
>>> fits in.
>>>
>>> When will you be at Trace? We should meet and discuss.
>>>
>>> Thanks
>>> Srinath
>>>
>>>
>>>
>>> On Fri, Nov 20, 2015 at 8:34 AM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 Srinath,

 +1. RT's would show influence best.



 On Fri, Nov 20, 2015 at 8:32 AM, Srinath Perera 
 wrote:

> Hi Yudhanjaya,
>
> On Thu, Nov 19, 2015 at 3:40 PM, Yudhanjaya Wijeratne <
> yudhanj...@wso2.com> wrote:
>
>> Hi Srinath,
>>
>> Regarding Dinali's graph, we had a chat and realized that using
>> the width of the edge makes the graph harder to read as smaller 
>> connections
>> are hidden. What if we did it this way:
>>
>> *Distance between nodes = 1 / RTs between nodes*
>>
>
> Actually we do not need to do anything. Force base layouts we use
> will put connected nodes closer and not connected nodes further.
>
>
>>
>> This will bring together accounts that often retweet a node's
>> content. Less enthusiastic retweeters are further and further out.
>>
>> *Redness = no of RT's done by a node*
>>
>> High RT accounts, like bots, will show up in stages of red
>>
> +1
>
>>
>> *Size of node = % of original tweets in sample space OR number of
>> RTs received by that node*
>>
>> Content creators and popular influencers are larger
>>
>
> I would say let's go with RT's received by that node. IMO that is
> the best measure of influence.
>
>
>>
>> Therefore we'll end up with a graph of large nodes (popular
>> influencers) surrounded closely by nodes that RT them a lot and at 
>> the
>> edges of this little community will be the nodes that don't RT them 
>> all
>> that often.
>>
>> What do you think?
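A tiny sketch of the mapping proposed above (size from RTs received, redness from RTs done, distance = 1 / RTs between nodes); the scaling constants and input shapes are illustrative assumptions, not part of the actual implementation:

```python
# Sketch: turn retweet statistics into force-layout visual attributes.
# Scaling constants and input shapes are illustrative assumptions.

def node_attributes(rts_received, rts_done, max_received, max_done):
    """Size scales with RTs received (influence); redness with RTs done."""
    size = 10 + 40 * (rts_received / max_received if max_received else 0)
    redness = int(255 * (rts_done / max_done if max_done else 0))
    return {"size": size, "color": (redness, 0, 0)}

def edge_distance(rts_between):
    """More RTs between two nodes pulls them closer (distance = 1 / RTs)."""
    return 1.0 / rts_between if rts_between else float("inf")
```

In d3 these attributes would feed the node radius/fill and the force layout's link distance, so heavy retweeters cluster tightly around the accounts they amplify.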

Re: [Dev] [ESB] Getting the error in IBM MQ custom inbound endpoint

2015-12-01 Thread Sajini De Silva
Hi Krishanthi,

As far as I know, you can use a .bindings file to connect to a remote IBM MQ
server from the ESB.
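For reference, the .bindings approach is wired up through the JMS transport in axis2.xml, roughly as below; the directory path, factory name, and JNDI name are placeholders for this sketch, not values from this thread:

```xml
<!-- Sketch: axis2.xml JMS transport receiver looking up an administered
     ConnectionFactory from a file-system JNDI context that contains the
     .bindings file generated by IBM's JMSAdmin tool. -->
<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
    <parameter name="myQueueConnectionFactory" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">com.sun.jndi.fscontext.RefFSContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">file:///home/user/jndidirectory</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">MyQueueConnectionFactory</parameter>
    </parameter>
</transportReceiver>
```

The .bindings file itself is created with JMSAdmin and placed in the directory that java.naming.provider.url points to, so the queue manager and the ESB resolve the same administered objects.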

Thank you,
Sajini

On Tue, Dec 1, 2015 at 11:04 AM, Kirishanthy Tharmalingam <
kirishan...@wso2.com> wrote:

> Hi,
>
> In the JMS transport of ESB 4.9.0, IBM MQ connects to a queue manager in
> bindings mode (using a .bindings file), so the application must run on the
> same system as the queue manager, and connection factories and
> destinations must be created and configured as administered objects in a
> JNDI namespace.
>
> In our case, however, we can connect to the queue manager in client mode,
> where the MQ application can run on the same system or on a different
> one, so the ESB can listen for messages from a different system. We can
> also create connection factories and destinations dynamically at run
> time, so we don't need an initial context factory for JMS administration.
>
>
> On Mon, Nov 30, 2015 at 12:19 PM, Malaka Silva  wrote:
>
>> Kirishanthy, I guess the two approaches you mentioned are covered in [1]
>> and [2].
>>
>> IMO [2] is designed to support a single consumer or producer and is not a
>> good fit for our use case.
>>
>> With [1], we had better analyse the use cases (usages) against the
>> default JMS transport / inbound of ESB 4.9.0.
>>
>> [1]
>> https://hursleyonwmq.wordpress.com/2007/05/29/simplest-sample-applications-using-websphere-mq-jms/
>> [2]
>> https://endrasenn.wordpress.com/2010/01/27/readwrite-to-ibm-mq-sample-java-code/
>>
>> On Mon, Nov 30, 2015 at 11:13 AM, Kirishanthy Tharmalingam <
>> kirishan...@wso2.com> wrote:
>>
>>> Hi Malaka,
>>>
>>> In the default ESB JMS transport, IBM MQ connects to a queue manager in
>>> bindings mode, but I connect to the queue manager in client mode, so we
>>> can run the application and the queue manager on the same system or on
>>> different systems [1].
>>>
>>> I also create connection factories and destinations dynamically at run
>>> time, instead of retrieving them from a JNDI namespace [2]. Otherwise it
>>> is the same thing.
>>>
>>> If we used the Message Queue Interface [3] (the native WebSphere MQ API),
>>> there is a problem with setting the MQEnvironment properties for the
>>> connection, since MQEnvironment holds them as static variables.
>>>
>>> Is there any other way to solve this issue?
>>>
>>> [1]
>>> https://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.dev.doc/q031720_.htm
>>>
>>> [2]
>>> http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.dev.doc/q032190_.htm
>>>
>>> [3]
>>> http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.dev.doc/q030520_.htm
>>>
>>> On Mon, Nov 30, 2015 at 9:51 AM, Malaka Silva  wrote:
>>>
 Kirishanthy, by default the ESB provides connectivity to the broker using
 the JMS transport (Axis2 and inbound).

 The code you demonstrated seems to do the same, AFAIK.

 If that is the case, is there any advantage in having a native
 inbound/connector for this? Please correct me if I am wrong here.

 [1]
 https://docs.wso2.com/display/ESB490/Configure+with+IBM+WebSphere+MQ
 [2]
 http://mrmalakasilva.blogspot.com/2013/10/connecting-mechanisms-other-than.html

 On Sun, Nov 29, 2015 at 7:05 PM, Rajjaz Mohammed 
 wrote:

> Hi Kirishanthy,
> The error code JMSCC0091 may appear because some property is missing [1];
> also check [2], since there is a chance that a different library gets
> loaded from the OSGi bundle because of the missing property value.
>
> [1]
> https://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.1.0/com.ibm.mq.javadoc.doc/WMQJMSClasses/errorcodes.html
> [2]
> http://stackoverflow.com/questions/23887147/unable-to-connect-to-websphere-mq-manager-using-xms
>
> On Sat, Nov 28, 2015 at 5:39 PM, Malaka Silva  wrote:
>
>> Seems like an OSGi issue. Somehow some classes are not getting loaded.
>> Try manually creating the OSGi bundles and check.
>>
>> Had a similar error when doing the EJB connector.
>>
>> On Sat, Nov 28, 2015 at 3:09 PM, Kirishanthy Tharmalingam <
>> kirishan...@wso2.com> wrote:
>>
>>>
>>> Hi All,
>>>
>>> I added all the jars mentioned in [1] to the ESB
>>> repository/components/lib directory. I got the error [2],
>>> 'com.ibm.msg.client.wmq' could not be loaded, when I created the custom
>>> inbound, but a simple Java client works fine with the same libraries.
>>> What could be the reason? Please advise me on how to solve this issue.
>>>
>>> [1]
>>> https://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.5.0/com.ibm.mq.dev.doc/q120070_.htm
>>>
>>> [2]
>>>
>>> [2015-11-28 13:34:21,669] ERROR - ibmMqConsumer JMSCC0091: The
>>> provider factory for connection type 'com.ibm.msg.client.wmq' could not 
>>> be
>>> loaded.
>>> [2015-11-28 13:34:21,669] ERROR - TaskQuartzJobAdapter Error in
>>> executing task: JMSCC0091: The provider 

Re: [Dev] US Election 2016 Tweet Analyze System

2015-12-01 Thread Dinali Dabarera
Sir,

I have not completed implementing the community graph of all the candidates.
I will finish it today and show you tomorrow at the meeting.

Regards!

On Wed, Dec 2, 2015 at 10:33 AM, Srinath Perera  wrote:

> Dinali, we want to show all tweeps in a one twitter graph with different
> colors given to different candidate as I mentioned in earlier mail. (
> instead of putting colors by number of tweets by each user).
>
>
>
>
>
>
>
>
>
> On Wed, Dec 2, 2015 at 10:27 AM, Dinali Dabarera  wrote:
>
>> Hi,
>>
>> I hope this will give you a clear picture of GUI.
>>
>> In the community graph,
>>
>>
>>
>> On Wed, Dec 2, 2015 at 10:09 AM, Nirmal Fernando  wrote:
>>
>>> Hi Yasara,
>>>
>>> Please explain the UI (as the UI is at very early stages, it's not easy
>>> to grasp stuff) :-)
>>>
>>> On Wed, Dec 2, 2015 at 9:47 AM, Yasara Dissanayake 
>>> wrote:
>>>
 Hi,

 This is the snap shots of the final Integration of  the website.

 Please leave your comments.

 regards.

 On Tue, Dec 1, 2015 at 1:34 PM, Yudhanjaya Wijeratne <
 yudhanj...@wso2.com> wrote:

> +1 :)
>
> On Tue, Dec 1, 2015 at 1:10 PM, Srinath Perera 
> wrote:
>
>> I might WFH. Shall we meet Thursday 11am?
>>
>> On Tue, Dec 1, 2015 at 12:20 PM, Yudhanjaya Wijeratne <
>> yudhanj...@wso2.com> wrote:
>>
>>> Hi Srinath,
>>>
>>> +1 to all. I think sentiment analysis will take the form of a x-y
>>> graph charting the ups and downs. Shall I come to Trace tomorrow 
>>> morning?
>>>
>>> Thanks,
>>> Yudha
>>>
>>> On Tue, Dec 1, 2015 at 11:21 AM, Srinath Perera 
>>> wrote:
>>>
 Hi Yudhanjaya,

 Yasara and Dinali have the basics for twitter graph and most
 important tweets in place. We need to design the story around this. ( 
 I am
 ccing Dakshika so we can get UX feedback from him).

 Dakshika, we are trying to build a website to analyze the US
 election data for twitter.

 IMO we have not figured out the story yet, although we have
 individual pieces. Following are my comments.


1. Looking at Twitter graph, I feel showing number of tweet
each user did does not tell anything useful.
2. I feel twitter graph should include all tweeps, not tweeter
graph for one candidate. ( we can do color coding to show what are 
 tweeps
for focus candidate)
3. I agree users want to know about one candidate. But I think
we need to show the data in contrast. Shall we show each 
 candidate's data
in contrast to the first. ( For first we contrast with second)

 We also need to do sentimental analysis one and figure out where it
 fit in.

 When you will be in Trace? We should meet and discuss.

 Thanks
 Srinath



 On Fri, Nov 20, 2015 at 8:34 AM, Yudhanjaya Wijeratne <
 yudhanj...@wso2.com> wrote:

> Srinath,
>
> +1. RT's would show influence best.
>
>
>
> On Fri, Nov 20, 2015 at 8:32 AM, Srinath Perera 
> wrote:
>
>> Hi Yudhanjaya,
>>
>> On Thu, Nov 19, 2015 at 3:40 PM, Yudhanjaya Wijeratne <
>> yudhanj...@wso2.com> wrote:
>>
>>> Hi Srinath,
>>>
>>> Regarding Dinali's graph, we had a chat and realized that using
>>> the width of the edge makes the graph harder to read as smaller 
>>> connections
>>> are hidden. What if we did it this way:
>>>
>>> *Distance between nodes = 1 / RTs between nodes*
>>>
>>
>> Actually we do not need to do anything. Force base layouts we use
>> will put connected nodes closer and not connected nodes further.
>>
>>
>>>
>>> This will bring together accounts that often retweet a node's
>>> content. Less enthusiastic retweeters are further and further out.
>>>
>>> *Redness = no of RT's done by a node*
>>>
>>> High RT accounts, like bots, will show up in stages of red
>>>
>> +1
>>
>>>
>>> *Size of node = % of original tweets in sample space OR number
>>> of RTs received by that node*
>>>
>>> Content creators and popular influencers are larger
>>>
>>
>> I would say let's go with RT's received by that node. IMO that is
>> the best measure of influence.
>>
>>
>>>
>>> Therefore we'll end up with a graph of large nodes (popular
>>> influencers) surrounded closely by 

Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Madhawa Gunasekara
Hi All,

I think we can get some information by uploading sample logs from the
agent. We can then analyze that sample log to find the exact fields that
can appear in the logs, and configure the agent according to the findings.
From the sample log file we can also identify rare and frequent log lines,
and so on. This feature is available in Splunk.
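As a rough sketch of that idea, profiling a sample to discover the fields and the rare vs. frequent line patterns (the key=value log format below is an invented example, not what the agent actually emits):

```python
from collections import Counter

def profile_sample(lines, rare_threshold=0.1):
    """Profile sample key=value log lines: collect the field names seen,
    and flag line 'patterns' (field-name sets) that occur rarely."""
    fields = set()
    patterns = Counter()
    for line in lines:
        pairs = dict(p.split("=", 1) for p in line.split() if "=" in p)
        fields.update(pairs)
        patterns[frozenset(pairs)] += 1  # field-name set as a crude signature
    total = sum(patterns.values()) or 1
    rare = [p for p, n in patterns.items() if n / total < rare_threshold]
    return fields, rare
```

An agent configuration could then be generated from `fields`, with the rare patterns surfaced for inspection, which is roughly what the Splunk feature mentioned above does.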

Thanks,
Madhawa

On Wed, Dec 2, 2015 at 10:17 AM, Sachith Withana  wrote:

> Now that we are using logstash out of the box, without the DASConnector,
> it won't do that.
>
> The logstash would just start publishing and with the current design,
> AFAIK the schema setting would be handled by the LAS server,
>
> BTW for that requirement, can we provide a way to allow indexing all the
> columns?
>
> On Wed, Dec 2, 2015 at 10:11 AM, Anjana Fernando  wrote:
>
>> Hi Sachith,
>>
>> Doesn't the agent have the knowledge of the log types/categories and
>> their field information when it is initializing? .. as in, as I understood,
>> we give what fields needs to be sent out in the configurations, isn't that
>> the case? ..
>>
>> Cheers,
>> Anjana.
>>
>> On Wed, Dec 2, 2015 at 10:01 AM, Sachith Withana 
>> wrote:
>>
>>> Hi All,
>>>
>>> There might be a slight issue. We wouldn't know the arbitrary fields
>>> before the log agent starts publishing, since the agent only publishes and
>>> we don't have control over which fields would be sent ( unless we configure
>>> all the agents ourselves). So we would have to check, for each event,
>>> whether there are new fields beyond those already in the schema. This is
>>> undesirable.
>>>
>>> And as Anjana pointed out we don't have a way to specify to index all
>>> the arbitrary values unless we set the schema accordingly.
>>>
>>> Is it possible to specify in the schema to index everything?
>>>
>>> On Wed, Dec 2, 2015 at 9:38 AM, Anjana Fernando  wrote:
>>>
 Hi Malith,

 The functionality which you're requesting is very specific, and from
 DAS side, it doesn't make sense to implement this in a generic way, which
 is not used usually. And it is anyway not the way, the log analyzer should
 use it. The different log sources, will know their fields before they send
 out data, it doesn't have to be checked every time an event is published. A
 log source would instruct the log analyzer backend API, the new fields,
 this specific log source will be sending, and with the earlier message, the
 backend service will set the global table's schema properly, and then the
 remote log agent will be sending out log records to be processed by the
 server.

 Cheers,
 Anjana.

 On Tue, Dec 1, 2015 at 6:44 PM, Malith Dhanushka 
 wrote:

> Hi Anjana,
>
> Yes. Requirement is for the internal log related REST API which is
> being written using osgi services. In the perspective of log analysis 
> data,
> we have one master table to persist all the log events from different log
> sources. The way log data comes in to log REST API is as arbitrary fields.
> So different log sources have different set of arbitrary fields which 
> leads
> log REST API to change the schema of master table every time it receives
> log events from a new/updated log source. That's what I meant by
> inaccurate; it can be solved in a much cleaner way by having a flag to
> index, or not index, arbitrary fields for a particular stream.
>
> Thanks,
> Malith
>
> On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando 
> wrote:
>
>> Hi Malith,
>>
>> No, it cannot be done like that. How the indexing and all happens is,
>> it looks up the table schema for a table and do the indexing according to
>> that. So the table schema must be set before hand. It is not a dynamic
>> thing that can be set, when arbitrary fields are sent to the receiver, 
>> and
>> it cannot always load the current schema and set it always for each 
>> event,
>> even though we can cache that information and do some operations, but 
>> that
>> gets complicated. So the idea is, it is the responsibility of the client 
>> to
>> set the target table's schema properly before hand, which may or may not
>> include arbitrary fields, and then send the data.
>>
>> Also, if this requirement is for the log analytics solution work, as
>> we've discussed before, there should be a whole new remote API for that,
>> and that API can do these operations inside the server, using the OSGi
>> services, and not the original DAS REST API. So those operations will
>> happen automatically while keeping the remote log related API clean.
>>
>> Cheers,
>> Anjana.
>>
>> On Tue, Dec 1, 2015 at 5:13 PM, Malith Dhanushka 
>> wrote:
>>
>>> Hi Folks,
>>>
>>> Currently 

Re: [Dev] US Election 2016 Tweet Analyze System

2015-12-01 Thread Srinath Perera
Dinali, we want to show all tweeps in one Twitter graph, with different
colors given to different candidates, as I mentioned in an earlier mail
(instead of coloring by the number of tweets from each user).









On Wed, Dec 2, 2015 at 10:27 AM, Dinali Dabarera  wrote:

> Hi,
>
> I hope this will give you a clear picture of GUI.
>
> In the community graph,
>
>
>
> On Wed, Dec 2, 2015 at 10:09 AM, Nirmal Fernando  wrote:
>
>> Hi Yasara,
>>
>> Please explain the UI (as the UI is at very early stages, it's not easy
>> to grasp stuff) :-)
>>
>> On Wed, Dec 2, 2015 at 9:47 AM, Yasara Dissanayake 
>> wrote:
>>
>>> Hi,
>>>
>>> This is the snap shots of the final Integration of  the website.
>>>
>>> Please leave your comments.
>>>
>>> regards.
>>>
>>> On Tue, Dec 1, 2015 at 1:34 PM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 +1 :)

 On Tue, Dec 1, 2015 at 1:10 PM, Srinath Perera 
 wrote:

> I might WFH. Shall we meet Thursday 11am?
>
> On Tue, Dec 1, 2015 at 12:20 PM, Yudhanjaya Wijeratne <
> yudhanj...@wso2.com> wrote:
>
>> Hi Srinath,
>>
>> +1 to all. I think sentiment analysis will take the form of a x-y
>> graph charting the ups and downs. Shall I come to Trace tomorrow morning?
>>
>> Thanks,
>> Yudha
>>
>> On Tue, Dec 1, 2015 at 11:21 AM, Srinath Perera 
>> wrote:
>>
>>> Hi Yudhanjaya,
>>>
>>> Yasara and Dinali have the basics for twitter graph and most
>>> important tweets in place. We need to design the story around this. ( I 
>>> am
>>> ccing Dakshika so we can get UX feedback from him).
>>>
>>> Dakshika, we are trying to build a website to analyze the US
>>> election data for twitter.
>>>
>>> IMO we have not figured out the story yet, although we have
>>> individual pieces. Following are my comments.
>>>
>>>
>>>1. Looking at Twitter graph, I feel showing number of tweet each
>>>user did does not tell anything useful.
>>>2. I feel twitter graph should include all tweeps, not tweeter
>>>graph for one candidate. ( we can do color coding to show what are 
>>> tweeps
>>>for focus candidate)
>>>3. I agree users want to know about one candidate. But I think
>>>we need to show the data in contrast. Shall we show each candidate's 
>>> data
>>>in contrast to the first. ( For first we contrast with second)
>>>
>>> We also need to do sentimental analysis one and figure out where it
>>> fit in.
>>>
>>> When you will be in Trace? We should meet and discuss.
>>>
>>> Thanks
>>> Srinath
>>>
>>>
>>>
>>> On Fri, Nov 20, 2015 at 8:34 AM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 Srinath,

 +1. RT's would show influence best.



 On Fri, Nov 20, 2015 at 8:32 AM, Srinath Perera 
 wrote:

> Hi Yudhanjaya,
>
> On Thu, Nov 19, 2015 at 3:40 PM, Yudhanjaya Wijeratne <
> yudhanj...@wso2.com> wrote:
>
>> Hi Srinath,
>>
>> Regarding Dinali's graph, we had a chat and realized that using
>> the width of the edge makes the graph harder to read as smaller 
>> connections
>> are hidden. What if we did it this way:
>>
>> *Distance between nodes = 1 / RTs between nodes*
>>
>
> Actually we do not need to do anything. Force base layouts we use
> will put connected nodes closer and not connected nodes further.
>
>
>>
>> This will bring together accounts that often retweet a node's
>> content. Less enthusiastic retweeters are further and further out.
>>
>> *Redness = no of RT's done by a node*
>>
>> High RT accounts, like bots, will show up in stages of red
>>
> +1
>
>>
>> *Size of node = % of original tweets in sample space OR number of
>> RTs received by that node*
>>
>> Content creators and popular influencers are larger
>>
>
> I would say let's go with RT's received by that node. IMO that is
> the best measure of influence.
>
>
>>
>> Therefore we'll end up with a graph of large nodes (popular
>> influencers) surrounded closely by nodes that RT them a lot and at 
>> the
>> edges of this little community will be the nodes that don't RT them 
>> all
>> that often.
>>
>> What do you think?
>>
>> Best,
>> Yudha
>>
>> On Thu, Nov 19, 2015 at 11:02 AM, Dinali Dabarera <
>> din...@wso2.com> wrote:
>>
>>> Yes I will do sir. 

[Dev] Publishing carbon logs to DAS

2015-12-01 Thread Anuruddha Liyanarachchi
Hi,

I am trying to publish Carbon logs to DAS and I am facing the following
problems.

*In Carbon 4.2.0 products (APIM 1.9.1):*
A new stream definition is created for each day [1], so I can't use a
common event receiver to persist the data.

*In Carbon 4.4.0 products (ESB 4.9.0):*
A class-not-found error [2] is thrown.

Is there a way to solve these issues?


[1]log.0.AM.2015.12.02:1.0.0

log.0.AM.2015.12.01:1.0.0


[2]
log4j:ERROR Could not instantiate class
[org.wso2.carbon.logging.service.appender.LogEventAppender].
java.lang.ClassNotFoundException:
org.wso2.carbon.logging.service.appender.LogEventAppender cannot be found
by org.wso2.carbon.logging_4.4.1
at
org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:455)
at
org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at
org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at
org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:191)
at org.apache.log4j.helpers.Loader.loadClass(Loader.java:198)
at
org.apache.log4j.helpers.OptionConverter.instantiateByClassName(OptionConverter.java:327)
at
org.apache.log4j.helpers.OptionConverter.instantiateByKey(OptionConverter.java:124)
at
org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:785)
at
org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
at
org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
at
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
at
org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
at
org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
at org.apache.log4j.LogManager.(LogManager.java:127)
at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
at com.atomikos.logging.Slf4jLogger.(Slf4jLogger.java:8)
at
com.atomikos.logging.Slf4JLoggerFactoryDelegate.createLogger(Slf4JLoggerFactoryDelegate.java:7)
at com.atomikos.logging.LoggerFactory.createLogger(LoggerFactory.java:12)
at com.atomikos.logging.LoggerFactory.(LoggerFactory.java:52)
at
com.atomikos.transactions.internal.AtomikosActivator.(AtomikosActivator.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:379)
at
org.eclipse.osgi.framework.internal.core.AbstractBundle.loadBundleActivator(AbstractBundle.java:167)
at
org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:679)
at
org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381)
at
org.eclipse.osgi.framework.internal.core.AbstractBundle.resume(AbstractBundle.java:390)
at
org.eclipse.osgi.framework.internal.core.Framework.resumeBundle(Framework.java:1176)
at
org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:559)
at
org.eclipse.osgi.framework.internal.core.StartLevelManager.resumeBundles(StartLevelManager.java:544)
at
org.eclipse.osgi.framework.internal.core.StartLevelManager.incFWSL(StartLevelManager.java:457)
at
org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:243)
at
org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:438)
at
org.eclipse.osgi.framework.internal.core.StartLevelManager.dispatchEvent(StartLevelManager.java:1)
at
org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at
org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
log4j:ERROR Could not instantiate appender named "LOGEVENT".
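On [1], one possible workaround (a sketch only, not a known fix) is to normalize the date-stamped stream IDs to a single logical stream before wiring the receiver, so all daily definitions map to one key. The ID shape below is inferred from the examples in [1]:

```python
import re

# Assumed ID shape, based on the examples in [1]: <base>.<YYYY>.<MM>.<DD>:<version>
STREAM_ID = re.compile(r"^(?P<base>.+)\.\d{4}\.\d{2}\.\d{2}:(?P<version>[\d.]+)$")

def logical_stream(stream_id):
    """Strip the date portion so every daily stream maps to one receiver key."""
    m = STREAM_ID.match(stream_id)
    if not m:
        return stream_id  # not date-stamped; leave untouched
    return "{0}:{1}".format(m.group("base"), m.group("version"))
```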

-- 
*Thanks and Regards,*
Anuruddha Lanka Liyanarachchi
Software Engineer - WSO2
Mobile : +94 (0) 712762611
Tel  : +94 112 145 345
anurudd...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] US Election 2016 Tweet Analyze System

2015-12-01 Thread Nirmal Fernando
Hi Yasara,

Please explain the UI (as the UI is at very early stages, it's not easy to
grasp stuff) :-)

On Wed, Dec 2, 2015 at 9:47 AM, Yasara Dissanayake  wrote:

> Hi,
>
> This is the snap shots of the final Integration of  the website.
>
> Please leave your comments.
>
> regards.
>
> On Tue, Dec 1, 2015 at 1:34 PM, Yudhanjaya Wijeratne 
> wrote:
>
>> +1 :)
>>
>> On Tue, Dec 1, 2015 at 1:10 PM, Srinath Perera  wrote:
>>
>>> I might WFH. Shall we meet Thursday 11am?
>>>
>>> On Tue, Dec 1, 2015 at 12:20 PM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 Hi Srinath,

 +1 to all. I think sentiment analysis will take the form of a x-y graph
 charting the ups and downs. Shall I come to Trace tomorrow morning?

 Thanks,
 Yudha

 On Tue, Dec 1, 2015 at 11:21 AM, Srinath Perera 
 wrote:

> Hi Yudhanjaya,
>
> Yasara and Dinali have the basics for twitter graph and most important
> tweets in place. We need to design the story around this. ( I am
> ccing Dakshika so we can get UX feedback from him).
>
> Dakshika, we are trying to build a website to analyze the US election
> data for twitter.
>
> IMO we have not figured out the story yet, although we have individual
> pieces. Following are my comments.
>
>
>1. Looking at Twitter graph, I feel showing number of tweet each
>user did does not tell anything useful.
>2. I feel twitter graph should include all tweeps, not tweeter
>graph for one candidate. ( we can do color coding to show what are 
> tweeps
>for focus candidate)
>3. I agree users want to know about one candidate. But I think we
>need to show the data in contrast. Shall we show each candidate's data 
> in
>contrast to the first. ( For first we contrast with second)
>
> We also need to do sentimental analysis one and figure out where it
> fit in.
>
> When you will be in Trace? We should meet and discuss.
>
> Thanks
> Srinath
>
>
>
> On Fri, Nov 20, 2015 at 8:34 AM, Yudhanjaya Wijeratne <
> yudhanj...@wso2.com> wrote:
>
>> Srinath,
>>
>> +1. RT's would show influence best.
>>
>>
>>
>> On Fri, Nov 20, 2015 at 8:32 AM, Srinath Perera 
>> wrote:
>>
>>> Hi Yudhanjaya,
>>>
>>> On Thu, Nov 19, 2015 at 3:40 PM, Yudhanjaya Wijeratne <
>>> yudhanj...@wso2.com> wrote:
>>>
 Hi Srinath,

 Regarding Dinali's graph, we had a chat and realized that using the
 width of the edge makes the graph harder to read as smaller 
 connections are
 hidden. What if we did it this way:

 *Distance between nodes = 1 / RTs between nodes*

>>>
>>> Actually we do not need to do anything. Force base layouts we use
>>> will put connected nodes closer and not connected nodes further.
>>>
>>>

 This will bring together accounts that often retweet a node's
 content. Less enthusiastic retweeters are further and further out.

 *Redness = no of RT's done by a node*

 High RT accounts, like bots, will show up in stages of red

>>> +1
>>>

 *Size of node = % of original tweets in sample space OR number of
 RTs received by that node*

 Content creators and popular influencers are larger

>>>
>>> I would say let's go with RT's received by that node. IMO that is
>>> the best measure of influence.
>>>
>>>

 Therefore we'll end up with a graph of large nodes (popular
 influencers) surrounded closely by nodes that RT them a lot and at the
 edges of this little community will be the nodes that don't RT them all
 that often.

 What do you think?

 Best,
 Yudha

 On Thu, Nov 19, 2015 at 11:02 AM, Dinali Dabarera 
 wrote:

> Yes I will do sir. But I am doing some research on d3 by adding
> more features and directing getting data from DAS. I hope to do this 
> as it
> will help in the future.
>
> On Thu, Nov 19, 2015 at 9:38 AM, Srinath Perera 
> wrote:
>
>> We must use d3 or a d3 based one, as that WSO2 platform uses.
>>
>> Sample I shared shows how to do it with d3.
>>
>> Thanks
>> Srinath
>>
>> On Wed, Nov 18, 2015 at 3:36 PM, Dinali Dabarera > > wrote:
>>
>>> Hi,
>>> I have created two tables on DAS which collects fresh data
>>> daily(more than 1000,000) and run a script which is scheduled 
>>> hourly to
>>> 

Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Sachith Withana
Now that we are using logstash out of the box, without the DASConnector, it
won't do that.

Logstash would just start publishing, and with the current design, AFAIK
the schema setting would be handled by the LAS server.

BTW for that requirement, can we provide a way to allow indexing all the
columns?
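For reference, the per-event check being discussed, and the index-all flag that would avoid per-event schema updates, can be sketched like this (a pure illustration of the logic, not the actual DAS API):

```python
def merge_arbitrary_fields(schema, event_fields, index_all=False):
    """Add unseen arbitrary fields to a table schema.

    schema maps column name -> {"type": ..., "indexed": bool}.
    Returns True if the schema changed, i.e. the case where a costly
    per-event schema update would be needed without an index-all flag.
    """
    changed = False
    for name, value in event_fields.items():
        if name not in schema:
            schema[name] = {"type": type(value).__name__, "indexed": index_all}
            changed = True
    return changed
```

With an index-all flag, the server could skip the per-event comparison entirely and index whatever arrives, which is the behaviour asked about above.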

On Wed, Dec 2, 2015 at 10:11 AM, Anjana Fernando  wrote:

> Hi Sachith,
>
> Doesn't the agent have the knowledge of the log types/categories and their
> field information when it is initializing? .. as in, as I understood, we
> give what fields needs to be sent out in the configurations, isn't that the
> case? ..
>
> Cheers,
> Anjana.
>
> On Wed, Dec 2, 2015 at 10:01 AM, Sachith Withana  wrote:
>
>> Hi All,
>>
>> There might be a slight issue. We wouldn't know the arbitrary fields
>> before the log agent starts publishing, since the agent only publishes and
>> we don't have control over which fields would be sent ( unless we configure
>> all the agents ourselves). So we would have to check for each event, if
>> there are new fields apart from that are there in the schema. This is
>> undesirable.
>>
>> And as Anjana pointed out we don't have a way to specify to index all the
>> arbitrary values unless we set the schema accordingly.
>>
>> Is it possible to specify in the schema to index everything?
>>
>> On Wed, Dec 2, 2015 at 9:38 AM, Anjana Fernando  wrote:
>>
>>> Hi Malith,
>>>
>>> The functionality which you're requesting is very specific, and from DAS
>>> side, it doesn't make sense to implement this in a generic way, which is
>>> not used usually. And it is anyway not the way, the log analyzer should use
>>> it. The different log sources, will know their fields before they send out
>>> data, it doesn't have to be checked every time an event is published. A log
>>> source would instruct the log analyzer backend API, the new fields, this
>>> specific log source will be sending, and with the earlier message, the
>>> backend service will set the global table's schema properly, and then the
>>> remote log agent will be sending out log records to be processed by the
>>> server.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Tue, Dec 1, 2015 at 6:44 PM, Malith Dhanushka 
>>> wrote:
>>>
 Hi Anjana,

 Yes. Requirement is for the internal log related REST API which is
 being written using osgi services. In the perspective of log analysis data,
 we have one master table to persist all the log events from different log
 sources. The way log data comes in to log REST API is as arbitrary fields.
 So different log sources have different set of arbitrary fields which leads
 log REST API to change the schema of master table every time it receives
 log events from a new/updated log source. That's what i meant inaccurate
 which can be solved much cleaner way by having that flag to index or not to
 index arbitrary fields for a particular stream.

 Thanks,
 Malith

 On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando 
 wrote:

> Hi Malith,
>
> No, it cannot be done like that. How the indexing and all happens is,
> it looks up the table schema for a table and do the indexing according to
> that. So the table schema must be set before hand. It is not a dynamic
> thing that can be set, when arbitrary fields are sent to the receiver, and
> it cannot always load the current schema and set it always for each event,
> even though we can cache that information and do some operations, but that
> gets complicated. So the idea is, it is the responsibility of the client 
> to
> set the target table's schema properly before hand, which may or may not
> include arbitrary fields, and then send the data.
>
> Also, if this requirement is for the log analytics solution work, as
> we've discussed before, there should be a whole new remote API for that,
> and that API can do these operations inside the server, using the OSGi
> services, and not the original DAS REST API. So those operations will
> happen automatically while keeping the remote log related API clean.
>
> Cheers,
> Anjana.
>
> On Tue, Dec 1, 2015 at 5:13 PM, Malith Dhanushka 
> wrote:
>
>> Hi Folks,
>>
>> Currently, indexing arbitrary fields is achieved by dynamically updating
>> the analytics table schema through the analytics REST API. This is not an
>> accurate solution for a frequently changing schema, so the ideal solution
>> would be to have a flag in the data bridge event sink configuration to
>> enable/disable indexing for all arbitrary fields.
>>
>> WDUT?
>>
>> Thanks,
>> Malith
>> --
>> Malith Dhanushka
>> Senior Software Engineer - Data Technologies
>> *WSO2, Inc. : wso2.com *
>> *Mobile*  : +94 716 506 693
>>

Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Anjana Fernando
On Wed, Dec 2, 2015 at 10:17 AM, Sachith Withana  wrote:

> Now that we are using logstash out of the box, without the DASConnector,
> it won't do that.
>
> The logstash would just start publishing and with the current design,
> AFAIK the schema setting would be handled by the LAS server,
>

Oh yeah, I see ..


>
> BTW for that requirement, can we provide a way to allow indexing all the
> columns?
>

Well .. we can .. I guess this is the same thing Malith requested in the
first mail. The only thing is, we would have to change the
internals/architecture of how we do indexing currently. The current logic
is: we check the input values against the table schema and do the required
indexing, e.g. whether facets are defined, data types etc. So if we just
say "index all fields", it will be a new path there, and we would also have
to introduce a new special flag for a table to say "index all". Also, we
would need some mechanism for figuring out the fields of a specific log
type in the server; at least with the table schema, we knew all the fields
that are there for all the log types. Ideally, we need to store some
metadata somewhere saying, for this specific log type, these are the
fields, and so on. Do we get some kind of log category/type information
with the standard logstash HTTP connector? .. Any other schema setting and
storing of metadata can be done on the server side, and we can cache it
in-memory for fast lookups and modifications of the schema (together with
some cluster messaging to keep it in sync with the other nodes).

Or else, maybe we are again back to writing our own logstash adapter which
will make the whole thing much simpler? ..
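To make the caching idea above concrete: a minimal sketch (hypothetical names, not the DAS codebase) of an in-memory per-log-type field cache, where the persisted table schema (and the other nodes, via cluster messaging) only need to be touched when a genuinely new field shows up:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch, not the DAS API: cache the fields already announced
// for each log type so the server only has to update the persisted table
// schema (and notify the other nodes) when an event carries an unseen field.
public class LogTypeSchemaCache {

    private final Map<String, Set<String>> fieldsByLogType = new ConcurrentHashMap<>();

    /** Returns true if the event introduced at least one unseen field,
     *  i.e. the real table schema must be updated. */
    public boolean registerFields(String logType, Set<String> eventFields) {
        Set<String> known = fieldsByLogType.computeIfAbsent(
                logType, k -> ConcurrentHashMap.newKeySet());
        boolean schemaChanged = false;
        for (String field : eventFields) {
            // Set.add() returns true only when the field was not present.
            schemaChanged |= known.add(field);
        }
        return schemaChanged;
    }
}
```

Under this sketch, the expensive schema write plus cluster message happens only on the first event of a new/updated log source, not per event.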

Cheers,
Anjana.


>
> On Wed, Dec 2, 2015 at 10:11 AM, Anjana Fernando  wrote:
>
>> Hi Sachith,
>>
>> Doesn't the agent have knowledge of the log types/categories and their
>> field information when it is initializing? .. as in, as I understood, we
>> specify which fields need to be sent out in the configurations, isn't that
>> the case? ..
>>
>> Cheers,
>> Anjana.
>>
>> On Wed, Dec 2, 2015 at 10:01 AM, Sachith Withana 
>> wrote:
>>
>>> Hi All,
>>>
>>> There might be a slight issue. We wouldn't know the arbitrary fields
>>> before the log agent starts publishing, since the agent only publishes and
>>> we don't have control over which fields would be sent (unless we configure
>>> all the agents ourselves). So we would have to check each event for new
>>> fields apart from those already in the schema. This is undesirable.
>>>
>>> And as Anjana pointed out we don't have a way to specify to index all
>>> the arbitrary values unless we set the schema accordingly.
>>>
>>> Is it possible to specify in the schema to index everything?
>>>
>>> On Wed, Dec 2, 2015 at 9:38 AM, Anjana Fernando  wrote:
>>>
 Hi Malith,

 The functionality you're requesting is very specific, and from the DAS
 side it doesn't make sense to implement it in a generic way that is not
 usually used. It is anyway not how the log analyzer should use it. The
 different log sources will know their fields before they send out data, so
 it doesn't have to be checked every time an event is published. A log
 source would first tell the log analyzer backend API which new fields it
 will be sending; with that earlier message, the backend service sets the
 global table's schema properly, and only then does the remote log agent
 start sending out log records to be processed by the server.

 Cheers,
 Anjana.

 On Tue, Dec 1, 2015 at 6:44 PM, Malith Dhanushka 
 wrote:

> Hi Anjana,
>
> Yes. The requirement is for the internal log-related REST API, which is
> being written using OSGi services. From the perspective of log analysis
> data, we have one master table to persist all the log events from the
> different log sources. Log data comes in to the log REST API as arbitrary
> fields, and different log sources have different sets of arbitrary fields,
> which forces the log REST API to change the schema of the master table
> every time it receives log events from a new/updated log source. That is
> what I meant by inaccurate, and it can be solved in a much cleaner way by
> having a flag to index or not index arbitrary fields for a particular
> stream.
>
> Thanks,
> Malith
>
> On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando 
> wrote:
>
>> Hi Malith,
>>
>> No, it cannot be done like that. How the indexing and all happens is,
>> it looks up the table schema for a table and do the indexing according to
>> that. So the table schema must be set before hand. It is not a dynamic
>> thing that can be set, when arbitrary fields are sent to the receiver, 
>> and
>> it cannot always load the current 

Re: [Dev] [BPS] [Cluster] Error while deploying BPEL & HT packages

2015-12-01 Thread Chamila Wijayarathna
Hi Hasitha,

I added the suggested solution at [1] to prevent the above error message,
but I am still getting the following error, which seems to be a part of the
previous error message.

TID: [-1234] [] [2015-12-02 11:01:04,552] ERROR
{org.apache.ode.bpel.compiler.bom.BpelObjectFactory$BOMSAXErrorHandler} -
 
null:file:///home/chamila/IS/packs/4114/2/wso2is-5.1.0-SNAPSHOT/repository/bpel/-1234/wf1-1/wf1.bpel:51:17:cvc-complex-type.2.4.a:
Invalid content was found starting with element 'extensions'. One of '{"
http://docs.oasis-open.org/wsbpel/2.0/process/executable":import, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":partnerLinks, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":messageExchanges,
"http://docs.oasis-open.org/wsbpel/2.0/process/executable":variables, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":correlationSets, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":faultHandlers, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":eventHandlers, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":assign, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":compensate, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":compensateScope, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":empty, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":exit, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":extensionActivity,
"http://docs.oasis-open.org/wsbpel/2.0/process/executable":flow, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":forEach, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":if, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":invoke, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":pick, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":receive, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":repeatUntil, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":reply, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":rethrow, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":scope, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":sequence, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":throw, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":validate, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":wait, "
http://docs.oasis-open.org/wsbpel/2.0/process/executable":while}' is
expected.
TID: [-1234] [] [2015-12-02 11:01:04,560] ERROR
{org.apache.ode.bpel.compiler.bom.BpelObjectFactory$BOMSAXErrorHandler} -
 
null:file:///home/chamila/IS/packs/4114/2/wso2is-5.1.0-SNAPSHOT/repository/bpel/-1234/wf1-1/wf1.bpel:304:84:cvc-elt.4.2:
Cannot resolve 'p:tExpression' to a type definition for element
'p:searchBy'.

Thank You!

1.
https://github.com/cdwijayarathna/carbon-identity/commit/b3d0b2b45b3d9f9f878af38dcc9650893c23632e
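As a side note on the fix discussed further down the thread (stripping the xml:space attributes that the BPEL compiler rejects), here is a minimal standalone sketch using a hypothetical helper that is not part of BPS or DevS:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper: remove every xml:space="..." attribute from a BPEL
// file, since the attribute is not defined in the BPEL schema and the
// compiler validates strictly against it.
public class StripXmlSpace {

    public static String strip(String xml) {
        // Drop the attribute together with its leading whitespace.
        return xml.replaceAll("\\s+xml:space=\"[^\"]*\"", "");
    }

    public static void main(String[] args) throws Exception {
        Path bpelFile = Paths.get(args[0]);
        String cleaned = strip(new String(Files.readAllBytes(bpelFile)));
        Files.write(bpelFile, cleaned.getBytes());
    }
}
```

This only addresses the xml:space warnings; the 'extensions' and 'p:tExpression' errors above come from the BPEL content itself and need a separate fix.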

On Tue, Nov 3, 2015 at 8:56 AM, Harsha Thirimanna  wrote:

> On Tue, Nov 3, 2015 at 8:38 AM, Hasitha Aravinda  wrote:
>
>> Hi Harsha,
>>
>>
>>
>> On Mon, Nov 2, 2015 at 11:46 PM, Harsha Thirimanna 
>> wrote:
>>
>>> Hi Vinod/Nandika,
>>> Can you please help resolve this BPEL package issue? You guys have
>>> always helped us fix these kinds of things :)
>>>
>>
>> Sure, we can guide you on this. It is a simple fix: remove all xml:space
>> attributes in the BPEL file. If you know XML, you can fix it in no time.
>>
> Thanks, Hasitha
>
>>
>> Btw, we need someone from the IS team to maintain these processes (like
>> what we do in APIM). Not knowing BPEL or XML is not an excuse, and it is
>> time to learn BPEL/BPMN ;)
>>
> Sorry for the misunderstanding; we didn't say we don't know. We will try
> to fix these ourselves, and if there are any issues, we will ask the BPS
> team.
>
>>
>>> @Hasitha
>>> If these elements and attributes are added by Dev Studio and are not
>>> expected by the BPEL compiler, IMO we should eliminate them in the
>>> editor, so it is not an issue on the IS side. Better to discuss with the
>>> relevant party and avoid this happening in future as well. Didn't you get
>>> this complaint before, when BPEL created using Dev Studio was deployed to
>>> BPS?
>>>
>>
>> As I mentioned earlier, that is not a bug in either BPS or DevS.
>>
> ​If this is not a bug, then no issue :)​
>
>
>> Those are added as placeholders to preserve white space temporarily,
>> because IDEs are not intelligent enough to predict what developers want to
>> do next, and it is an XML standard (read
>> http://www.xmlplease.com/xml/xmlspace/,
>> http://www.w3.org/TR/xml/#sec-white-space).
>>
>> But this attribute is not defined in the BPEL schema, and the BPEL
>> compiler strictly validates BPEL files against that schema. In BPS we
>> continue BPEL compilation even on those validation errors
>> (-Dorg.apache.ode.compiler.failOnValidationErrors=false) because most of
>> times, 

Re: [Dev] [EMM] Exception when installing ios p2-repository in EMM

2015-12-01 Thread Dilshan Edirisuriya
Hi Sashika,

Did this issue go away once you restarted? At installation time there was a
similar issue, but I believe we have already fixed that.

Regards,

Dilshan

On Tue, Dec 1, 2015 at 6:43 PM, Sashika Wijesinghe  wrote:

> Hi All,
>
> I want to configure iOS for MDM. I followed the steps below to configure
> iOS.
>
>- Configure the general server configurations as mentioned in doc [1]
>- Start the EMM server and add the ios-agent.ipa file to the
> '/repository/deployment/server/jaggeryapps/mdm/units/asset-download-agent-ios/public/asset'
>path
>- Install the p2 repository as mentioned in doc [2]
>
> [1] https://docs.wso2.com/display/EMM200/General+iOS+Server+Configurations
> [2] https://docs.wso2.com/display/EMM200/Installing+the+P2+Repository
>
> The below exception was observed in the terminal after installing the p2
> repository. May I know whether I missed any mandatory configurations?
>
> log4j:WARN No appenders could be found for logger
> (org.apache.cxf.common.logging.LogUtils).
> log4j:WARN Please initialize the log4j system properly.
> [2015-12-01 18:10:13,701] ERROR
> {org.apache.catalina.core.ApplicationContext} -  StandardWrapper.Throwable
> org.springframework.beans.factory.BeanCreationException: Error creating
> bean with name 'enrollmentService': Cannot resolve reference to bean
> 'enrollmentServiceBean' while setting bean property 'serviceBeans' with key
> [0]; nested exception is
> org.springframework.beans.factory.BeanCreationException: Error creating
> bean with name 'enrollmentServiceBean' defined in URL
> [jndi:/localhost/ios-enrollment/WEB-INF/cxf-servlet.xml]: Instantiation of
> bean failed; nested exception is java.lang.NoClassDefFoundError:
> org/wso2/carbon/device/mgt/ios/core/exception/IOSEnrollmentException
> at
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328)
> at
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106)
> at
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveManagedList(BeanDefinitionValueResolver.java:353)
> at
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:153)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1327)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1085)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:516)
> at
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:455)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
> at
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
> at
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192)
> at
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
> at
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
> at
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
> at
> org.apache.cxf.transport.servlet.CXFServlet.createSpringContext(CXFServlet.java:151)
> at org.apache.cxf.transport.servlet.CXFServlet.loadBus(CXFServlet.java:74)
> at
> org.apache.cxf.transport.servlet.CXFNonSpringServlet.init(CXFNonSpringServlet.java:76)
> at
> org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1284)
> at
> org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1197)
> at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1087)
> at
> org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5262)
> at
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5550)
> at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
> at
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
> at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
> at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
> at
> org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:290)
> at
> 

Re: [Dev] [ESB][Connector] Redmine connector: Not retrieving all projects

2015-12-01 Thread Sriashalya Srivathsan
Hi Malaka,
I uploaded the connector
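For reference, the limit/offset pagination advice quoted below can be sketched as a loop that keeps requesting pages until a short page comes back; the page-fetch function here is a hypothetical stand-in for the real HTTP call to /projects.json?limit=...&offset=...:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Sketch of Redmine-style pagination: fetch pages of `limit` items,
// advancing `offset` until a page comes back shorter than `limit`.
public class PaginatedFetch {

    public static <T> List<T> fetchAll(
            BiFunction<Integer, Integer, List<T>> fetchPage, int limit) {
        List<T> all = new ArrayList<>();
        int offset = 0;
        while (true) {
            List<T> page = fetchPage.apply(limit, offset);
            all.addAll(page);
            if (page.size() < limit) {
                break; // last page reached
            }
            offset += limit;
        }
        return all;
    }
}
```

This is why a single call with the default limit (25 in Redmine) appears to "miss" projects: the rest are simply on later pages.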

On Mon, Nov 30, 2015 at 4:44 PM, Sriashalya Srivathsan 
wrote:

> I'll re upload the connector, once the jenkins build is finished.
>
> On Mon, Nov 30, 2015 at 4:27 PM, Malaka Silva  wrote:
>
>> Need to re-upload the connector as well
>>
>> On Mon, Nov 30, 2015 at 10:30 AM, Shakila Sivagnanarajah <
>> shak...@wso2.com> wrote:
>>
>>> Merged
>>>
>>> On Mon, Nov 30, 2015 at 9:41 AM, Sriashalya Srivathsan >> > wrote:
>>>
 Hi Malaka,
 I'll upload the connector once she has fixed the issues Shakila raised.

 Thank you

 On Sat, Nov 28, 2015 at 7:06 PM, Shakila Sivagnanarajah <
 shak...@wso2.com> wrote:

> Hi Malaka,
>
> I added some comments and Lakmini is checking that.
>
> Thank you
>
> On Sat, Nov 28, 2015 at 5:29 PM, Malaka Silva  wrote:
>
>> Ashalya can you check and upload the connector on Monday pls.
>>
>> On Fri, Nov 27, 2015 at 4:24 PM, Lakmini Chathurika > > wrote:
>>
>>> Hi all,
>>>
>>> I fixed the above issue. Modified the Redmine Connector in ESB.
>>> I have created the JIRA[1] and add the PR[2].
>>>  Could you please review and merge this.
>>>
>>> [1].https://wso2.org/jira/browse/ESBCONNECT-74
>>> [2].https://github.com/wso2/esb-connectors/pull/424
>>>
>>> Thanks and Regards.
>>> Lakmini.
>>>
>>>
>>> On Fri, Nov 27, 2015 at 11:48 AM, Lakmini Chathurika <
>>> lakm...@wso2.com> wrote:
>>>
 Hi Shakila,

  It works.

 Thanks.
 Lakmini.


 On Fri, Nov 27, 2015 at 11:41 AM, Shakila Sivagnanarajah <
 shak...@wso2.com> wrote:

> Hi Lakmini,
>
> You can use 'limit' and 'offset' fields to list more projects.
>
> Try like this:
> https://redmine-upgrade.private.wso2.com/projects.json?limit=50&offset=2
>
> Thank you
>
> On Fri, Nov 27, 2015 at 11:35 AM, Lakmini Chathurika <
> lakm...@wso2.com> wrote:
>
>> Hi Shakila,
>>
>> "include: fetch associated data (optional). Possible values: trackers,
>> issue_categories, enabled_modules (since 2.6.0). Values should be
>> separated by a comma ','."
>>
>> When we use this, it gives only the additional details of the products;
>> the missing products are not present in the response.
>>
>> Thanks.
>> Lakmini.
>>
>> On Fri, Nov 27, 2015 at 11:18 AM, Shakila Sivagnanarajah <
>> shak...@wso2.com> wrote:
>>
>>> Hi Lakmini,
>>>
>>> What are the fields that you expect? There is an optional parameter
>>> named 'include' for this method; you have to mention the requested
>>> fields, separated by a comma ','.
>>> You can find it in [1].
>>> [1]
>>> http://www.redmine.org/projects/redmine/wiki/Rest_Projects#Listing-projects
>>>
>>> Thank you
>>>
>>> On Fri, Nov 27, 2015 at 10:34 AM, Sriashalya Srivathsan <
>>> asha...@wso2.com> wrote:
>>>
 Hi Lakmini,

 Have you got all the product details when you directly invoke it through a
 REST call?

 On Fri, Nov 27, 2015 at 10:26 AM, Lakmini Chathurika <
 lakm...@wso2.com> wrote:

> Hi all,
>
> I wrote a proxy service to get the product list of WSO2 from the Redmine
> REST API through the ESB Redmine Connector. My proxy service is as
> follows.
>
> <proxy xmlns="http://ws.apache.org/ns/synapse"
>        name="RedmineTest1"
>        transports="https,http"
>        statistics="disable"
>        trace="disable"
>        startOnLoad="true">
>    [mediator body stripped by the archive; the recoverable fragments set
>    the properties apiUrl=https://redmine-upgrade.private.wso2.com,
>    apiKey=x and responseType=json, and pass them to the connector as
>    {$ctx:apiUrl}, {$ctx:apiKey} and {$ctx:responseType}]
>  In the response, some product details are missing. But when looking
> from 

Re: [Dev] [ESB][Connector] Redmine connector: Not retrieving all projects

2015-12-01 Thread Malaka Silva
thx

On Tue, Dec 1, 2015 at 2:17 PM, Sriashalya Srivathsan 
wrote:

> Hi Malaka,
> I uploaded the connector
>
> On Mon, Nov 30, 2015 at 4:44 PM, Sriashalya Srivathsan 
> wrote:
>
>> I'll re upload the connector, once the jenkins build is finished.
>>
>> On Mon, Nov 30, 2015 at 4:27 PM, Malaka Silva  wrote:
>>
>>> Need to re-upload the connector as well
>>>
>>> On Mon, Nov 30, 2015 at 10:30 AM, Shakila Sivagnanarajah <
>>> shak...@wso2.com> wrote:
>>>
 Merged

 On Mon, Nov 30, 2015 at 9:41 AM, Sriashalya Srivathsan <
 asha...@wso2.com> wrote:

> Hi Malaka,
> I'll upload the connector once she has fixed the issues Shakila raised.
>
> Thank you
>
> On Sat, Nov 28, 2015 at 7:06 PM, Shakila Sivagnanarajah <
> shak...@wso2.com> wrote:
>
>> Hi Malaka,
>>
>> I added some comments and Lakmini is checking that.
>>
>> Thank you
>>
>> On Sat, Nov 28, 2015 at 5:29 PM, Malaka Silva 
>> wrote:
>>
>>> Ashalya can you check and upload the connector on Monday pls.
>>>
>>> On Fri, Nov 27, 2015 at 4:24 PM, Lakmini Chathurika <
>>> lakm...@wso2.com> wrote:
>>>
 Hi all,

 I fixed the above issue. Modified the Redmine Connector in ESB.
 I have created the JIRA[1] and add the PR[2].
  Could you please review and merge this.

 [1].https://wso2.org/jira/browse/ESBCONNECT-74
 [2].https://github.com/wso2/esb-connectors/pull/424

 Thanks and Regards.
 Lakmini.


 On Fri, Nov 27, 2015 at 11:48 AM, Lakmini Chathurika <
 lakm...@wso2.com> wrote:

> Hi Shakila,
>
>  It works.
>
> Thanks.
> Lakmini.
>
>
> On Fri, Nov 27, 2015 at 11:41 AM, Shakila Sivagnanarajah <
> shak...@wso2.com> wrote:
>
>> Hi Lakmini,
>>
>> You can use 'limit' and 'offset' fields to list more projects.
>>
>> Try like this:
>> https://redmine-upgrade.private.wso2.com/projects.json?limit=50&offset=2
>>
>> Thank you
>>
>> On Fri, Nov 27, 2015 at 11:35 AM, Lakmini Chathurika <
>> lakm...@wso2.com> wrote:
>>
>>> Hi Shakila,
>>>
>>> "include: fetch associated data (optional). Possible values: trackers,
>>> issue_categories, enabled_modules (since 2.6.0). Values should be
>>> separated by a comma ','."
>>>
>>> When we use this, it gives only the additional details of the products;
>>> the missing products are not present in the response.
>>>
>>> Thanks.
>>> Lakmini.
>>>
>>> On Fri, Nov 27, 2015 at 11:18 AM, Shakila Sivagnanarajah <
>>> shak...@wso2.com> wrote:
>>>
 Hi Lakmini,

 What are the fields that you expect? There is an optional parameter named
 'include' for this method; you have to mention the requested fields,
 separated by a comma ','.
 You can find it in [1].
 [1]
 http://www.redmine.org/projects/redmine/wiki/Rest_Projects#Listing-projects

 Thank you

 On Fri, Nov 27, 2015 at 10:34 AM, Sriashalya Srivathsan <
 asha...@wso2.com> wrote:

> Hi Lakmini,
>
> Have you got all the product details when you directly invoke it through a
> REST call?
>
> On Fri, Nov 27, 2015 at 10:26 AM, Lakmini Chathurika <
> lakm...@wso2.com> wrote:
>
>> Hi all,
>>
>> I wrote a proxy service to get the product list of WSO2 from the Redmine
>> REST API through the ESB Redmine Connector. My proxy service is as
>> follows.
>>
>> <proxy xmlns="http://ws.apache.org/ns/synapse"
>>        name="RedmineTest1"
>>        transports="https,http"
>>        statistics="disable"
>>        trace="disable"
>>        startOnLoad="true">
>>    [mediator body stripped by the archive; the recoverable fragments set
>>    the properties apiUrl=https://redmine-upgrade.private.wso2.com,
>>    apiKey=x and responseType, and pass them on as {$ctx:...} expressions]

Re: [Dev] Android EdgeAnalyticsService (mobile CEP) Restructuring

2015-12-01 Thread Janitha Samarasinghe
Hi,

I attempted to run Siddhi 3.0.4 on Android, but encountered the following
problems:

   1. Gradle build fails due to duplicate files in META-INF folder in the
   jars:
  1. siddhi-core-3.0.4-SNAPSHOT.jar
  2. siddhi-query-api-3.0.4-SNAPSHOT.jar
  3. siddhi-query-compiler-3.0.4-SNAPSHOT.jar
   - *By deleting the NOTICE, LICENSE and DEPENDENCIES files located in
  the META-INF folder in all but one of the jars, the build was a
  success.*
   2. Once built, the Android application cannot run any of the samples as
   it freezes while logging:
  - E/File: fail readDirectory() errno=13
  - The permissions to read and write from and to external storage have
  been given in AndroidManifest.xml as:
   <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
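For problem 1, an alternative to deleting the NOTICE/LICENSE/DEPENDENCIES files from the jars by hand is to exclude them at build time; a sketch assuming the standard Android Gradle plugin packagingOptions block:

```groovy
android {
    packagingOptions {
        // Drop the duplicated legal files so the APK merge step stops failing.
        exclude 'META-INF/NOTICE'
        exclude 'META-INF/LICENSE'
        exclude 'META-INF/DEPENDENCIES'
    }
}
```

This keeps the upstream Siddhi jars untouched, which makes upgrading to newer snapshots easier.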

On Fri, Nov 27, 2015 at 1:53 PM, Sriskandarajah Suhothayan 
wrote:

> Were you able to run Siddhi 3.0 on Android?
> Try that and escalate any issues ASAP.
>
> Suho
>
> On Fri, Nov 27, 2015 at 12:27 PM, Janitha Samarasinghe 
> wrote:
>
>> Hi all,
>>
>> The existing Android EdgeAnalyticsService is not implemented as an
>> Android system service. We are currently restructuring it to be usable
>> just like an Android system service. This is what getting the service on
>> a client would look like once completed:
>>
>>
>>1. public class MainActivity extends WSO2Activity {
>>2.
>>3. EdgeAnalyticsService edgeAnalyticsService;
>>4.
>>5. @Override
>>6. protected void onCreate(Bundle savedInstanceState) {
>>7. super.onCreate(savedInstanceState);
>>8. setContentView(R.layout.activity_main);
>>9.
>>10. edgeAnalyticsService = (EdgeAnalyticsService)
>> getWSO2Service(WSO2Context.EDGE_ANALYTICS_SERVICE);
>>11. }
>>12. ...
>>
>>
>> Currently we are creating interfaces to establish communication between
>> the service and the clients.
>>
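The interfaces mentioned above could take roughly the following shape (a hypothetical sketch with assumed names, not the actual WSO2 sources): clients register per-stream callbacks on the service, and published events are dispatched to them:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a service/client contract: a stream-id keyed
// callback registry, mirroring how Android system services expose callbacks.
public class EdgeAnalyticsSketch {

    public interface EventCallback {
        void onEvent(Object[] data);
    }

    private final Map<String, List<EventCallback>> subscribers = new HashMap<>();

    public void subscribe(String streamId, EventCallback callback) {
        subscribers.computeIfAbsent(streamId, k -> new ArrayList<>()).add(callback);
    }

    public void publish(String streamId, Object[] data) {
        // Dispatch only to callbacks registered for this stream.
        for (EventCallback cb : subscribers.getOrDefault(streamId, List.of())) {
            cb.onEvent(data);
        }
    }
}
```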
>> Thanks and Regards,
>> --
>> Janitha Samarasinghe
>> Intern Software Engineer
>> WSO2 Inc: http://wso2.com
>> phone: +94716517331
>>
>
>
>
> --
>
> *S. Suhothayan*
> Technical Lead & Team Lead of WSO2 Complex Event Processor
> *WSO2 Inc. *http://wso2.com
> lean . enterprise . middleware
>
>
> cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/ |
> twitter: http://twitter.com/suhothayan | linked-in:
> http://lk.linkedin.com/in/suhothayan
>



-- 
Janitha Samarasinghe
Intern Software Engineer
WSO2 Inc: http://wso2.com
phone: +94716517331
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] WSO2 Committers += Ruwan Abeykoon

2015-12-01 Thread Malintha Adikari
Congratulations Ruwan.

On Tue, Dec 1, 2015 at 1:30 PM, Dinusha Senanayaka  wrote:

> Hi All,
>
> It is my pleasure to welcome Ruwan Abeykoon as a WSO2 Committer.  Ruwan,
> congratulations and keep up the good work.
>
> Regards,
> Dinusha.
>
> --
> Dinusha Dilrukshi
> Associate Technical Lead
> WSO2 Inc.: http://wso2.com/
> Mobile: +94725255071
> Blog: http://dinushasblog.blogspot.com/
>
>
>


-- 
*Malintha Adikari*
Software Engineer
WSO2 Inc.; http://wso2.com
lean.enterprise.middleware

Mobile: +94 71 2312958
Blog:http://malinthas.blogspot.com
Page:   http://about.me/malintha


Re: [Dev] [Test Automation] Mocking Thrift server for an integration test

2015-12-01 Thread Thanuja Uruththirakodeeswaran
Hi Lasantha,

I'm using the ThriftTestServer [1] to check data publisher functionality
in a Java test class. I have set the TrustStore params by calling
DataPublisherTestUtil.setTrustStoreParams(); after setting the KeyStore
params in my local code.

After starting the ThriftTestServer instance, I'm trying to create a
publisher object, and while doing that I'm getting the below error:

[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift Server started at localhost
[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift SSL port : 7712
[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift port : 7612
[main] INFO
 org.apache.stratos.cloud.controller.statistics.publisher.ThriftTestServer
 - Test Server Started
[main] INFO  org.wso2.carbon.databridge.agent.thrift.AgentHolder  - Agent
created !
[main] INFO  org.apache.stratos.common.threading.StratosThreadPool  -
Thread pool created: [type] Executor Service [id]
cloud.controller.stats.publisher.thread.pool [size] 10
[pool-5-thread-1] ERROR
org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher  - Error while
connection to event receiver
org.wso2.carbon.databridge.agent.thrift.exception.AgentException: Cannot
borrow client for TCP,localhost:7613,TCP,localhost:7713
at
org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:58)
at
org.wso2.carbon.databridge.agent.thrift.DataPublisher.start(DataPublisher.java:273)
at
org.wso2.carbon.databridge.agent.thrift.DataPublisher.(DataPublisher.java:161)
at
org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher$ReceiverConnectionWorker.run(AsyncDataPublisher.java:787)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Could not
connect to 172.17.8.1 on port 7713
at
org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:212)
at
org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:166)
at
org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:90)
at
org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:48)
at
org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
at
org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:50)
... 8 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
at sun.security.ssl.SSLSocketImpl.(SSLSocketImpl.java:413)
at
sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
at
org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:208)
... 13 more


What is the reason for this? Could you please point out what I'm doing
wrong?

Thanks.

[1].
https://github.com/apache/stratos/blob/master/products/python-cartridge-agent/modules/integration/test-common/src/main/java/org/apache/stratos/python/cartridge/agent/integration/common/ThriftTestServer.java

On Tue, Sep 1, 2015 at 1:26 AM, Akila Ravihansa Perera 
wrote:

> Hi Lasantha,
>
> This is exactly what I needed. Had to struggle a bit to connect to the
> test server from a Python client but managed to do that after couple of
> tweaks. I faced an issue since we cannot define the cipher set to be used
> in ThriftTestServer. Therefore, ThriftDataReceiver will get initialized
> with default set of parameters for TSSLTransportParameters. I'd like to
> suggest that we provide a method to customize these SSL parameters.
>
> Thanks a lot for the prompt response. This was really helpful :)
>
> On Sun, Aug 30, 2015 at 3:20 PM, Lasantha Fernando 
> wrote:
>
>> Hi Akila,
>>
>> There is a ThriftTestServer we've written for tests in
>> carbon-analytics-common. You can find an example here [1]. Also you can
>> find other examples in the databridge-agent test cases. Can you go through
>> them and see if that fits your purpose?
>>
>> [1]
>> 

Re: [Dev] [Test Automation] Mocking Thrift server for an integration test

2015-12-01 Thread Lasantha Fernando
Hi Thanuja,

Looking at the logs above, it seems that the ThriftDataReceiver started on
port 7612, with SSL port 7712.

[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift SSL port : 7712
[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift port : 7612
[main] INFO
 org.apache.stratos.cloud.controller.statistics.publisher.ThriftTestServer
 - Test Server Started

But the agent is trying to connect to port 7613,7713.

[pool-5-thread-1] ERROR
org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher  - Error while
connection to event receiver
org.wso2.carbon.databridge.agent.thrift.exception.AgentException: Cannot
borrow client for TCP,localhost:7613,TCP,localhost:7713

Can you go through the code and verify that the agent is sending to the
same port on which the server is started? It is probably a minor issue in
setting the offsets.
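The offset mistake above can be made impossible by construction. A minimal sketch (class, constant and method names are mine, not the databridge API; 7611/7711 are the usual thrift receiver defaults) that derives both the receiver's and the agent's ports from one shared offset:

```java
// Sketch: one shared offset drives both the receiver start-up ports and the
// agent's receiver/auth URLs, so the two sides cannot drift apart.
public class PortConfig {
    static final int BASE_TCP_PORT = 7611; // assumed databridge default
    static final int BASE_SSL_PORT = 7711; // assumed databridge default

    final int offset;

    PortConfig(int offset) {
        this.offset = offset;
    }

    int tcpPort() { return BASE_TCP_PORT + offset; }
    int sslPort() { return BASE_SSL_PORT + offset; }

    String receiverUrl() { return "tcp://localhost:" + tcpPort(); }
    String authUrl()     { return "ssl://localhost:" + sslPort(); }

    public static void main(String[] args) {
        PortConfig ports = new PortConfig(1); // offset 1 -> 7612 / 7712
        // Start the test server and build the publisher from the SAME object:
        System.out.println(ports.receiverUrl()); // tcp://localhost:7612
        System.out.println(ports.authUrl());     // ssl://localhost:7712
    }
}
```

If both the ThriftTestServer and the DataPublisher URLs are fed from a single instance like this, the mismatch seen in the log cannot occur.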

Thanks,
Lasantha


On 1 December 2015 at 15:09, Thanuja Uruththirakodeeswaran <
thanu...@wso2.com> wrote:

> Hi Lasantha,
>
>> I'm using the ThriftTestServer [1] to check data publisher functionality
>> in a Java test class. I have set the TrustStore param by
>> DataPublisherTestUtil.setTrustStoreParams(); after setting the KeyStore param
>> in my local code.
>
>> After starting the ThriftTestServer instance, I'm trying to create a
> publisher object and while doing that I'm getting the below error:
>
> [main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
>  - Thrift Server started at localhost
> [main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
>  - Thrift SSL port : 7712
> [main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
>  - Thrift port : 7612
> [main] INFO
>  org.apache.stratos.cloud.controller.statistics.publisher.ThriftTestServer
>  - Test Server Started
> [main] INFO  org.wso2.carbon.databridge.agent.thrift.AgentHolder  - Agent
> created !
> [main] INFO  org.apache.stratos.common.threading.StratosThreadPool  -
> Thread pool created: [type] Executor Service [id]
> cloud.controller.stats.publisher.thread.pool [size] 10
> [pool-5-thread-1] ERROR
> org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher  - Error while
> connection to event receiver
> org.wso2.carbon.databridge.agent.thrift.exception.AgentException: Cannot
> borrow client for TCP,localhost:7613,TCP,localhost:7713
> at
> org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:58)
> at
> org.wso2.carbon.databridge.agent.thrift.DataPublisher.start(DataPublisher.java:273)
> at
> org.wso2.carbon.databridge.agent.thrift.DataPublisher.(DataPublisher.java:161)
> at
> org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher$ReceiverConnectionWorker.run(AsyncDataPublisher.java:787)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.thrift.transport.TTransportException: Could not
> connect to 172.17.8.1 on port 7713
> at
> org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:212)
> at
> org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:166)
> at
> org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:90)
> at
> org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:48)
> at
> org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
> at
> org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:50)
> ... 8 more
> Caused by: java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
> at sun.security.ssl.SSLSocketImpl.(SSLSocketImpl.java:413)
> at
> sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
> at
> org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:208)
> ... 13 more
>
>
> What is the reason for this? Could you please point what I'm doing wrong.
>
> Thanks.
>
> [1].
> 

Re: [Dev] [Test Automation] Mocking Thrift server for an integration test

2015-12-01 Thread Thanuja Uruththirakodeeswaran
Hi Lasantha,

Sorry, I tried this with different ports, and when changing the client port
to 7613 I forgot to change the server port as well. I've corrected it and
attached the new log.

[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift Server started at localhost
[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift SSL port : 7713
[main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
 - Thrift port : 7613
[main] INFO
 org.apache.stratos.cloud.controller.statistics.publisher.ThriftTestServer
 - Test Server Started
[main] INFO  org.wso2.carbon.databridge.agent.thrift.AgentHolder  - Agent
created !
[main] INFO  org.apache.stratos.common.threading.StratosThreadPool  -
Thread pool created: [type] Executor Service [id]
cloud.controller.stats.publisher.thread.pool [size] 10
[pool-5-thread-1] ERROR
org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher  - Error while
connection to event receiver
org.wso2.carbon.databridge.agent.thrift.exception.AgentException: Cannot
borrow client for TCP,localhost:7613,TCP,localhost:7713
at
org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:58)
at
org.wso2.carbon.databridge.agent.thrift.DataPublisher.start(DataPublisher.java:273)
at
org.wso2.carbon.databridge.agent.thrift.DataPublisher.(DataPublisher.java:161)
at
org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher$ReceiverConnectionWorker.run(AsyncDataPublisher.java:787)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Could not
connect to 172.17.8.1 on port 7713
at
org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:212)
at
org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:166)
at
org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:90)
at
org.wso2.carbon.databridge.agent.thrift.internal.pool.client.secure.SecureClientPoolFactory.makeObject(SecureClientPoolFactory.java:48)
at
org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
at
org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:50)
... 8 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
at sun.security.ssl.SSLSocketImpl.(SSLSocketImpl.java:413)
at
sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
at
org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:208)
... 13 more

Thanks.

On Tue, Dec 1, 2015 at 3:14 PM, Lasantha Fernando  wrote:

> Hi Thanuja,
>
> Looking at the logs above, it seems that the ThriftDataReceiver started on
> port 7612, with SSL port 7712.
>
> [main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
>  - Thrift SSL port : 7712
> [main] INFO  org.wso2.carbon.databridge.receiver.thrift.ThriftDataReceiver
>  - Thrift port : 7612
> [main] INFO
>  org.apache.stratos.cloud.controller.statistics.publisher.ThriftTestServer
>  - Test Server Started
>
> But the agent is trying to connect to port 7613,7713.
>
> [pool-5-thread-1] ERROR
> org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher  - Error while
> connection to event receiver
> org.wso2.carbon.databridge.agent.thrift.exception.AgentException: Cannot
> borrow client for TCP,localhost:7613,TCP,localhost:7713
>
> Can you go through the code and verify that the agent is sending to the
> same port on which the server is started? It is probably a minor issue in
> setting the offsets.
>
> Thanks,
> Lasantha
>
>
> On 1 December 2015 at 15:09, Thanuja Uruththirakodeeswaran <
> thanu...@wso2.com> wrote:
>
>> Hi Lasantha,
>>
>> I'm using the ThriftTestServer [1] to check data publisher functionality
>> in a Java test class. I have set the TrustStore param by
>> DataPublisherTestUtil.setTrustStoreParams(); after setting the KeyStore
>> param in my local code.
>>
>> After starting the ThriftTestServer instance, I'm trying to create a
>> publisher object and while 

[Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Malmee Weerasinghe
Hi Akila,
We need to locate the Artifact Converter Tool which converts PPaaS 4.0.0
artifacts to PPaaS 4.1.0, in Product-Private-PaaS Repo.

As Artifact Converter Tool and paas-migration/4.0.0 tool have quite similar
functionality, can we create a new folder in 'tools' and move
paas-migration/4.0.0 tool to it and locate together with Artifact Converter
Tool. Do you have any suggestions?

Thank you
-- 
Malmee Weerasinghe
WSO2 Intern
mobile : (+94) 71 7601905 | email : mal...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] WSO2 Committers += Ruwan Abeykoon

2015-12-01 Thread Dinusha Senanayaka
Hi All,

It is my pleasure to welcome Ruwan Abeykoon as a WSO2 Committer. Ruwan,
congratulations and keep up the good work.

Regards,
Dinusha.

-- 
Dinusha Dilrukshi
Associate Technical Lead
WSO2 Inc.: http://wso2.com/
Mobile: +94725255071
Blog: http://dinushasblog.blogspot.com/


Re: [Dev] Cleaning up ML REST API

2015-12-01 Thread Thamali Wijewardhana
On Thu, Nov 26, 2015 at 7:40 PM, Frank Leymann  wrote:

> Dear all, sorry for the delay  :-(
>
> What about one of the following time slots:
>
> Tuesday, Dec 1, 4pm Colombo Time
> Wednesday, Dec 2, 4pm Colombo Time
> Friday, Dec 4, 4pm Colombo Time
>
> I will be available later than 4pm but this won't be convenient for you in
> Colombo.
>
>
>
> Best regards,
> Frank
>
> 2015-11-25 8:43 GMT+01:00 Nirmal Fernando :
>
>> Hi Frank,
>>
>> Could you please let us know few time slots?
>>
>> On Mon, Nov 23, 2015 at 9:29 AM, Nirmal Fernando  wrote:
>>
>>> Absolutely. We'll wait till Frank confirms a time. Thanks.
>>>
>>> On Sun, Nov 22, 2015 at 10:18 PM, Sanjeewa Malalgoda 
>>> wrote:
>>>
 Hi Nirmal,
 Please invite the APIM REST API team as well. We would like to join this
 discussion.

 Thanks
 sanjeewa.

 sent from my phone
 On Nov 22, 2015 7:00 PM, "Nirmal Fernando"  wrote:

> Thanks Frank for the response. +1 for having a call. Could you please
> propose few time slots?
>
> On Sun, Nov 22, 2015 at 6:55 PM, Frank Leymann  wrote:
>
>> Dear Thamali,
>>
>> we (APIM, ES Publisher,... teams) developed some guidelines on making
>> all of our APIs more consistent. For example, versioning (major, minor,
>> patch) as part of the URL context etc.  Also, you are not using PUT but
>> always POST - this has some implications a bunch of REST-folks are 
>> serious
>> about. Similarly, the use of proper HTTP headers is a REST issue to 
>> reduce
>> the amount of data transferred, to avoid potential concurrency problems
>> etc.
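For illustration only (the resource path, version, and ETag value here are invented, not taken from the ML API), the guidelines above, versioned URLs, PUT for updates, and concurrency-aware headers, combine into a request like:

```http
PUT /mlapi/v1.0.0/models/42 HTTP/1.1
Host: localhost:9443
Content-Type: application/json
If-Match: "686897696a7c876b7e"

{"name": "updated-model"}
```

A PUT with If-Match lets the server reject a stale update with 412 Precondition Failed instead of silently overwriting concurrent changes.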
>>
>> Should we have a call to discuss the API and check where we can help?
>>
>>
>>
>> Best regards,
>> Frank
>>
>> 2015-11-18 12:10 GMT+01:00 Nirmal Fernando :
>>
>>> Thanks Thamali! Please try to generate the Swagger definition for ML
>>> API as the next step.
>>>
>>> On Wed, Nov 18, 2015 at 12:21 PM, Thamali Wijewardhana <
>>> tham...@wso2.com> wrote:
>>>
 REST API standards define the way to produce a RESTful API. For an
 API to become a RESTful API, it should conform to those REST
 standards. This document includes a set of improvements to make the
 WSO2 API a RESTful API.



 https://docs.google.com/spreadsheets/d/1HYiS-TpqYaZTtBLLSIeYZ_nvZkbt7zAFwetnHLe4vg8/edit#gid=0



>>>
>>>
>>> --
>>>
>>> Thanks & regards,
>>> Nirmal
>>>
>>> Team Lead - WSO2 Machine Learner
>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>> Mobile: +94715779733
>>> Blog: http://nirmalfdo.blogspot.com/
>>>
>>>
>>>
>>
>
>
> --
>
> Thanks & regards,
> Nirmal
>
> Team Lead - WSO2 Machine Learner
> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
> Mobile: +94715779733
> Blog: http://nirmalfdo.blogspot.com/
>
>
>
>>>
>>>
>>> --
>>>
>>> Thanks & regards,
>>> Nirmal
>>>
>>> Team Lead - WSO2 Machine Learner
>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>> Mobile: +94715779733
>>> Blog: http://nirmalfdo.blogspot.com/
>>>
>>>
>>>
>>
>>
>> --
>>
>> Thanks & regards,
>> Nirmal
>>
>> Team Lead - WSO2 Machine Learner
>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>> Mobile: +94715779733
>> Blog: http://nirmalfdo.blogspot.com/
>>
>>
>>
>


swagger.yml
Description: application/yaml


Re: [Dev] Android EdgeAnalyticsService (mobile CEP) Restructuring

2015-12-01 Thread Sriskandarajah Suhothayan
Can you share the project GitHub link? I'll try to simulate this.

Suho

On Tue, Dec 1, 2015 at 2:41 PM, Janitha Samarasinghe 
wrote:

> Hi,
>
> I attempted to run Siddhi 3.0.4 on Android, but encountered the following
> problems:
>
>1. Gradle build fails due to duplicate files in META-INF folder in the
>jars:
>   1. siddhi-core-3.0.4-SNAPSHOT.jar
>   2. siddhi-query-api-3.0.4-SNAPSHOT.jar
>   3. siddhi-query-compiler-3.0.4-SNAPSHOT.jar
>- *By deleting the NOTICE, LICENSE and DEPENDENCIES files
>   located in the META-INF folder in all but one of the jars, the build
>   was a success.*
>2. Once built, the Android application cannot run any of the samples
>as it freezes while logging:
>   - E/File: fail readDirectory() errno=13
>   - The permissions to read and write from and to external storage
>   have been given in AndroidManifest.xml as:
>
>   <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
>
>   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
>
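As an alternative to deleting NOTICE/LICENSE/DEPENDENCIES from the jars by hand (item 1 above), a common workaround in a standard Android Gradle build is to exclude the duplicates at packaging time; a sketch, assuming the usual Android plugin DSL:

```groovy
android {
    packagingOptions {
        // Drop the duplicated metadata files contributed by the
        // siddhi-core / siddhi-query-api / siddhi-query-compiler jars.
        exclude 'META-INF/NOTICE'
        exclude 'META-INF/LICENSE'
        exclude 'META-INF/DEPENDENCIES'
    }
}
```

This keeps the jars intact, so upgrading to a newer Siddhi snapshot does not require repeating the manual edit.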
>
>
>
> On Fri, Nov 27, 2015 at 1:53 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> Were you able to run Siddhi 3.0 on Android?
>> Try that and escalate any issues ASAP.
>>
>> Suho
>>
>> On Fri, Nov 27, 2015 at 12:27 PM, Janitha Samarasinghe 
>> wrote:
>>
>>> Hi all,
>>>
>>> The existing Android EdgeAnalyticsService is not structured as an Android
>>> system service. We are currently restructuring it so that it can be used
>>> just like an Android system service. This is what obtaining the service
>>> from a client would look like once completed:
>>>
>>>
>>> public class MainActivity extends WSO2Activity {
>>>
>>>     EdgeAnalyticsService edgeAnalyticsService;
>>>
>>>     @Override
>>>     protected void onCreate(Bundle savedInstanceState) {
>>>         super.onCreate(savedInstanceState);
>>>         setContentView(R.layout.activity_main);
>>>
>>>         edgeAnalyticsService = (EdgeAnalyticsService)
>>>                 getWSO2Service(WSO2Context.EDGE_ANALYTICS_SERVICE);
>>>     }
>>>     ...
>>>
>>>
>>> Currently we are creating interfaces in order to establish communication
>>> between the service and its clients.
>>>
>>> Thanks and Regards,
>>> --
>>> Janitha Samarasinghe
>>> Intern Software Engineer
>>> WSO2 Inc: http://wso2.com
>>> phone: +94716517331
>>>
>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>> WSO2 Inc. http://wso2.com
>> lean . enterprise . middleware
>>
>>
>> cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
>> twitter: http://twitter.com/suhothayan | linked-in:
>> http://lk.linkedin.com/in/suhothayan
>>
>
>
>
> --
> Janitha Samarasinghe
> Intern Software Engineer
> WSO2 Inc: http://wso2.com
> phone: +94716517331
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
WSO2 Inc. http://wso2.com
lean . enterprise . middleware

cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan | linked-in:
http://lk.linkedin.com/in/suhothayan


Re: [Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Imesh Gunaratne
On Tue, Dec 1, 2015 at 7:38 PM, Gayan Gunarathne  wrote:

>
> On Tue, Dec 1, 2015 at 6:46 PM, Akila Ravihansa Perera  > wrote:
>
>> Hi,
>>
>> +1 for the proposed folder structure.
>>
>> @Gayan: Currently that tool only exports existing cartridge subscriptions
>> and domain mappings. It doesn't do any conversion or migration (although it
>> is called migration-tool);
>>
>
> In that case, if we put this under tools/migration it will be misleading
> for the end user.
>

Yes, that's why we need to rename it. +1 for calling it
subscription-exporter.

Thanks

>
>
>> which is why it should be renamed to subscription-manager. I actually
>> prefer the name "subscription-exporter". We had to create an external tool
>> since the regular API didn't have any methods to expose that information. We
>> should add those API methods to the regular API in the new PPaaS
>> release.
>>
>> @Malmee: I've restructured the folder structure in [1]. You can create a
>> new folder named "artifact-converter" at tools/migration/ to host your
>> tool. Please send a PR with those changes.
>>
>> On a side note; does your tool support converting cartridge subscription
>> artifacts to application signups and domain mapping subscriptions?
>>
>> [1]
>> https://github.com/wso2/product-private-paas/tree/master/tools/migration
>>
>> Thanks.
>>
>> On Tue, Dec 1, 2015 at 6:23 PM, Imesh Gunaratne  wrote:
>>
>>>
>>>
>>> On Tue, Dec 1, 2015 at 5:23 PM, Gayan Gunarathne 
>>> wrote:
>>>

 On Tue, Dec 1, 2015 at 4:46 PM, Imesh Gunaratne  wrote:

> Shall we have something like below:
>
> └── tools
> └── migration
> ├── artifact-converter
> └── subscription-manager
>

 I think it is a subscription-converter, not a subscription-manager. It will
 convert the 4.0.0 subscriptions to 4.1.0. So shall we call it
 subscription-converter?

>>>
>>> AFAIK it does not convert subscriptions; it just exports them.
>>>
>>> Thanks
>>>

> @Akila: We might need to rename the existing subscription management
> tool.
>
> Thanks
>
> On Tue, Dec 1, 2015 at 3:37 PM, Malmee Weerasinghe 
> wrote:
>
>> Hi Akila,
>> We need to locate the Artifact Converter Tool which converts PPaaS
>> 4.0.0 artifacts to PPaaS 4.1.0, in Product-Private-PaaS Repo.
>>
>> As Artifact Converter Tool and paas-migration/4.0.0 tool have quite
>> similar functionality, can we create a new folder in 'tools' and move
>> paas-migration/4.0.0 tool to it and locate together with Artifact
>> Converter Tool. Do you have any suggestions?
>>
>> Thank you
>> --
>> Malmee Weerasinghe
>> WSO2 Intern
>> mobile : (+94)* 71 7601905* |   email :   
>> mal...@wso2.com
>>
>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.gunaratne.org
> Lean . Enterprise . Middleware
>
>
>
>


 --

 Gayan Gunarathne
 Technical Lead, WSO2 Inc. (http://wso2.com)
 Committer & PMC Member, Apache Stratos
 email : gay...@wso2.com | mobile : +94 775030545



>>>
>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Senior Technical Lead
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: http://imesh.gunaratne.org
>>> Lean . Enterprise . Middleware
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Akila Ravihansa Perera
>> WSO2 Inc.;  http://wso2.com/
>>
>> Blog: http://ravihansa3000.blogspot.com
>>
>
>
>
> --
>
> Gayan Gunarathne
> Technical Lead, WSO2 Inc. (http://wso2.com)
> Committer & PMC Member, Apache Stratos
> email : gay...@wso2.com | mobile : +94 775030545
>
>
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.gunaratne.org
Lean . Enterprise . Middleware


Re: [Dev] [EMM] Doubt on Server Url to be taken from carbon.xml HostName value or some other configuration.

2015-12-01 Thread Dulitha Wijewantha
On Thu, Nov 26, 2015 at 8:33 AM, Afkham Azeez  wrote:

> I think your requirement is to send a URL to the client in an email.
> The best option is to define the entire URL as some config element and use
> that without complicating stuff so much.
>

+1 for this.


>
>

> On Thu, Nov 26, 2015 at 6:53 PM, Geeth Munasinghe  wrote:
>
>>
>>
>> On Thu, Nov 26, 2015 at 11:56 AM, Sameera Jayasoma 
>> wrote:
>>
>>> At the moment carbon.xml contains the proxy host and proxy context path of
>>> the worker cluster, but the proxy port of the worker cluster is missing.
>>> Therefore we need to add it to carbon.xml.
>>>
>>> The suggestion is to put the following properties under the "Ports" element.
>>>
>>> 80
>>> 443
>>>
>>> WDYT?
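Rendered as markup (the archive stripped the tags; these element names are an assumption about the proposal, not confirmed carbon.xml syntax), the suggested addition would look roughly like:

```xml
<!-- Sketch only: proxy ports of the worker cluster; element names assumed -->
<Ports>
    <ProxyPort>80</ProxyPort>
    <ProxySSLPort>443</ProxySSLPort>
</Ports>
```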
>>>
>>
>> +1
>>
>> If both the worker and manager nodes are exposed globally, we are able to
>> get the host name from carbon.xml and the proxy port from catalina-server.xml.
>>
>> But there is a deployment scenario where the proxy port cannot be taken from
>> catalina-server.xml.
>>
>> Our use case is that the EMM administrator adds users and sends emails with
>> instructions to enroll the mobile device. We use the manager node to add the
>> user and send the email, but devices will be enrolled to the worker node.
>> So the email sent by the manager node contains the URL of the worker nodes;
>> that is, it has the proxy hostname and the proxy port of the worker. In a
>> setup where the manager node is not exposed to the outside world and only
>> worker nodes are exposed globally through the LB, the proxy port is not
>> configured on the manager node. The manager node can be accessed only from
>> the internal network, which is a valid use case for many companies where
>> security is a concern. In this case we are not able to get the proxy port of
>> the worker nodes from the manager node.
>>
>> I think above parameters would fix our problem. I have created a jira [1]
>> for this.
>>
>> [1] https://wso2.org/jira/browse/CARBON-15659
>>
>> Thanks
>> Geeth
>>
>>
>>> Thanks,
>>> Sameera.
>>>
>>> On Tue, Nov 24, 2015 at 10:34 AM, Sameera Jayasoma 
>>> wrote:
>>>
 +1. We should use carbon.xml at all costs; otherwise we are adding
 unnecessary overhead in configuring the products. You can see how we
 generate other URLs. We do have a few util methods. Please reuse the util
 methods.

 When you calculate the URL, you need to consider the following parameters:

 hostname
 proxy port or port
 proxy path etc
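A minimal sketch of assembling the external URL from those three parameters (the class and method names here are hypothetical, not the actual Carbon util methods):

```java
// Hypothetical sketch: build the externally visible URL from hostname,
// proxy port and proxy context path, omitting scheme-default ports.
public class ServerUrlBuilder {

    static String buildUrl(String scheme, String hostName,
                           int proxyPort, String proxyContextPath) {
        StringBuilder url = new StringBuilder(scheme).append("://").append(hostName);
        // Omit the port when it is the scheme default (80 for http, 443 for https).
        boolean defaultPort = ("http".equals(scheme) && proxyPort == 80)
                || ("https".equals(scheme) && proxyPort == 443);
        if (!defaultPort) {
            url.append(':').append(proxyPort);
        }
        if (proxyContextPath != null && !proxyContextPath.isEmpty()) {
            if (!proxyContextPath.startsWith("/")) {
                url.append('/');
            }
            url.append(proxyContextPath);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // LB on the default HTTPS port: no explicit port in the URL.
        System.out.println(buildUrl("https", "emm.example.com", 443, "/emm"));
        // Non-default proxy port: the port must appear in the URL.
        System.out.println(buildUrl("https", "emm.example.com", 8243, "/emm"));
    }
}
```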

 Thanks,
 Sameera.

 On Tue, Nov 24, 2015 at 8:17 AM, Selvaratnam Uthaiyashankar <
 shan...@wso2.com> wrote:

> I agree with Chamara. We have a way to configure public hostname
> (HostName, MgtHostName in carbon.xml) and port (proxy port in
> tomcat/catalina-server.xml). This is what is used in generating service
> endpoints, WSDL URLs etc. when a server is fronted with an LB. I don't see
> any necessity for EMM to have a new configuration.
>
> On Tue, Nov 24, 2015 at 12:41 AM, Geeth Munasinghe 
> wrote:
>
>>
>>
>> On Tue, Nov 24, 2015 at 12:12 AM, Chamara Ariyarathne <
>> chama...@wso2.com> wrote:
>>
>>> Hi Milan. Thanks for the information. We will try this tomorrow. But
>>> our purpose is to replace this whole url with a configured host name.
>>>
>>> However Geeth, I think the EMM team having to introduce a new config
>>> to put the globally exposed server URL deviates from the purpose of
>>> having the HostName and MgtHostname properties in carbon.xml.
>>>
>>
>> Chamara,
>> I think I disagree with you on that point. I don't think the carbon hostname
>> or mgt host name can be used for globally exposing the server URL.
>> AFAIK there is no place to put the port number in carbon.xml, and there is no
>> point in having just a host name without the port number. The carbon.xml
>> host name will be the server IP address or the host name of the server on
>> which the product is running, as clearly mentioned in the document [1].
>>
>> As another reference, AFAIK in ESB we use WSDLPrefix [2] in order to
>> change the address endpoint of generated WSDLs to the LB's address when
>> the ESB is fronted by an LB.
>>
>> So I think introducing a new config to put the LB host name and port
>> is valid.
>>
>> [1] https://docs.wso2.com/display/Carbon440/Configuring+carbon.xml
>> [2]
>> https://docs.wso2.com/display/ESB490/Setting+Up+Host+Names+and+Ports
>>
>> Thanks
>> Geeth
>>
>>>
>>> On Mon, Nov 23, 2015 at 9:58 PM, Milan Perera 
>>> wrote:
>>>
 Hi
 ​Chamara​
 ,

 Today we found out that even when the Host Names are configured in
> the carbon.xml to be the server's identified domain name, the QR code which is
> generated during device registration, 
> generated while device registration, 

Re: [Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Imesh Gunaratne
Shall we have something like below:

└── tools
└── migration
├── artifact-converter
└── subscription-manager

@Akila: We might need to rename the existing subscription management tool.

Thanks

On Tue, Dec 1, 2015 at 3:37 PM, Malmee Weerasinghe  wrote:

> Hi Akila,
> We need to decide where to place the Artifact Converter Tool, which converts
> PPaaS 4.0.0 artifacts to PPaaS 4.1.0, in the Product-Private-PaaS repo.
>
> As the Artifact Converter Tool and the paas-migration/4.0.0 tool have quite
> similar functionality, can we create a new folder under 'tools', move the
> paas-migration/4.0.0 tool into it, and place it alongside the Artifact
> Converter Tool? Do you have any suggestions?
>
> Thank you
> --
> Malmee Weerasinghe
> WSO2 Intern
> mobile : (+94)* 71 7601905* |   email :   
> mal...@wso2.com
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.gunaratne.org
Lean . Enterprise . Middleware


[Dev] [EMM] Exception when installing ios p2-repository in EMM

2015-12-01 Thread Sashika Wijesinghe
Hi All,

I want to configure iOS for MDM. I followed the steps below to configure iOS.

   - Configured general server configurations as mentioned in doc [1
   
   ]
   - Started the EMM server and added the ios-agent.ipa file to
   
'/repository/deployment/server/jaggeryapps/mdm/units/asset-download-agent-ios/public/asset'
   path
   - Installed the p2 repository as mentioned in doc [2
   ]

[1] https://docs.wso2.com/display/EMM200/General+iOS+Server+Configurations
[2] https://docs.wso2.com/display/EMM200/Installing+the+P2+Repository

The exception below was observed in the terminal after installing the p2
repository. May I know whether I missed any mandatory configurations?

log4j:WARN No appenders could be found for logger
(org.apache.cxf.common.logging.LogUtils).
log4j:WARN Please initialize the log4j system properly.
[2015-12-01 18:10:13,701] ERROR
{org.apache.catalina.core.ApplicationContext} -  StandardWrapper.Throwable
org.springframework.beans.factory.BeanCreationException: Error creating
bean with name 'enrollmentService': Cannot resolve reference to bean
'enrollmentServiceBean' while setting bean property 'serviceBeans' with key
[0]; nested exception is
org.springframework.beans.factory.BeanCreationException: Error creating
bean with name 'enrollmentServiceBean' defined in URL
[jndi:/localhost/ios-enrollment/WEB-INF/cxf-servlet.xml]: Instantiation of
bean failed; nested exception is java.lang.NoClassDefFoundError:
org/wso2/carbon/device/mgt/ios/core/exception/IOSEnrollmentException
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveManagedList(BeanDefinitionValueResolver.java:353)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:153)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1327)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1085)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:516)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:455)
at
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
at
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192)
at
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
at
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
at
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
at
org.apache.cxf.transport.servlet.CXFServlet.createSpringContext(CXFServlet.java:151)
at org.apache.cxf.transport.servlet.CXFServlet.loadBus(CXFServlet.java:74)
at
org.apache.cxf.transport.servlet.CXFNonSpringServlet.init(CXFNonSpringServlet.java:76)
at
org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1284)
at
org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1197)
at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1087)
at
org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5262)
at
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5550)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
at
org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:290)
at
org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:198)
at
org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWebappDeployment(TomcatGenericWebappsDeployer.java:258)
at
org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWarWebappDeployment(TomcatGenericWebappsDeployer.java:208)
at

Re: [Dev] Cleaning up ML REST API

2015-12-01 Thread Thamali Wijewardhana
Link to the Swagger definition:

https://docs.google.com/a/wso2.com/document/d/1KYmXyuEFJMhFMy6p-SjRztVrgW8mr83LZR2U4BK4d-M/edit?usp=sharing


On Tue, Dec 1, 2015 at 4:32 PM, Thamali Wijewardhana 
wrote:

>
>
> On Thu, Nov 26, 2015 at 7:40 PM, Frank Leymann  wrote:
>
>> Dear all, sorry for the delay  :-(
>>
>> What about one of the following time slots:
>>
>> Tuesday, Dec 1, 4pm Colombo Time
>> Wednesday, Dec 2, 4pm Colombo Time
>> Friday, Dec 4, 4pm Colombo Time
>>
>> I will be available later than 4pm but this won't be convenient for you
>> in Colombo.
>>
>>
>>
>> Best regards,
>> Frank
>>
>> 2015-11-25 8:43 GMT+01:00 Nirmal Fernando :
>>
>>> Hi Frank,
>>>
>>> Could you please let us know few time slots?
>>>
>>> On Mon, Nov 23, 2015 at 9:29 AM, Nirmal Fernando 
>>> wrote:
>>>
 Absolutely. We'll wait till Frank confirms a time. Thanks.

 On Sun, Nov 22, 2015 at 10:18 PM, Sanjeewa Malalgoda  wrote:

> Hi Nirmal,
> please invite the APIM REST API team as well. We would like to join this
> discussion.
>
> Thanks,
> Sanjeewa.
>
> sent from my phone
> On Nov 22, 2015 7:00 PM, "Nirmal Fernando"  wrote:
>
>> Thanks Frank for the response. +1 for having a call. Could you please
>> propose few time slots?
>>
>> On Sun, Nov 22, 2015 at 6:55 PM, Frank Leymann 
>> wrote:
>>
>>> Dear Thamali,
>>>
>>> we (APIM, ES Publisher, ... teams) developed some guidelines for
>>> making all of our APIs more consistent. For example, versioning (major,
>>> minor, patch) as part of the URL context, etc. Also, you are not using PUT
>>> but always POST - this has implications that many REST practitioners take
>>> seriously. Similarly, the proper use of HTTP headers is a REST concern: it
>>> reduces the amount of data transferred, avoids potential concurrency
>>> problems, etc.
>>>
>>> Should we have a call to discuss the API and check where we can help?
>>>
>>>
>>>
>>> Best regards,
>>> Frank
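The two guidelines quoted above (a major version in the URL context, and PUT rather than POST for updates) can be sketched as follows. This is an illustrative plain-Python sketch with hypothetical names, not the actual ML API:

```python
# Illustrative sketch of two REST guidelines discussed above:
# (1) the API version (major.minor.patch, usually truncated to major) in the
#     URL context, and (2) PUT for idempotent replace at a client-known URI
#     vs POST for server-assigned URIs. Hypothetical names; not the WSO2 ML API.

def versioned_context(base, major, minor, patch):
    """Build a context path like /api/ml/v1 (minor/patch typically omitted)."""
    return f"{base}/v{major}"

class ResourceStore:
    def __init__(self):
        self._items = {}
        self._next_id = 0

    def post(self, item):
        """POST: the server assigns the URI; repeating the call creates new resources."""
        self._next_id += 1
        uri = f"/models/{self._next_id}"
        self._items[uri] = item
        return uri

    def put(self, uri, item):
        """PUT: the client supplies the URI; repeating the call leaves one resource."""
        self._items[uri] = item
        return uri

store = ResourceStore()
store.post({"name": "m"})
store.post({"name": "m"})              # two POSTs -> two resources
store.put("/models/a", {"name": "m"})
store.put("/models/a", {"name": "m"})  # two PUTs -> still one resource
```

The idempotency difference is exactly why repeated retries against PUT are safe while retries against POST can duplicate resources.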
>>>
>>> 2015-11-18 12:10 GMT+01:00 Nirmal Fernando :
>>>
 Thanks Thamali! Please try to generate the Swagger definition for
 ML API as the next step.

 On Wed, Nov 18, 2015 at 12:21 PM, Thamali Wijewardhana <
 tham...@wso2.com> wrote:

> REST API standards define the way to produce a RESTful API. For an
> API to become a RESTful API, it should conform to those
> standards. This document includes a set of improvements to make the
> WSO2 API a RESTful API.
>
>
>
> https://docs.google.com/spreadsheets/d/1HYiS-TpqYaZTtBLLSIeYZ_nvZkbt7zAFwetnHLe4vg8/edit#gid=0
>
>
>


 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/



>>>
>>
>>
>> --
>>
>> Thanks & regards,
>> Nirmal
>>
>> Team Lead - WSO2 Machine Learner
>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>> Mobile: +94715779733
>> Blog: http://nirmalfdo.blogspot.com/
>>
>>
>>


 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/



>>>
>>>
>>> --
>>>
>>> Thanks & regards,
>>> Nirmal
>>>
>>> Team Lead - WSO2 Machine Learner
>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>> Mobile: +94715779733
>>> Blog: http://nirmalfdo.blogspot.com/
>>>
>>>
>>>
>>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Malith Dhanushka
Hi Folks,

Currently, indexing arbitrary fields is achieved by dynamically
updating the analytics table schema through the analytics REST API. This is
not a clean solution when the schema changes frequently. A better solution
would be a flag in the data bridge event sink configuration to
enable/disable indexing for all arbitrary fields.

WDUT?

Thanks,
Malith
-- 
Malith Dhanushka
Senior Software Engineer - Data Technologies
WSO2, Inc. : wso2.com
Mobile : +94 716 506 693
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Anjana Fernando
Hi Malith,

No, it cannot be done like that. The way indexing works is that it
looks up the schema for a table and does the indexing according to
that, so the table schema must be set beforehand. It is not something
that can be set dynamically when arbitrary fields are sent to the receiver,
and the receiver cannot load the current schema and reset it for every event.
We could cache that information and do some operations on it, but that
gets complicated. So the idea is that it is the client's responsibility to
set the target table's schema properly beforehand, which may or may not
include arbitrary fields, and then send the data.
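The schema-driven behaviour described above can be sketched roughly as follows. This is a plain-Python illustration with hypothetical names, not the DAS API:

```python
# Rough illustration of schema-driven indexing as described above: the indexer
# consults a pre-set table schema and indexes only the fields the schema marks
# as indexed. Hypothetical names; not the actual DAS implementation.

def set_schema(schemas, table, columns):
    """Client responsibility: set the table schema (field -> {"indexed": bool}) beforehand."""
    schemas[table] = columns

def index_fields(schemas, table, event):
    """Return only the event fields the schema knows about and marks as indexed;
    arbitrary fields missing from the schema are silently skipped."""
    schema = schemas.get(table, {})
    return {k: v for k, v in event.items()
            if schema.get(k, {}).get("indexed", False)}

schemas = {}
set_schema(schemas, "LOG_EVENTS", {
    "timestamp": {"indexed": True},
    "level":     {"indexed": True},
    "payload":   {"indexed": False},
})
event = {"timestamp": 1448982780, "level": "ERROR",
         "payload": "...", "custom_tag": "new-source"}  # custom_tag not in schema
indexed = index_fields(schemas, "LOG_EVENTS", event)    # custom_tag is dropped
```

Under this model, an arbitrary field sent before the schema is updated is simply not indexed, which is why the schema must be set up front.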

Also, if this requirement is for the log analytics solution work, as we've
discussed before, there should be a whole new remote API for that, and that
API can do these operations inside the server, using the OSGi services, and
not the original DAS REST API. So those operations will happen
automatically while keeping the remote log related API clean.

Cheers,
Anjana.

On Tue, Dec 1, 2015 at 5:13 PM, Malith Dhanushka  wrote:

> Hi Folks,
>
> Currently indexing arbitrary fields is being achieved by dynamically
> updating analytics table schema through analytics REST API. This is not an
> accurate solution for a frequently updating schema. So the ideal solution
> would be to have a flag in data bridge event sink configuration to
> enable/disable indexing for all arbitrary fields.
>
> WDUT?
>
> Thanks,
> Malith
> --
> Malith Dhanushka
> Senior Software Engineer - Data Technologies
> WSO2, Inc. : wso2.com
> Mobile : +94 716 506 693
>



-- 
*Anjana Fernando*
Senior Technical Lead
WSO2 Inc. | http://wso2.com
lean . enterprise . middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Indexing arbitrary fields

2015-12-01 Thread Malith Dhanushka
Hi Anjana,

Yes, the requirement is for the internal log-related REST API, which is being
written using OSGi services. From the perspective of log analysis data, we
have one master table to persist all the log events from different log
sources. Log data comes in to the log REST API as arbitrary fields, and
different log sources have different sets of arbitrary fields, which forces
the log REST API to change the schema of the master table every time it
receives log events from a new or updated log source. That is what I meant
by inaccurate; it could be solved much more cleanly by having a flag to
index or not index arbitrary fields for a particular stream.
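The schema churn described above, where each new log source forces a master-table schema update, can be illustrated like this (a hedged sketch with hypothetical names):

```python
# Illustration of the schema churn described above: each log source carries a
# different set of arbitrary fields, so the master table schema must be widened
# every time a new/updated source appears. Hypothetical names, not DAS code.

def merge_schema(master_schema, event_fields, index_all):
    """Widen the master schema with any unseen fields; the proposed per-stream
    flag (index_all) would decide whether new arbitrary fields get indexed."""
    updated = False
    for field in event_fields:
        if field not in master_schema:
            master_schema[field] = {"indexed": index_all}
            updated = True
    return updated  # True means a schema-update call would be needed

master = {"timestamp": {"indexed": True}}
apache_event = {"timestamp": 1, "client_ip": "10.0.0.1"}
syslog_event = {"timestamp": 2, "facility": "auth"}

merge_schema(master, apache_event, index_all=True)            # new source -> update
merge_schema(master, syslog_event, index_all=True)            # another source -> update
changed = merge_schema(master, apache_event, index_all=True)  # no new fields
```

With a per-stream flag, the `index_all` decision would be made once in the event sink configuration instead of on every schema-widening call.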

Thanks,
Malith

On Tue, Dec 1, 2015 at 6:06 PM, Anjana Fernando  wrote:

> Hi Malith,
>
> No, it cannot be done like that. How the indexing and all happens is, it
> looks up the table schema for a table and do the indexing according to
> that. So the table schema must be set before hand. It is not a dynamic
> thing that can be set, when arbitrary fields are sent to the receiver, and
> it cannot always load the current schema and set it always for each event,
> even though we can cache that information and do some operations, but that
> gets complicated. So the idea is, it is the responsibility of the client to
> set the target table's schema properly before hand, which may or may not
> include arbitrary fields, and then send the data.
>
> Also, if this requirement is for the log analytics solution work, as we've
> discussed before, there should be a whole new remote API for that, and that
> API can do these operations inside the server, using the OSGi services, and
> not the original DAS REST API. So those operations will happen
> automatically while keeping the remote log related API clean.
>
> Cheers,
> Anjana.
>
> On Tue, Dec 1, 2015 at 5:13 PM, Malith Dhanushka  wrote:
>
>> Hi Folks,
>>
>> Currently indexing arbitrary fields is being achieved by dynamically
>> updating analytics table schema through analytics REST API. This is not an
>> accurate solution for a frequently updating schema. So the ideal solution
>> would be to have a flag in data bridge event sink configuration to
>> enable/disable indexing for all arbitrary fields.
>>
>> WDUT?
>>
>> Thanks,
>> Malith
>> --
>> Malith Dhanushka
>> Senior Software Engineer - Data Technologies
>> WSO2, Inc. : wso2.com
>> Mobile : +94 716 506 693
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 
Malith Dhanushka
Senior Software Engineer - Data Technologies
WSO2, Inc. : wso2.com
Mobile : +94 716 506 693
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Akila Ravihansa Perera
Hi,

+1 for the proposed folder structure.

@Gayan: Currently that tool only exports existing cartridge subscriptions
and domain mappings. It doesn't do any conversion or migration (although it
is called migration-tool), which is why it should be renamed to
subscription-manager. I actually prefer the name "subscription-exporter".
We had to create an external tool since the regular API didn't have any
methods to expose that information. We should add those methods to the
regular API in the new PPaaS release.

@Malmee: I've restructured the folder structure in [1]. You can create a
new folder named "artifact-converter" at tools/migration/ to host your
tool. Please send a PR with those changes.

On a side note; does your tool support converting cartridge subscription
artifacts to application signups and domain mapping subscriptions?

[1] https://github.com/wso2/product-private-paas/tree/master/tools/migration

Thanks.
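The restructuring described above — creating an "artifact-converter" folder under tools/migration/ next to the existing migration tool — could be scripted roughly as follows. The paths are assumptions based on this thread (run from the directory containing the checkout); in a real repo you would use `git mv` so history is preserved:

```python
# Illustrative sketch of the proposed repo layout: create
# tools/migration/artifact-converter next to the existing migration tool.
# Paths are assumptions taken from this thread, not verified against the repo;
# prefer `git mv` in a real checkout so file history is preserved.
import os
import shutil

repo_root = "product-private-paas"  # assumed checkout location
migration_dir = os.path.join(repo_root, "tools", "migration")

# New home for the Artifact Converter Tool.
converter_dir = os.path.join(migration_dir, "artifact-converter")
os.makedirs(converter_dir, exist_ok=True)

# Move the existing migration tool under tools/migration/ if it is still
# at its old location (hypothetical old path).
old_tool = os.path.join(repo_root, "tools", "paas-migration")
if os.path.isdir(old_tool):
    shutil.move(old_tool, migration_dir)
```

After this, the Artifact Converter Tool's sources would be committed into `converter_dir` and sent as a PR, as requested above.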

On Tue, Dec 1, 2015 at 6:23 PM, Imesh Gunaratne  wrote:

>
>
> On Tue, Dec 1, 2015 at 5:23 PM, Gayan Gunarathne  wrote:
>
>>
>> On Tue, Dec 1, 2015 at 4:46 PM, Imesh Gunaratne  wrote:
>>
>>> Shall we have something like below:
>>>
>>> └── tools
>>> └── migration
>>> ├── artifact-converter
>>> └── subscription-manager
>>>
>>
>> I think it is a subscription-converter, not a subscription-manager. It will
>> convert the 4.0.0 subscriptions to 4.1.0. So shall we call it
>> subscription-converter?
>>
>
> AFAIK it does not convert subscriptions; it just exports them.
>
> Thanks
>
>>
>>> @Akila: We might need to rename the existing subscription management
>>> tool.
>>>
>>> Thanks
>>>
>>> On Tue, Dec 1, 2015 at 3:37 PM, Malmee Weerasinghe 
>>> wrote:
>>>
 Hi Akila,
 We need to locate the Artifact Converter Tool which converts PPaaS
 4.0.0 artifacts to PPaaS 4.1.0, in Product-Private-PaaS Repo.

 As Artifact Converter Tool and paas-migration/4.0.0 tool have quite
 similar functionality, can we create a new folder in 'tools' and move
 paas-migration/4.0.0 tool to it and locate together with Artifact
 Converter Tool. Do you have any suggestions?

 Thank you
 --
 Malmee Weerasinghe
 WSO2 Intern
 mobile : (+94) 71 7601905 | email : mal...@wso2.com

>>>
>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Senior Technical Lead
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: http://imesh.gunaratne.org
>>> Lean . Enterprise . Middleware
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>>
>> Gayan Gunarathne
>> Technical Lead, WSO2 Inc. (http://wso2.com)
>> Committer & PMC Member, Apache Stratos
>> email : gay...@wso2.com | mobile : +94 775030545
>>
>>
>>
>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.gunaratne.org
> Lean . Enterprise . Middleware
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Akila Ravihansa Perera
WSO2 Inc.;  http://wso2.com/

Blog: http://ravihansa3000.blogspot.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Gayan Gunarathne
On Tue, Dec 1, 2015 at 4:46 PM, Imesh Gunaratne  wrote:

> Shall we have something like below:
>
> └── tools
> └── migration
> ├── artifact-converter
> └── subscription-manager
>

I think it is a subscription-converter, not a subscription-manager. It will
convert the 4.0.0 subscriptions to 4.1.0. So shall we call it
subscription-converter?

>
> @Akila: We might need to rename the existing subscription management tool.
>
> Thanks
>
> On Tue, Dec 1, 2015 at 3:37 PM, Malmee Weerasinghe 
> wrote:
>
>> Hi Akila,
>> We need to locate the Artifact Converter Tool which converts PPaaS 4.0.0
>> artifacts to PPaaS 4.1.0, in Product-Private-PaaS Repo.
>>
>> As Artifact Converter Tool and paas-migration/4.0.0 tool have quite
>> similar functionality, can we create a new folder in 'tools' and move
>> paas-migration/4.0.0 tool to it and locate together with Artifact
>> Converter Tool. Do you have any suggestions?
>>
>> Thank you
>> --
>> Malmee Weerasinghe
>> WSO2 Intern
>> mobile : (+94) 71 7601905 | email : mal...@wso2.com
>>
>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.gunaratne.org
> Lean . Enterprise . Middleware
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 

Gayan Gunarathne
Technical Lead, WSO2 Inc. (http://wso2.com)
Committer & PMC Member, Apache Stratos
email : gay...@wso2.com | mobile : +94 775030545
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Imesh Gunaratne
On Tue, Dec 1, 2015 at 5:23 PM, Gayan Gunarathne  wrote:

>
> On Tue, Dec 1, 2015 at 4:46 PM, Imesh Gunaratne  wrote:
>
>> Shall we have something like below:
>>
>> └── tools
>> └── migration
>> ├── artifact-converter
>> └── subscription-manager
>>
>
> I think it is a subscription-converter, not a subscription-manager. It will
> convert the 4.0.0 subscriptions to 4.1.0. So shall we call it
> subscription-converter?
>

AFAIK it does not convert subscriptions; it just exports them.

Thanks

>
>> @Akila: We might need to rename the existing subscription management tool.
>>
>> Thanks
>>
>> On Tue, Dec 1, 2015 at 3:37 PM, Malmee Weerasinghe 
>> wrote:
>>
>>> Hi Akila,
>>> We need to locate the Artifact Converter Tool which converts PPaaS 4.0.0
>>> artifacts to PPaaS 4.1.0, in Product-Private-PaaS Repo.
>>>
>>> As Artifact Converter Tool and paas-migration/4.0.0 tool have quite
>>> similar functionality, can we create a new folder in 'tools' and move
>>> paas-migration/4.0.0 tool to it and locate together with Artifact
>>> Converter Tool. Do you have any suggestions?
>>>
>>> Thank you
>>> --
>>> Malmee Weerasinghe
>>> WSO2 Intern
>>> mobile : (+94) 71 7601905 | email : mal...@wso2.com
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.gunaratne.org
>> Lean . Enterprise . Middleware
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
>
> Gayan Gunarathne
> Technical Lead, WSO2 Inc. (http://wso2.com)
> Committer & PMC Member, Apache Stratos
> email : gay...@wso2.com | mobile : +94 775030545
>
>
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.gunaratne.org
Lean . Enterprise . Middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Locating the Artifact Converter Tool in WSO2 Product-Private-PaaS Repo

2015-12-01 Thread Gayan Gunarathne
On Tue, Dec 1, 2015 at 6:46 PM, Akila Ravihansa Perera 
wrote:

> Hi,
>
> +1 for the proposed folder structure.
>
> @Gayan: Currently that tool only exports existing cartridge subscriptions
> and domain mappings. It doesn't do any conversion or migration (although it
> is called migration-tool);
>

In that case, if we put this under tools/migration, it will be misleading
to the end user.


> which is why it should be renamed to subscription-manager. I actually
> prefer the name "subscription-exporter". We had to create an external tool
> since regular API didn't have any methods to expose those information. We
> should include those API methods to the regular API in the new PPaaS
> release.
>
> @Malmee: I've restructured the folder structure in [1]. You can create a
> new folder named "artifact-converter" at tools/migration/ to host your
> tool. Please send a PR with those changes.
>
> On a side note; does your tool support converting cartridge subscription
> artifacts to application signups and domain mapping subscriptions?
>
> [1]
> https://github.com/wso2/product-private-paas/tree/master/tools/migration
>
> Thanks.
>
> On Tue, Dec 1, 2015 at 6:23 PM, Imesh Gunaratne  wrote:
>
>>
>>
>> On Tue, Dec 1, 2015 at 5:23 PM, Gayan Gunarathne  wrote:
>>
>>>
>>> On Tue, Dec 1, 2015 at 4:46 PM, Imesh Gunaratne  wrote:
>>>
 Shall we have something like below:

 └── tools
 └── migration
 ├── artifact-converter
 └── subscription-manager

>>>
>>> I think it is a subscription-converter, not a subscription-manager. It will
>>> convert the 4.0.0 subscriptions to 4.1.0. So shall we call it
>>> subscription-converter?
>>>
>>
>> AFAIK it does not convert subscriptions; it just exports them.
>>
>> Thanks
>>
>>>
 @Akila: We might need to rename the existing subscription management
 tool.

 Thanks

 On Tue, Dec 1, 2015 at 3:37 PM, Malmee Weerasinghe 
 wrote:

> Hi Akila,
> We need to locate the Artifact Converter Tool which converts PPaaS
> 4.0.0 artifacts to PPaaS 4.1.0, in Product-Private-PaaS Repo.
>
> As Artifact Converter Tool and paas-migration/4.0.0 tool have quite
> similar functionality, can we create a new folder in 'tools' and move
> paas-migration/4.0.0 tool to it and locate together with Artifact
> Converter Tool. Do you have any suggestions?
>
> Thank you
> --
> Malmee Weerasinghe
> WSO2 Intern
> mobile : (+94) 71 7601905 | email : mal...@wso2.com
>



 --
 *Imesh Gunaratne*
 Senior Technical Lead
 WSO2 Inc: http://wso2.com
 T: +94 11 214 5345 M: +94 77 374 2057
 W: http://imesh.gunaratne.org
 Lean . Enterprise . Middleware


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev


>>>
>>>
>>> --
>>>
>>> Gayan Gunarathne
>>> Technical Lead, WSO2 Inc. (http://wso2.com)
>>> Committer & PMC Member, Apache Stratos
>>> email : gay...@wso2.com | mobile : +94 775030545
>>>
>>>
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.gunaratne.org
>> Lean . Enterprise . Middleware
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Akila Ravihansa Perera
> WSO2 Inc.;  http://wso2.com/
>
> Blog: http://ravihansa3000.blogspot.com
>



-- 

Gayan Gunarathne
Technical Lead, WSO2 Inc. (http://wso2.com)
Committer & PMC Member, Apache Stratos
email : gay...@wso2.com | mobile : +94 775030545
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev