Re: [Dev] No available task nodes for resolving a task

2016-11-07 Thread Niranda Perera
Hi all,

I agree with @sinthuja here. We've had the same issue in DAS clustering,
and it is configurable in the task-config.xml.
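
For reference, a minimal sketch of the relevant task-config.xml entry (I'm
assuming the <taskServerCount> parameter here; verify the element name
against the product version in use):

<!-- hypothetical sketch: wait until 2 task nodes join before scheduling -->
<taskServerCount>2</taskServerCount>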

Best

On Mon, Nov 7, 2016 at 10:47 PM, Sinthuja Ragendran <sinth...@wso2.com>
wrote:

> Hi,
>
> Have you configured  parameter in the task-config.xml?
> AFAIR the server will wait for the configured number of nodes to be present
> before trying to schedule the task.
>
> Thanks,
> Sinthuja.
>
> On Mon, Nov 7, 2016 at 4:57 PM, Dilshani Subasinghe <dilsh...@wso2.com>
> wrote:
>
>> Hi Shazni,
>>
>> AFAIK this is an expected behavior. I got this issue in ESB cluster. Hope
>> someone in the team will provide a proper answer for that error.
>>
>> Regards,
>> Dilshani
>>
>> On Mon, Nov 7, 2016 at 4:48 PM, Shazni Nazeer <sha...@wso2.com> wrote:
>>
>>> I get the following exception on the manager node when it is started
>>> before the worker nodes. The exception doesn't occur if the worker
>>> nodes are started first. I have a scheduled task deployed in the setup.
>>>
>>> TID: [-1234] [] [2016-11-03 12:36:36,126] ERROR
>>> {org.wso2.carbon.mediation.ntask.NTaskTaskManager} - Scheduling task
>>> [[NTask::-1234::CalendarCleanupTask]::synapse.simple.quartz] FAILED.
>>> Error: No available task nodes for resolving a task location
>>> {org.wso2.carbon.mediation.ntask.NTaskTaskManager}
>>> org.wso2.carbon.ntask.common.TaskException: No available task nodes for
>>> resolving a task location
>>>
>>> at org.wso2.carbon.ntask.core.impl.clustered.ClusteredTaskManager.getTaskLocation(ClusteredTaskManager.java:232)
>>> at org.wso2.carbon.ntask.core.impl.clustered.ClusteredTaskManager.locateMemberForTask(ClusteredTaskManager.java:209)
>>> at org.wso2.carbon.ntask.core.impl.clustered.ClusteredTaskManager.getMemberIdFromTaskName(ClusteredTaskManager.java:283)
>>> at org.wso2.carbon.ntask.core.impl.clustered.ClusteredTaskManager.scheduleTask(ClusteredTaskManager.java:91)
>>> at org.wso2.carbon.mediation.ntask.NTaskTaskManager.schedule(NTaskTaskManager.java:103)
>>> at org.wso2.carbon.mediation.ntask.NTaskTaskManager.init(NTaskTaskManager.java:352)
>>> at org.wso2.carbon.mediation.ntask.NTaskTaskManager.update(NTaskTaskManager.java:365)
>>> at org.wso2.carbon.mediation.ntask.internal.NtaskService.updateAndCleanupObservers(NtaskService.java:103)
>>> at org.wso2.carbon.mediation.ntask.internal.NtaskService.setConfigurationContextService(NtaskService.java:96)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>> at org.eclipse.equinox.internal.ds.model.ComponentReference.bind(ComponentReference.java:376)
>>> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.bindReference(ServiceComponentProp.java:430)
>>> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.bind(ServiceComponentProp.java:218)
>>> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:343)
>>> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
>>> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
>>> at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
>>> at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
>>> at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
>>>
>>>
>>> According to [1] this is expected. Is there a fix for this to suppress
>>> the exception?
>>>
>>> [1] http://bugsbydilshani.blogspot.co.uk/2016/07/wso2esbcluster-task-scheduling-error.html
>>>
>>> --
>>> Shazni Nazeer
>>> Associate Technical Lead | WSO2
>>>
>>> Mob : +94 37331
>>> LinkedIn : http://lk.linkedin.com/in/shazninazeer
>>> Blog : http://shazninazeer.blogspot.com
>>>
>>> <http://wso2.com/signature>
>>>
>>
>>
>>
>> --
>> Best Regards,
>>
>> Dilshani Subasinghe

[Dev] [IS] Public cert download link not seen in the custom tenant keystore

2016-10-17 Thread Niranda Perera
Hi all,

I have replaced my tenant keystore with another keystore, following this
blog from Asela [1].

One thing I noticed while doing this is that the Public cert download link
does not appear for the subsequently added keystores. Please refer to the
attached screenshots.

before.jpg
<https://drive.google.com/a/wso2.com/file/d/0B1GsnfycTl32V2Qtd3VUbS1xZW8/view?usp=drive_web>
after.jpg
<https://drive.google.com/a/wso2.com/file/d/0B1GsnfycTl32WnhBZXhmODJoODA/view?usp=drive_web>

Is this the expected behavior, or is it a problem in creating the keystore?

Best 

[1] http://xacmlinfo.org/2014/11/05/how-to-changing-the-primary-keystore-of-a-tenant-in-carbon-products/

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Get the private and public keys of a user

2016-10-06 Thread Niranda Perera
Hi,

I am trying to create a SAML response manually. This is a special type of
SAML response named SAML NameIdResponse and I am trying to set a signature
in it.

I am trying to create a signature element, as mentioned here [1].

For me to do this, I need to access the private and public keys
programmatically.

Could you please point me to a place where I could extract this
information?
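
For context, a minimal sketch of what I'm after, assuming the Carbon
KeyStoreManager API (org.wso2.carbon.core.util.KeyStoreManager); the
method names below should be verified against the Carbon version in use:

import java.security.KeyPair;
import java.security.PrivateKey;
import java.security.PublicKey;
import org.wso2.carbon.core.util.KeyStoreManager;

// Hypothetical sketch: load the tenant's default key pair.
// -1234 is the Carbon super-tenant id; use the relevant tenant id here.
private KeyPair loadDefaultKeyPair() throws Exception {
    KeyStoreManager mgr = KeyStoreManager.getInstance(-1234);
    PrivateKey privateKey = mgr.getDefaultPrivateKey();
    PublicKey publicKey = mgr.getDefaultPrimaryCertificate().getPublicKey();
    return new KeyPair(publicKey, privateKey);
}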

Best

[1]
http://www.programcreek.com/java-api-examples/index.php?source_dir=saml-generator-master/src/main/java/com/rackspace/saml/SamlAssertionProducer.java

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Analytics] NoClassDefFoundError:com.codahale.metrics.json.MetricsModule when installing latest analytics features in IOT server

2016-10-05 Thread Niranda Perera
Hi Ruwan,

Did we try upgrading the Jackson version in Spark? I'm hoping that there
are no API changes in Jackson 2.8.3.

We have done a similar exercise for Guava and the Hadoop client.
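
For illustration, the kind of Import-Package version-range change being
discussed would look roughly like this in the metrics-json bundle manifest
(the exact package list in that bundle may differ):

Import-Package: com.fasterxml.jackson.core;version="[2.4.0,2.5.0)",
 com.fasterxml.jackson.databind;version="[2.4.0,2.5.0)"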

Best

On Wed, Oct 5, 2016 at 12:44 AM, Ruwan Yatawara <ruw...@wso2.com> wrote:

> Hi Niranda,
>
> Are you referring to the Spark core? If so, it is bound to the
> json4s-jackson bundle.
>
> If we are changing the Jackson version of metrics-json, we will have to
> make an orbit out of it. The way I see it, metrics-json must have
> included said version range in an attempt to make the bundle
> future-proof. (The latest released version of jackson-core is 2.8.3 [1].)
>
> Given that we have to push out a release in a week's time, changing the
> Jackson version of Spark is not a feasible option.
>
> Therefore, I am +1 for changing the Jackson version range of
> metrics-json to [2.4.0,2.5.0).
>
> [1] - https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core
>
> Thanks and Regards,
>
> Ruwan Yatawara
>
> Associate Technical Lead,
> WSO2 Inc.
>
> email : ruw...@wso2.com
> mobile : +94 77 9110413
> blog : http://ruwansrants.blogspot.com/
>   https://500px.com/ruwan_ace
> www: :http://wso2.com
>
>
> On Tue, Oct 4, 2016 at 7:39 PM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Maninda,
>>
>> What are the 2 Jackson versions here?
>>
>> Best
>>
>> On Tue, Oct 4, 2016 at 8:17 AM, Maninda Edirisooriya <mani...@wso2.com>
>> wrote:
>>
>>> + SameeraJ
>>>
>>> As we have found so far, the issue is due to two versions of the
>>> Jackson bundles existing in the IoT server pack. This was not the case
>>> in DAS, because IoT has APIM dependencies which bring the newer version
>>> of Jackson into the environment. As Spark uses the older version of
>>> Jackson and Metrics uses the newer version, importing the Metrics
>>> bundle into the Spark bundle fails at the OSGi level, because the
>>> export packages in Metrics use some Jackson packages.
>>>
>>> This has several potential solutions but with inherent issues.
>>>
>>> 1. Release a new version of the Metrics bundle with the same older
>>> Jackson dependency. - Releasing with an older version of a dependency
>>> may be unsuitable in the long term. And if, in future, APIM features
>>> start to import the Metrics bundle, the issue will start to happen
>>> again on that import.
>>>
>>> 2. Release a new version of Spark that works with the newer Jackson
>>> bundles. - As the Spark bundle only functions correctly with Jackson
>>> 2.4.4 (the older version) and does not work properly with later
>>> versions of Jackson, we will not be able to easily release a new Spark
>>> version without fixing that issue.
>>>
>>> 3. Remove the DAS components from the IoT server and package them as a
>>> separate IoT Analytics server. - Some customers may want to run DAS
>>> inside IoT, and removing the DAS components from the IoT server will
>>> affect the user experience for a WSO2 product evaluator who wants to
>>> run everything in a single server.
>>>
>>> Please help to find the best approach.
>>>
>>> Thanks.
>>>
>>>
>>> *Maninda Edirisooriya*
>>> Senior Software Engineer
>>>
>>> *WSO2, Inc.*lean.enterprise.middleware.
>>>
>>> *Blog* : http://maninda.blogspot.com/
>>> *E-mail* : mani...@wso2.com
>>> *Skype* : @manindae
>>> *Twitter* : @maninda
>>>
>>> On Tue, Oct 4, 2016 at 5:06 PM, Ruwan Yatawara <ruw...@wso2.com> wrote:
>>>
>>>> Hi Niranda,
>>>>
>>>> Yes, this bundle is active. We found this Jackson related problem upon
>>>> further debugging.
>>>>
>>>> Thanks and Regards,
>>>>
>>>> Ruwan Yatawara
>>>>
>>>> Associate Technical Lead,
>>>> WSO2 Inc.
>>>>
>>>> email : ruw...@wso2.com
>>>> mobile : +94 77 9110413
>>>> blog : http://ruwansrants.blogspot.com/
>>>>   https://500px.com/ruwan_ace
>>>> www: :http://wso2.com
>>>>
>>>>
>>>> On Tue, Oct 4, 2016 at 4:49 PM, Niranda Perera <nira...@wso2.com>
>>>> wrote:
>>>>
>>>>> + RuwanY
>>>>>
>>>>> @Waruna, can you check if the com.codahale.metrics.json bundle is
>>>>> active or not from the OSGI console?
>>>>>
>>>>> Best
>>>>>
>>>>> On Tue, Oct 4, 2016 at 4:25 AM, Waruna Jayaw

Re: [Dev] [Analytics] NoClassDefFoundError:com.codahale.metrics.json.MetricsModule when installing latest analytics features in IOT server

2016-10-04 Thread Niranda Perera
Hi Maninda,

What are the 2 Jackson versions here?

Best

On Tue, Oct 4, 2016 at 8:17 AM, Maninda Edirisooriya <mani...@wso2.com>
wrote:

> + SameeraJ
>
> As we have found so far, the issue is due to two versions of the Jackson
> bundles existing in the IoT server pack. This was not the case in DAS,
> because IoT has APIM dependencies which bring the newer version of
> Jackson into the environment. As Spark uses the older version of Jackson
> and Metrics uses the newer version, importing the Metrics bundle into
> the Spark bundle fails at the OSGi level, because the export packages in
> Metrics use some Jackson packages.
>
> This has several potential solutions but with inherent issues.
>
> 1. Release a new version of the Metrics bundle with the same older
> Jackson dependency. - Releasing with an older version of a dependency
> may be unsuitable in the long term. And if, in future, APIM features
> start to import the Metrics bundle, the issue will start to happen again
> on that import.
>
> 2. Release a new version of Spark that works with the newer Jackson
> bundles. - As the Spark bundle only functions correctly with Jackson
> 2.4.4 (the older version) and does not work properly with later versions
> of Jackson, we will not be able to easily release a new Spark version
> without fixing that issue.
>
> 3. Remove the DAS components from the IoT server and package them as a
> separate IoT Analytics server. - Some customers may want to run DAS
> inside IoT, and removing the DAS components from the IoT server will
> affect the user experience for a WSO2 product evaluator who wants to run
> everything in a single server.
>
> Please help to find the best approach.
>
> Thanks.
>
>
> *Maninda Edirisooriya*
> Senior Software Engineer
>
> *WSO2, Inc.*lean.enterprise.middleware.
>
> *Blog* : http://maninda.blogspot.com/
> *E-mail* : mani...@wso2.com
> *Skype* : @manindae
> *Twitter* : @maninda
>
> On Tue, Oct 4, 2016 at 5:06 PM, Ruwan Yatawara <ruw...@wso2.com> wrote:
>
>> Hi Niranda,
>>
>> Yes, this bundle is active. We found this Jackson related problem upon
>> further debugging.
>>
>> Thanks and Regards,
>>
>> Ruwan Yatawara
>>
>> Associate Technical Lead,
>> WSO2 Inc.
>>
>> email : ruw...@wso2.com
>> mobile : +94 77 9110413
>> blog : http://ruwansrants.blogspot.com/
>>   https://500px.com/ruwan_ace
>> www: :http://wso2.com
>>
>>
>> On Tue, Oct 4, 2016 at 4:49 PM, Niranda Perera <nira...@wso2.com> wrote:
>>
>>> + RuwanY
>>>
>>> @Waruna, can you check if the com.codahale.metrics.json bundle is active
>>> or not from the OSGI console?
>>>
>>> Best
>>>
>>> On Tue, Oct 4, 2016 at 4:25 AM, Waruna Jayaweera <waru...@wso2.com>
>>> wrote:
>>>
>>>> [Looping Niranda, Anjana]
>>>>
>>>> On Tue, Oct 4, 2016 at 12:15 PM, Waruna Jayaweera <waru...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>> After moving to the latest analytics version (1.2.8), we are getting
>>>>> a class-not-found error [1].
>>>>>
>>>>> This is due to package import conflicts between the Spark bundle and
>>>>> io.dropwizard.metrics.json, which import different versions of the
>>>>> Jackson packages. The IoT server packs multiple Jackson versions,
>>>>> 2.4.4 and 2.8.2. The Spark bundle has a Jackson import range of
>>>>> [2.4.0,2.5.0), so it is wired to jackson-core 2.4.4. The
>>>>> io.dropwizard.metrics.json bundle has a Jackson import range of
>>>>> [2.4,3), so it is wired to jackson-core 2.8.2.
>>>>> Spark is also required to import io.dropwizard.metrics.json, but that
>>>>> fails due to the two different versions of the Jackson packages in
>>>>> the Spark bundle's class space.
>>>>> So we need to either upgrade the Spark Jackson version range to
>>>>> [2.4,3), or downgrade the Metrics Jackson version range to
>>>>> [2.4.0,2.5.0).
>>>>> Appreciate any suggestions to fix the issue.
>>>>>
>>>>> [1]
>>>>> ERROR - AnalyticsComponent Error initializing analytics executor:
>>>>> Unable to create analytics client. com/codahale/metrics/json/MetricsModule
>>>>> org.wso2.carbon.analytics.datasource.commons.exception.AnalyticsException:
>>>>> Unable to create analytics client. com/codahale/metrics/json/MetricsModule
>>>>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkContext(SparkAnalyt

Re: [Dev] [Analytics] NoClassDefFoundError:com.codahale.metrics.json.MetricsModule when installing latest analytics features in IOT server

2016-10-04 Thread Niranda Perera
omponentProp.java:146)
>> at org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
>> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
>> at org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
>> at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
>> at org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
>> at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
>> at org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
>> at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
>> at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
>> at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
>> at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
>> at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
>> at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
>> at org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
>> at org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)
>> at org.eclipse.equinox.http.servlet.internal.Activator.addProxyServlet(Activator.java:60)
>> at org.eclipse.equinox.http.servlet.internal.ProxyServlet.init(ProxyServlet.java:40)
>> at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.init(DelegationServlet.java:38)
>> at org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1282)
>> at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1195)
>> at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1085)
>> at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5349)
>> at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5641)
>> at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
>> at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1571)
>> at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1561)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.NoClassDefFoundError: com/codahale/metrics/json/MetricsModule
>> at java.lang.Class.forName0(Native Method)
>> at java.lang.Class.forName(Class.java:274)
>> at org.apache.spark.util.Utils$.classForName(Utils.scala:175)
>> at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:190)
>> at org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:186)
>> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>> at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>> at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>> at org.apache.spark.metrics.MetricsSystem.registerSinks(MetricsSystem.scala:186)
>> at org.apache.spark.metrics.MetricsSystem.start(MetricsSystem.scala:100)
>> at org.apache.spark.SparkContext.<init>(SparkContext.scala:540)
>> at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
>> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkContext(SparkAnalyticsExecutor.java:319)
>> ... 144 more
>> Caused by: java.lang.ClassNotFoundException: com.codahale.metrics.json.MetricsModule cannot be found by spark-core_2.10_1.6.2.wso2v1
>> at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
>> at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>> at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>> at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>> ... 159 more
>>
>> --
>> Regards,
>>
>> Waruna Lakshitha Jayaweera
>> Senior Software Engineer
>> WSO2 Inc; http://wso2.com
>> phone: +94713255198
>> http://waruapz.blogspot.com/
>>
>
>
>
> --
> Regards,
>
> Waruna Lakshitha Jayaweera
> Senior Software Engineer
> WSO2 Inc; http://wso2.com
> phone: +94713255198
> http://waruapz.blogspot.com/
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] How to get a connection from the carbon datasources

2016-09-26 Thread Niranda Perera
Hi all,

I want to use a JDBC connection provided by a carbon-datasource.

I found the following blog from Kishanthan [1], which was done in 2013.

It uses the org.wso2.carbon.tomcat.jndi.CarbonJavaURLContextFactory as
follows:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

Hashtable<String, String> environment = new Hashtable<>();
environment.put("java.naming.factory.initial",
        "org.wso2.carbon.tomcat.jndi.CarbonJavaURLContextFactory");
Context initContext = new InitialContext(environment);
Object result = initContext.lookup("jdbc/MyCarbonDataSource");
if (result != null) {
    // Do your work here
} else {
    System.out.println("Cannot find MyCarbonDataSource");
}
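
For completeness, a hedged sketch of taking a connection from the looked-up
object (assuming the lookup returns a javax.sql.DataSource):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Cast the JNDI lookup result and borrow a pooled connection from it.
DataSource dataSource = (DataSource) result;
try (Connection connection = dataSource.getConnection()) {
    // run queries with the connection
} catch (SQLException e) {
    e.printStackTrace();
}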

My question is: is there a better way of doing this now (a util method,
maybe?), or is this method still applicable?

Best

[1]
https://kishanthan.wordpress.com/2013/02/11/access-cabon-data-sources-within-webapps-in-wso2-application-server/

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [IS] Using Email Address as the Username - missing config?

2016-09-22 Thread Niranda Perera
Hi,

I was trying to get email as the username for IS on a default LDAP and I
was following doc [1].

But I was getting the following error,

"Could not add user PRIMARY/d...@ddd.com. Error is: Username d...@ddd.com is
not valid. User name must be a non null string with following format,
[a-zA-Z0-9._-|//]{3,30}$"

when I was trying to create a user with the username "d...@ddd.com".

To overcome this, I had to change the "UsernameJavaRegEx" to this:
^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$
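
For reference, a sketch of how that property would look in the user store
configuration in user-mgt.xml (the property name is from the error above;
the exact placement depends on the configured user store):

<Property name="UsernameJavaRegEx">^[a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$</Property>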


I think this is intuitive to do, but isn't this a necessary config? It's
not mentioned in the docs [1].

Best

[1] https://docs.wso2.com/display/IS520/Using+Email+Address+as+the+Username

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS][310] Errors while running APIM_INCREMENTAL_PROCESSING_SCRIPT

2016-09-21 Thread Niranda Perera
t org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
> at org.apache.spark.sql.execution.datasources.CreateTempTableUsing.run(ddl.scala:92)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:760)
> ... 11 more
>
>
> [1] https://docs.wso2.com/display/AM200/Installing+WSO2+APIM+Analytics+Features
> [2] https://docs.wso2.com/display/DAS310/Incremental+Processing
>
> Regards,
> Malith
> --
> Malith Munasinghe | Software Engineer
> M: +94 (71) 9401122
> E: mali...@wso2.com
> W: http://wso2.com
> <http://wso2.com/signature>
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Architecture] WSO2 Data Analytics Server (DAS) 3.1.0 Released!

2016-09-14 Thread Niranda Perera
*WSO2 Data Analytics Server (DAS) 3.1.0 Released!*

The WSO2 Data Analytics Server development team is pleased to announce the
release of WSO2 Data Analytics Server 3.1.0.

WSO2 Data Analytics Server combines real-time, batch, interactive, and
predictive (via machine learning) analysis of data into one integrated
platform to support the multiple demands of Internet of Things (IoT)
solutions, as well as mobile and Web apps.

As a part of WSO2’s Analytics Platform, WSO2 DAS introduces a single
solution with the ability to build systems and applications that collect
and analyze both batch and realtime data to communicate results. It is
designed to process millions of events per second, and is therefore capable
of handling Big Data volumes and Internet of Things projects.

WSO2 DAS is powered by WSO2 Carbon <http://wso2.com/products/carbon/>, the
SOA middleware component platform.

An open source product, WSO2 Carbon is available under the Apache Software
License (v2.0) <http://www.apache.org/licenses/LICENSE-2.0.html>

You can download this distribution from
wso2.com/products/data-analytics-server and give it a try.


What's New In This Release

   - Integrating WSO2 Machine Learner features
   - Supporting incremental data processing
   - Improved gadget generation wizard
   - Cross-tenant support
   - Improved CarbonJDBC connector
   - Improvements for facet based aggregations
   - Supporting index based sorting
   - Supporting Spark on YARN for DAS
   - Improvements for indexing
   - Upgrading Spark to 1.6.2



Issues Fixed in This Release

   - WSO2 DAS 3.1.0 Fixed Issues
   <https://wso2.org/jira/issues/?filter=13152>

Known Issues

   - WSO2 DAS 3.1.0 Known Issues
   <https://wso2.org/jira/issues/?filter=13154>

*Source and distribution packages:*


   - http://wso2.com/products/data-analytics-server/


Please download, test, and vote. The README file under the distribution
contains a guide and instructions on how to try it out locally.
Mailing Lists

Join our mailing list and correspond with the developers directly.

   - Developer List : dev@wso2.org | Subscribe | Mail Archive
   <http://mail.wso2.org/mailarchive/dev/>

Reporting Issues

We encourage you to report issues, documentation faults and feature
requests regarding WSO2 DAS through the public DAS JIRA
<https://wso2.org/jira/browse/DAS>. You can use the Carbon JIRA
<http://www.wso2.org/jira/browse/CARBON> to report any issues related to
the Carbon base framework or associated Carbon components.
Discussion Forums

Alternatively, questions could be raised on http://stackoverflow.com
<http://stackoverflow.com/questions/tagged/wso2>.
Support

We are committed to ensuring that your enterprise middleware deployment is
completely supported from evaluation to production. Our unique approach
ensures that all support leverages our open development methodology and is
provided by the very same engineers who build the technology.

For more details and to take advantage of this unique opportunity please
visit http://wso2.com/support.
For more information about WSO2 DAS please see
wso2.com/products/data-analytics-server.


Regards,
WSO2 DAS Team


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC3

2016-09-14 Thread Niranda Perera
Hi all,

We are closing the vote now!

11 [+] Stable - Go ahead and release
0 [-] Broken - Do not release

We are now starting the release process!

Cheers

On Wed, Sep 14, 2016 at 4:35 PM, Malith Dhanushka <mal...@wso2.com> wrote:

> Tested following scenarios and found no issues
>
> - Event persistence with RDBMS
> -H2
> -MySQL
> - All the Samples
> - Activity dashboard
> - Analytics Backup tool
> - Analytics Migration tool
>
> [+] Stable - Go ahead and release
>
> On Wed, Sep 14, 2016 at 9:56 AM, Sachith Withana <sach...@wso2.com> wrote:
>
>> Tested the following scenarios,
>>
>>  - basic end-to-end scenario (publishing, persistence, realtime and
>> batch graphs)
>>  - spark console
>> - interactive analytics
>> - Analytics backup tool
>> - Analytics migration tool
>>
>> [+] Stable - Go ahead and release
>>
>> On Wed, Sep 14, 2016 at 9:42 AM, Supun Sethunga <sup...@wso2.com> wrote:
>>
>>> Tested following ML scenarios:
>>>
>>>- Creating/Deleting Datasets from CSV files / DAS Tables
>>>- Creating/Deleting Projects
>>>- Creating/Deleting Analyses
>>>- Training/Deleting/Publishing/Downloading Models.
>>>- Predicting using Models
>>>
>>> [+] Stable - Go ahead and release
>>>
>>> Regards
>>> Supun
>>>
>>> On Wed, Sep 14, 2016 at 2:52 AM, Anjana Fernando <anj...@wso2.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I've tested the following:-
>>>>
>>>> * Basic Indexing/Searching Operations
>>>> * Spark SQL scheduled query execution in background/foreground
>>>> * Activity Monitoring with multiple streams
>>>> * Samples
>>>>   - Wikipedia
>>>>   - SmartHome
>>>>
>>>> [+] Stable - Go ahead and release
>>>>
>>>> Cheers,
>>>> Anjana.
>>>>
>>>> On Fri, Sep 9, 2016 at 4:54 PM, Niranda Perera <nira...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Devs,
>>>>>
>>>>> This is the third release candidate (RC3) of the WSO2 Data Analytics
>>>>> Server 3.1.0 release.
>>>>>
>>>>> New / Improvements In This Release after RC2
>>>>>
>>>>>- Bug fixes in ML integration
>>>>>- Spark configuration parameters for long running jobs
>>>>>
>>>>> Issues Fixed in This Release
>>>>>
>>>>>    - WSO2 DAS 3.1.0 Fixed Issues
>>>>><https://wso2.org/jira/issues/?filter=13152>
>>>>>
>>>>> Known Issues
>>>>>
>>>>>- WSO2 DAS 3.1.0 Known Issues
>>>>><https://wso2.org/jira/issues/?filter=13154>
>>>>>
>>>>> Source and distribution packages:
>>>>>
>>>>>- https://github.com/wso2/product-das/releases/tag/v3.1.0-RC3
>>>>>
>>>>> Please download, test, and vote. The README file under the
>>>>> distribution contains a guide and instructions on how to try it out locally.
>>>>>
>>>>> [+] Stable - Go ahead and release
>>>>> [-] Broken - Do not release (explain why)
>>>>>
>>>>> This vote will be open for 72 hours or as needed.
>>>>>
>>>>> Regards,
>>>>> WSO2 DAS Team
>>>>>
>>>>> --
>>>>> *Niranda Perera*
>>>>> Software Engineer, WSO2 Inc.
>>>>> Mobile: +94-71-554-8430
>>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>>> https://pythagoreanscript.wordpress.com/
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Anjana Fernando*
>>>> Associate Director / Architect
>>>> WSO2 Inc. | http://wso2.com
>>>> lean . enterprise . middleware
>>>>
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>> *Supun Sethunga*
>>> Senior Software Engineer
>>> WSO2, Inc.
>>> http://wso2.com/
>>> lean | enterprise | middleware
>>> Mobile : +94 716546324
>>> Blog: http://supunsetunga.blogspot.com
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Sachith Withana
>> Software Engineer; WSO2 Inc.; http://wso2.com
>> E-mail: sachith AT wso2.com
>> M: +94715518127
>> Linked-In: https://lk.linkedin.com/in/sachithwithana
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Malith Dhanushka
> Senior Software Engineer - Data Technologies
> *WSO2, Inc. : wso2.com <http://wso2.com/>*
> *Mobile*  : +94 716 506 693
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC3

2016-09-12 Thread Niranda Perera
Hi all,

I have tested the following and found no issues:
- The usual DAS flow: data publishing, script execution, and dashboards in
local mode using the Smart Home sample
- DAS HA and failover scenarios, with a 3-node cluster test
- Spark on YARN in DAS clustering

Hence,
+1 Stable - Go ahead and release

Best

On Fri, Sep 9, 2016 at 4:54 PM, Niranda Perera <nira...@wso2.com> wrote:

> Hi Devs,
>
> This is the third release candidate (RC3) of the WSO2 Data Analytics
> Server 3.1.0 release.
>
> New / Improvements In This Release after RC2
>
>- Bug fixes in ML integration
>- Spark configuration parameters for long running jobs
>
> Issues Fixed in This Release
>
>- WSO2 DAS 3.1.0 Fixed Issues
><https://wso2.org/jira/issues/?filter=13152>
>
> Known Issues
>
>- WSO2 DAS 3.1.0 Known Issues
><https://wso2.org/jira/issues/?filter=13154>
>
> Source and distribution packages:
>
>- https://github.com/wso2/product-das/releases/tag/v3.1.0-RC3
>
> Please download, test, and vote. The README file under the distribution
> contains a guide and instructions on how to try it out locally.
>
> [+] Stable - Go ahead and release
> [-] Broken - Do not release (explain why)
>
> This vote will be open for 72 hours or as needed.
>
> Regards,
> WSO2 DAS Team
>
> --
> *Niranda Perera*
> Software Engineer, WSO2 Inc.
> Mobile: +94-71-554-8430
> Twitter: @n1r44 <https://twitter.com/N1R44>
> https://pythagoreanscript.wordpress.com/
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC3

2016-09-09 Thread Niranda Perera
Hi Devs,

This is the third release candidate (RC3) of the WSO2 Data Analytics Server
3.1.0 release.

New / Improvements In This Release after RC2

   - Bug fixes in ML integration
   - Spark configuration parameters for long running jobs

Issues Fixed in This Release

   - WSO2 DAS 3.1.0 Fixed Issues
   <https://wso2.org/jira/issues/?filter=13152>

Known Issues

   - WSO2 DAS 3.1.0 Known Issues
   <https://wso2.org/jira/issues/?filter=13154>

Source and distribution packages:

   - https://github.com/wso2/product-das/releases/tag/v3.1.0-RC3

Please download, test, and vote. The README file under the distribution
contains a guide and instructions on how to try it out locally.

[+] Stable - Go ahead and release
[-] Broken - Do not release (explain why)

This vote will be open for 72 hours or as needed.

Regards,
WSO2 DAS Team

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC2

2016-09-08 Thread Niranda Perera
Hi all,

After further testing, we found some bugs in the ML integration.
Additionally, there are some configuration parameters which need to be set
for long-running Spark queries, and these are not present by default in the
configuration file.
Hence,
-1 Broken - Do not release

We will release a RC3 with these issues fixed.

Best



On Tue, Sep 6, 2016 at 8:18 PM, Gokul Balakrishnan <go...@wso2.com> wrote:

> I have tested the following through the EC2 performance test round:
>
> - Clustered DAS deployment
> - Analyzer, Indexer and Receiver profiles
> - Data persistence with HBase
> - Spark script execution in clustered mode (CarbonAnalytics)
> - Clustered indexing operation
> - Data explorer
> - Basic data publishing
> - Smart Home and Wikipedia samples
>
> [+] Stable - Go ahead and release.
>
>
> On Tuesday, 6 September 2016, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Devs,
>>
>> This is the second release candidate (RC2) of the WSO2 Data Analytics
>> Server 3.1.0 release.
>>
>> New / Improvements In This Release after RC1
>>
>>- Upgrading Spark to 1.6.2
>>- Improvements for indexing
>>
>> Issues Fixed in This Release
>>
>>- WSO2 DAS 3.1.0 Fixed Issues
>><https://wso2.org/jira/issues/?filter=13152>
>>
>> Known Issues
>>
>>- WSO2 DAS 3.1.0 Known Issues
>><https://wso2.org/jira/issues/?filter=13154>
>>
>> Source and distribution packages:
>>
>>- https://github.com/wso2/product-das/releases/tag/v3.1.0-RC2
>>
>> Please download, test, and vote. The README file under the distribution
>> contains a guide and instructions on how to try it out locally.
>>
>> [+] Stable - Go ahead and release
>> [-] Broken - Do not release (explain why)
>>
>> This vote will be open for 72 hours or as needed.
>>
>> Regards,
>> WSO2 DAS Team
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>
>
> --
> Gokul Balakrishnan
> Senior Software Engineer,
> WSO2, Inc. http://wso2.com
> M +94 77 5935 789 | +44 7563 570502
>
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC2

2016-09-06 Thread Niranda Perera
Hi Devs,

This is the second release candidate (RC2) of the WSO2 Data Analytics
Server 3.1.0 release.

New / Improvements In This Release after RC1

   - Upgrading Spark to 1.6.2
   - Improvements for indexing

Issues Fixed in This Release

   - WSO2 DAS 3.1.0 Fixed Issues
   <https://wso2.org/jira/issues/?filter=13152>

Known Issues

   - WSO2 DAS 3.1.0 Known Issues
   <https://wso2.org/jira/issues/?filter=13154>

Source and distribution packages:

   - https://github.com/wso2/product-das/releases/tag/v3.1.0-RC2

Please download, test, and vote. The README file under the distribution
contains a guide and instructions on how to try it out locally.

[+] Stable - Go ahead and release
[-] Broken - Do not release (explain why)

This vote will be open for 72 hours or as needed.

Regards,
WSO2 DAS Team

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] NoSuchAlgorithmException when starting DAS using IBM JDK

2016-08-11 Thread Niranda Perera
Hi all,

When I start DAS using the IBM JDK, I get the following exception [1].

Upon further investigation, I found [2] that the following change needs to
be made:

"Please set the following property[1] in
"Owasp.CsrfGuard.Carbon.properties" file
(SERVER_HOME/repository/conf/security) which is default set to [2].
[1] - org.owasp.csrfguard.PRNG.Provider=IBMJCE
[2] - org.owasp.csrfguard.PRNG.Provider=SUN"

It actually resolved my issue. My question is: is this a platform-wide
configuration? If so, I suggest we add it to all the docs.

Best


[1]
ERROR {org.apache.catalina.core.StandardContext} -  Exception sending
context initialized event to listener instance of class
org.owasp.csrfguard.CsrfGuardServletContextListener
java.lang.RuntimeException: java.lang.RuntimeException:
java.security.NoSuchAlgorithmException: no such algorithm: SHA1PRNG for
provider SUN
at
org.owasp.csrfguard.config.PropertiesConfigurationProviderFactory.retrieveConfiguration(PropertiesConfigurationProviderFactory.java:34)
at
org.owasp.csrfguard.config.overlay.ConfigurationAutodetectProviderFactory.retrieveConfiguration(ConfigurationAutodetectProviderFactory.java:73)
at org.owasp.csrfguard.CsrfGuard.retrieveNewConfig(CsrfGuard.java:112)
at org.owasp.csrfguard.CsrfGuard.config(CsrfGuard.java:86)
at org.owasp.csrfguard.CsrfGuard.isPrintConfig(CsrfGuard.java:685)
at
org.owasp.csrfguard.CsrfGuardServletContextListener.printConfigIfConfigured(CsrfGuardServletContextListener.java:97)
at
org.owasp.csrfguard.CsrfGuardServletContextListener.contextInitialized(CsrfGuardServletContextListener.java:86)
at
org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5068)
at
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5584)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at
org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1572)
at
org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1562)
at java.util.concurrent.FutureTask.run(FutureTask.java:267)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:618)
at java.lang.Thread.run(Thread.java:785)
Caused by: java.lang.RuntimeException:
java.security.NoSuchAlgorithmException: no such algorithm: SHA1PRNG for
provider SUN
at
org.owasp.csrfguard.config.PropertiesConfigurationProvider.<init>(PropertiesConfigurationProvider.java:234)
at
org.owasp.csrfguard.config.PropertiesConfigurationProviderFactory.retrieveConfiguration(PropertiesConfigurationProviderFactory.java:32)
... 15 more
Caused by: java.security.NoSuchAlgorithmException: no such algorithm:
SHA1PRNG for provider SUN
at sun.security.jca.GetInstance.getService(GetInstance.java:99)
at sun.security.jca.GetInstance.getInstance(GetInstance.java:218)
at java.security.SecureRandom.getInstance(SecureRandom.java:342)
at
org.owasp.csrfguard.config.PropertiesConfigurationProvider.<init>(PropertiesConfigurationProvider.java:121)
... 16 more

[2] https://wso2.org/jira/browse/ESBJAVA-4772

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Using a filter in event receivers to validate incoming data stream

2016-08-10 Thread Niranda Perera
Hi Dinesh,

Yes, I also agree with Sajith. We can do it in an execution plan and then
persist the resulting stream.
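
A minimal sketch of such an execution plan (the stream definitions and the
date pattern here are hypothetical; the regex extension usage is per [1] in
Sajith's reply):

define stream inStream (id string, eventDate string);

from inStream[regex:matches('\d{4}-\d{2}-\d{2}', eventDate)]
select id, eventDate
insert into validStream;

Events whose eventDate does not match the pattern are simply dropped, and
validStream can then be persisted.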

Best

On Thu, Aug 11, 2016 at 12:13 AM, Dinesh J Weerakkody <dine...@wso2.com>
wrote:

> Hi Sajith,
> Thanks for quick response. I'll try this.
>
> Thanks
>
> *Dinesh J. Weerakkody*
> Senior Software Engineer
> WSO2 Inc.
> lean | enterprise | middleware
> M : +94 727 868676 | E : dine...@wso2.com | W : www.wso2.com
>
> On Wed, Aug 10, 2016 at 2:19 PM, Sajith Ravindra <saji...@wso2.com> wrote:
>
>> Hi Dinesh,
>>
>> I'm afraid you can't do that in the event receiver itself. Event
>> receivers will pass all the events for processing without doing any
>> filtering.
>>
>> You can do this with a simple execution plan. Please refer to [1] for
>> the regex extension for Siddhi.
>>
>> [1] - https://docs.wso2.com/display/CEP420/Siddhi+Extensions#SiddhiExtensions-regexregex
>>
>> Thanks
>> *,Sajith Ravindra*
>> Senior Software Engineer
>> WSO2 Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> mobile: +94 77 2273550
>> blog: http://sajithr.blogspot.com/
>> <http://lk.linkedin.com/pub/shani-ranasinghe/34/111/ab>
>>
>> On Wed, Aug 10, 2016 at 11:30 PM, Dinesh J Weerakkody <dine...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Is it possible to validate the incoming data stream values in DAS? For
>>> example, let's say I have a stream which has a date as an attribute. I
>>> need to validate the date format with a regex and ignore the event if
>>> the format doesn't match. Can we do something like this in WSO2 DAS
>>> event receivers?
>>>
>>> Thanks
>>>
>>> *Dinesh J. Weerakkody*
>>> Senior Software Engineer
>>> WSO2 Inc.
>>> lean | enterprise | middleware
>>> M : +94 727 868676 | E : dine...@wso2.com | W : www.wso2.com
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC1

2016-08-01 Thread Niranda Perera
Hi all,

[-] Broken - Do not release (explain why)

We found several issues while doing further testing.

Hence, calling off the vote.


On Mon, Aug 1, 2016 at 12:04 PM, Niranda Perera <nira...@wso2.com> wrote:

> Hi Iranga,
>
> ML features are fully integrated into DAS now, so it is not necessary to
> install any additional features to use the ML functionality.
>
> ESB analytics is still not released. I think @Nirmal should be able to
> provide a p2 repo for APIM analytics.
>
> Best
>
> On Mon, Aug 1, 2016 at 11:54 AM, Iranga Muthuthanthri <ira...@wso2.com>
> wrote:
>
>> Would it be possible to have a p2-repo to test the following
>>
>> [i] Install ML features
>> [ii] Install product analytics [ESB, API-M]
>>
>>
>> Added a doc jira to keep track [1]
>>
>> [1]https://wso2.org/jira/browse/DOCUMENTATION-3699
>>
>>
>> On Sat, Jul 30, 2016 at 3:31 AM, Niranda Perera <nira...@wso2.com> wrote:
>>
>>> Hi Devs,
>>>
>>> This is the first release candidate (RC1) of WSO2 Data Analytics Server
>>> 3.1.0 release.
>>>
>>>
>>> New / Improvements In This Release
>>>
>>>- Integrating WSO2 Machine Learner features
>>>- Supporting incremental data processing
>>>- Improved gadget generation wizard
>>>- Cross-tenant support
>>>- Improved CarbonJDBC connector
>>>- Improvements for facet based aggregations
>>>- Supporting index based sorting
>>>- Supporting Spark on YARN for DAS
>>>- Improvements for indexing
>>>
>>>
>>>
>>> Issues Fixed in This Release
>>>
>>>- WSO2 DAS 3.1.0 Fixed Issues
>>><https://wso2.org/jira/issues/?filter=13152>
>>>
>>> Known Issues
>>>
>>>- WSO2 DAS 3.1.0 Known Issues
>>><https://wso2.org/jira/issues/?filter=13154>
>>>
>>> *Source and distribution packages:*
>>>
>>>
>>>- https://github.com/wso2/product-das/releases/tag/v3.1.0-RC1
>>><https://github.com/wso2/product-das/releases/tag/v3.1.0-RC1>
>>>
>>>
>>>
>>> Please download, test, and vote. The README file under the distribution
>>> contains a guide and instructions on how to try it out locally.
>>>
>>> [+] Stable - Go ahead and release
>>>
>>> [-] Broken - Do not release (explain why)
>>>
>>>
>>> This vote will be open for 72 hours or as needed.
>>>
>>> Regards,
>>> WSO2 DAS Team
>>>
>>> --
>>> *Niranda Perera*
>>> Software Engineer, WSO2 Inc.
>>> Mobile: +94-71-554-8430
>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>> https://pythagoreanscript.wordpress.com/
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Thanks & Regards
>>
>> Iranga Muthuthanthri
>> (M) -0777-255773
>> Team Product Management
>>
>>
>
>
> --
> *Niranda Perera*
> Software Engineer, WSO2 Inc.
> Mobile: +94-71-554-8430
> Twitter: @n1r44 <https://twitter.com/N1R44>
> https://pythagoreanscript.wordpress.com/
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC1

2016-08-01 Thread Niranda Perera
Hi Iranga,

ML features are fully integrated into DAS now, so it is not necessary to
install any additional features to use the ML functionality.

ESB analytics is still not released. I think @Nirmal should be able to
provide a p2 repo for APIM analytics.

Best

On Mon, Aug 1, 2016 at 11:54 AM, Iranga Muthuthanthri <ira...@wso2.com>
wrote:

> Would it be possible to have a p2-repo to test the following
>
> [i] Install ML features
> [ii] Install product analytics [ESB, API-M]
>
>
> Added a doc jira to keep track [1]
>
> [1]https://wso2.org/jira/browse/DOCUMENTATION-3699
>
>
> On Sat, Jul 30, 2016 at 3:31 AM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Devs,
>>
>> This is the first release candidate (RC1) of WSO2 Data Analytics Server
>> 3.1.0 release.
>>
>>
>> New / Improvements In This Release
>>
>>- Integrating WSO2 Machine Learner features
>>- Supporting incremental data processing
>>- Improved gadget generation wizard
>>- Cross-tenant support
>>- Improved CarbonJDBC connector
>>- Improvements for facet based aggregations
>>- Supporting index based sorting
>>- Supporting Spark on YARN for DAS
>>- Improvements for indexing
>>
>>
>>
>> Issues Fixed in This Release
>>
>>- WSO2 DAS 3.1.0 Fixed Issues
>><https://wso2.org/jira/issues/?filter=13152>
>>
>> Known Issues
>>
>>- WSO2 DAS 3.1.0 Known Issues
>><https://wso2.org/jira/issues/?filter=13154>
>>
>> *Source and distribution packages:*
>>
>>
>>- https://github.com/wso2/product-das/releases/tag/v3.1.0-RC1
>><https://github.com/wso2/product-das/releases/tag/v3.1.0-RC1>
>>
>>
>>
>> Please download, test, and vote. The README file under the distribution
>> contains a guide and instructions on how to try it out locally.
>>
>> [+] Stable - Go ahead and release
>>
>> [-] Broken - Do not release (explain why)
>>
>>
>> This vote will be open for 72 hours or as needed.
>>
>> Regards,
>> WSO2 DAS Team
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Thanks & Regards
>
> Iranga Muthuthanthri
> (M) -0777-255773
> Team Product Management
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Vote] Release WSO2 Data Analytics Server (DAS) 3.1.0-RC1

2016-07-29 Thread Niranda Perera
Hi Devs,

This is the first release candidate (RC1) of WSO2 Data Analytics Server
3.1.0 release.


New / Improvements In This Release

   - Integrating WSO2 Machine Learner features
   - Supporting incremental data processing
   - Improved gadget generation wizard
   - Cross-tenant support
   - Improved CarbonJDBC connector
   - Improvements for facet based aggregations
   - Supporting index based sorting
   - Supporting Spark on YARN for DAS
   - Improvements for indexing



Issues Fixed in This Release

   - WSO2 DAS 3.1.0 Fixed Issues
   <https://wso2.org/jira/issues/?filter=13152>

Known Issues

   - WSO2 DAS 3.1.0 Known Issues
   <https://wso2.org/jira/issues/?filter=13154>

*Source and distribution packages:*


   - https://github.com/wso2/product-das/releases/tag/v3.1.0-RC1
   <https://github.com/wso2/product-das/releases/tag/v3.1.0-RC1>



Please download, test, and vote. The README file under the distribution
contains a guide and instructions on how to try it out locally.

[+] Stable - Go ahead and release

[-] Broken - Do not release (explain why)


This vote will be open for 72 hours or as needed.

Regards,
WSO2 DAS Team

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [ML] Making the server port configurable in the ML samples

2016-07-26 Thread Niranda Perera
Hi Nirmal,

When I run the ML samples, I found that SERVER_PORT is not configurable in
the current implementation [1]. Is there any way we could adjust the port
offset in the samples?

If not, I have created a PR [2] to make it configurable, following the same
approach as for SERVER_PORT. Please review and merge it. :-)

Best

[1]
https://github.com/wso2/product-ml/blob/master/modules/samples/server.conf
[2] https://github.com/wso2/product-ml/pull/307
-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Adding YARN configuration files in analytics4x distributions

2016-07-22 Thread Niranda Perera
Hi all,

With the YARN support for DAS, the following config files were added to the
Spark server feature [1].

Please add those files to the relevant bin.xml file in the product
analytics4x distribution to maintain consistency [2]; a sketch follows the
links below.

Best

[1]
https://github.com/wso2/carbon-analytics/tree/master/features/analytics-processors/org.wso2.carbon.analytics.spark.server.feature/resources/yarn
[2]
https://github.com/wso2/product-das/commit/df328c73ead9ca11b4c3e3c91fbc8c95ea70620b
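
For reference, a hypothetical sketch of the kind of <fileSet> entry meant
here (the actual source and output paths should be taken from the commit in
[2]):

<fileSet>
    <directory>path/to/org.wso2.carbon.analytics.spark.server.feature/resources/yarn</directory>
    <outputDirectory>${pom.artifactId}-${pom.version}/repository/conf/analytics/spark/yarn</outputDirectory>
</fileSet>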

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Mesos-Marathon] Issues in Creating Point to Point TCP Connections in Mesos-Marathon based Deployments

2016-07-18 Thread Niranda Perera
HA/Distributed setup [3]. For this, we can't just bind to
>>>> the host node IP since the public IP is not visible to the container; and
>>>> can't use the local container IP since container to container communication
>>>> is not possible (due to the lack of an overlay network).
>>>>
>>>> One possible option in such scenarios is to use the HOST networking
>>>> mode of Docker [4]. Then the container will be using the host IP and ports
>>>> directly. One issue with this approach is since we are mapping container
>>>> ports to host ports directly without assigning random ports from the host
>>>> side, only one container can run in one host node. But, in a way this
>>>> defeats the purpose of containerization.
>>>>
>>> Using host mode would be a good alternative. I don't think limiting to
>>> one container per Mesos node (the Mesos HOST=UNIQUE constraint) is a
>>> problem in this scenario, since an HA Mesos deployment will have
>>> multiple Mesos nodes.
>>> Another advantage of running in host mode is better network performance
>>> compared to bridge mode. Network efficiency would be useful in the
>>> Thrift scenario, since it is assumed to be fast.
>>>
>> Agreed. However, IMHO we should consider this as the last option if
>> nothing else works out.
>>
>>>
>>>> Please share your thoughts on this.
>>>>
>>>> [1]. https://github.com/mesosphere/marathon
>>>>
>>>> [2].
>>>> https://github.com/wso2/mesos-artifacts/tree/master/common/mesos-membership-scheme
>>>>
>>>> [3].
>>>> https://docs.wso2.com/display/CLUSTER44x/Clustering+CEP+4.0.0#ClusteringCEP4.0.0-Highavailabilitydeployment
>>>>
>>>> [4]. http://www.dasblinkenlichten.com/docker-networking-101-host-mode/
>>>>
>>>>
>>>> --
>>>> Thanks and Regards,
>>>>
>>>> Isuru H.
>>>> +94 716 358 048
>>>>
>>>>
>>>>
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Udara Liyanage
>>> Software Engineer
>>> WSO2, Inc.: http://wso2.com
>>> lean. enterprise. middleware
>>>
>>> web: http://udaraliyanage.wordpress.com
>>> phone: +94 71 443 6897
>>>
>>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048* <http://wso2.com/>*
>>
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* <http://wso2.com/>*
>
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Setting akka Configuration Properties

2016-07-18 Thread Niranda Perera
Hi Isuru,

I checked the Spark docs; this is not publicly documented, but if you check
[1], akka settings can be set in the Spark conf. So, you should be able to
set those configs in the spark-defaults.conf file in the
repository/conf/analytics/spark directory.

Best

[1]
https://github.com/apache/spark/blob/branch-1.4/core/src/main/scala/org/apache/spark/SparkConf.scala#L320
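
For example, a hypothetical spark-defaults.conf entry (per the code in [1],
conf entries whose names start with "akka." are passed through to the actor
system; the exact akka keys to use depend on the akka version Spark ships
with):

akka.remote.netty.tcp.hostname    <advertised-ip>
akka.remote.netty.tcp.port        <port>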

On Mon, Jul 18, 2016 at 12:48 AM, Isuru Haththotuwa <isu...@wso2.com> wrote:

> Hi all,
>
> When deploying a DAS 3.0.1 minimum HA setup in a containerized environment,
> it's required for the akka local actor to bind to an IP from one network
> interface and advertise another. This is possible via the configurations
> [1]. I noted that we do not have a config file for the akka RPC system in
> DAS. Is it possible to set these properties in DAS?
>
> [1]. http://doc.akka.io/docs/akka/snapshot/additional/faq.html
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* <http://wso2.com/>*
>
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Mesos artifacts] Deploying wso2das 3.x.x on Mesos

2016-07-06 Thread Niranda Perera
Hi Akila,

Yes, it is possible to use a custom member discovery scheme for Spark. In
fact, that is what we are using at the moment. We are using Hazelcast for
membership management (using WKA).
We are extending the following extension points in Spark [1] to support
this.
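
For reference, a custom implementation is plugged in through the Spark
conf roughly as follows (the factory class name below is only illustrative
of the carbon-analytics implementation):

spark.deploy.recoveryMode          CUSTOM
spark.deploy.recoveryMode.factory  org.wso2.carbon.analytics.spark.core.deploy.AnalyticsRecoveryModeFactory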

I agree with Imesh. We still have not extensively tested connecting DAS to
an external Mesos Spark cluster. But it is on the roadmap and we are
working on it.

Best

[1]
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/master/RecoveryModeFactory.scala
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/master/PersistenceEngine.scala



On Wed, Jul 6, 2016 at 11:36 AM, Imesh Gunaratne <im...@wso2.com> wrote:

> Hi Akila,
>
> IMO it would be better to use the Spark Mesos Framework on Mesos DC/OS for
> production deployments rather than trying to cluster the built-in Spark
> module on Mesos. Then we might not need to worry much about this problem.
>
> Non-production environments such as Dev should be able to use a
> standalone DAS server with a volume mount with auto healing. WDYT?
>
> Thanks
>
> On Wed, Jul 6, 2016 at 9:38 AM, Akila Ravihansa Perera <raviha...@wso2.com
> > wrote:
>
>> [Looping Anjana and Isuru]
>>
>> @Anjana, Niranda: is it possible to plugin a custom member discovery
>> scheme for Spark? In the current implementation how does a wso2das with
>> embedded Spark instance discover the other members?
>> I'm referring to Spark cluster mode deployment here.
>>
>> Thanks.
>>
>> On Wed, Jul 6, 2016 at 6:49 AM, Nirmal Fernando <nir...@wso2.com> wrote:
>>
>>> [Looping Niranda]
>>>
>>> On Tue, Jul 5, 2016 at 10:22 PM, Akila Ravihansa Perera <
>>> raviha...@wso2.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Deploying wso2das in Mesos is bit tricky due to lack of overlay network
>>>> support in Mesos OOTB. This is mainly because Spark cluster (when running
>>>> in Spark cluster mode) is unable to communicate through Docker IP addresses
>>>> when they are assigned to different host machines in a multi-node Mesos
>>>> deployment. I checked the config options in [1] and tried setting
>>>> SPARK_PUBLIC_DNS parameter to Mesos host IP without any success.
>>>>
>>>> The main problem is there is no way to instruct Spark members to bind
>>>> to the local Docker IP and advertise a different IP (Mesos slave host IP)
>>>> to other members.
>>>>
>>>> @Niranda, Nirmal: is this something we can fix from our side? AFAIU, we
>>>> are using Hazelcast to discover Spark/wso2das members and adding them to
>>>> Spark context, right?
>>>>
>>>> On a side note, there is a Wiki page explaining how Spark should be
>>>> used with Mesos in [2]. This is available after Spark 1.2.0. In this
>>>> approach, we treat each Mesos slave as a Spark member and Spark/Mesos
>>>> driver can directly schedule tasks on Mesos slaves instead of running Spark
>>>> itself as a container. We should consider this approach in our C5 based
>>>> efforts. We can leverage Kubernetes in the same way.
>>>>
>>>> IMO, we should recommend users to use wso2das in Spark client mode on
>>>> Mesos due to these complexities.
>>>>
>>>
>>> +1, if we can't solve above from within DAS.
>>>
>>>
>>>> There is a DCOS Mesos framework for Spark [3] which can be used to
>>>> deploy Spark on Mesos very easily. We can even leverage DCOS Spark
>>>> framework in our deploy scripts.
>>>>
>>>> [1] http://spark.apache.org/docs/latest/spark-standalone.html
>>>> [2] http://spark.apache.org/docs/latest/running-on-mesos.html
>>>> [3] https://docs.mesosphere.com/1.7/usage/service-guides/spark/
>>>>
>>>> Thanks.
>>>>
>>>> --
>>>> Akila Ravihansa Perera
>>>> WSO2 Inc.;  http://wso2.com/
>>>>
>>>> Blog: http://ravihansa3000.blogspot.com
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> Thanks & regards,
>>> Nirmal
>>>
>>> Team Lead - WSO2 Machine Learner
>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>> Mobile: +94715779733
>>> Blog: http://nirmalfdo.blogspot.com/
>>>
>>>
>>>
>>
>>
>> --
>> Akila Ravihansa Perera
>> WSO2 Inc.;  http://wso2.com/
>>
>> Blog: http://ravihansa3000.blogspot.com
>>
>
>
>
> --
> *Imesh Gunaratne*
> Software Architect
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: https://medium.com/@imesh TW: @imesh
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Upgrading the guava version of carbon-commons

2016-06-22 Thread Niranda Perera
Hi Rasika,

I'm afraid we exhausted all of the OSGi options before coming to this
conclusion. We are unable to limit this with OSGi version ranges, because
the tomcat bundle has dynamic imports by default; therefore, we cannot
predict which version the tomcat bundle will get wired to.

Hence we have decided to remove all the other versions of guava and bump
all the dependencies to the latest.

Best

On Wed, Jun 22, 2016 at 11:00 AM, Rasika Perera <rasi...@wso2.com> wrote:

> Hi Niranda,
>
> So, it was decided to bring all the guava versions to a common version,
>> platform wide
>
> IMO making a library version a platform-wide common version is not
> practical. Can't we use OSGi version ranges to solve this problem?
>
> <Import-Package>com.google.common.*;version="[13.0, 19)"</Import-Package>
>
> I have created a JIRA to upgrade the guava versions in carbon-commons from
>> 13.0.1 to v19 which is the latest.
>
> ​+1 for upgrading into latest.
>
> Thanks​
>
> On Tue, Jun 21, 2016 at 5:40 PM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi all,
>>
>> In the current carbon server runtime, there are multiple guava versions.
>> But this creates some issues, especially when a guava library is
>> accessed in webapps (directly/indirectly) [1], [2]
>>
>> So, it was decided to bring all the guava versions to a common version,
>> platform wide. I have created a JIRA to upgrade the guava versions in
>> carbon-commons from 13.0.1 to v19 which is the latest. [3]. Please find the
>> PR here [4]
>>
>> Could you please review the PR and merge it?
>>
>> Best
>>
>> [1] [Dev] [OSGI] 'Package uses conflict' when using multiple versions of
>> the same bundle
>> [2] [Dev] Integrating ML features in DAS
>> [3] https://wso2.org/jira/browse/CCOMMONS-17
>> [4] https://github.com/wso2/carbon-commons/pull/227
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> With Regards,
>
> *Rasika Perera*
> Software Engineer
> M: +94 71 680 9060 E: rasi...@wso2.com
> LinkedIn: http://lk.linkedin.com/in/rasika90
>
> WSO2 Inc. www.wso2.com
> lean.enterprise.middleware
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Upgrading the guava version of carbon-commons

2016-06-21 Thread Niranda Perera
Hi all,

In the current carbon server runtime, there are multiple guava versions.
But this creates some issues, especially when a guava library is
accessed in webapps (directly/indirectly) [1], [2]

So, it was decided to bring all the guava versions to a common version,
platform wide. I have created a JIRA to upgrade the guava versions in
carbon-commons from 13.0.1 to v19 which is the latest. [3]. Please find the
PR here [4]

Could you please review the PR and merge it?

Best

[1] [Dev] [OSGI] 'Package uses conflict' when using multiple versions of
the same bundle
[2] [Dev] Integrating ML features in DAS
[3] https://wso2.org/jira/browse/CCOMMONS-17
[4] https://github.com/wso2/carbon-commons/pull/227

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Time series data summarization

2016-05-20 Thread Niranda Perera
Hi Chan,

Agree with the above comments. Additionally, you can refer to the log
analyzer scenarios as well, because I feel like their use case is somewhat
similar to yours.
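
As a rough sketch of the summarization approach Srinath suggests below
(the table and column names here are hypothetical):

create temporary table Events using CarbonAnalytics
  options (tableName "USER_EVENTS", schema "userId STRING, eventYear INT");

create temporary table YearlySummary using CarbonAnalytics
  options (tableName "USER_EVENTS_YEARLY", schema "eventYear INT, uniqueUsers LONG");

INSERT OVERWRITE TABLE YearlySummary
  SELECT eventYear, COUNT(DISTINCT userId) AS uniqueUsers
  FROM Events GROUP BY eventYear;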

Best

On Fri, May 20, 2016 at 9:02 AM, Sachith Withana <sach...@wso2.com> wrote:

> To add to what Srinath said.
> If you enable indexing for that summary table, you'll be able to get the
> data through the REST API as well, using Lucene queries.
>
> Regards,
> Sachith
>
> On Fri, May 20, 2016 at 7:30 AM, Srinath Perera <srin...@wso2.com> wrote:
>
>> Schedule a Spark query that aggregates the data per year and writes to a
>> new table, and you can query from that table?
>>
>> On Fri, May 20, 2016 at 12:06 AM, Dulitha Wijewantha <duli...@wso2.com>
>> wrote:
>>
>>> Hi guys,
>>> I have a bunch of data streams that have time stamps (like event logs) -
>>> currently in DAS 3.0.1 - is there a way to efficiently query these?
>>>
>>> Right now I am summarizing them based on string manipulation to the
>>> timestamp. An example query I have is -
>>>
>>> 1) How many unique user events do we have in the last year.
>>>
>>> Cheers~
>>>
>>> --
>>> Dulitha Wijewantha (Chan)
>>> Software Engineer - Mobile Development
>>> WSO2 Inc
>>> Lean.Enterprise.Middleware
>>>  * ~Email   duli...@wso2.com <duli...@wso2mobile.com>*
>>> *  ~Mobile +94712112165 <%2B94712112165>*
>>> *  ~Website   dulitha.me <http://dulitha.me>*
>>> *  ~Twitter @dulitharw <https://twitter.com/dulitharw>*
>>>   *~Github @dulichan <https://github.com/dulichan>*
>>>   *~SO @chan <http://stackoverflow.com/users/813471/chan>*
>>>
>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Sachith Withana
> Software Engineer; WSO2 Inc.; http://wso2.com
> E-mail: sachith AT wso2.com
> M: +94715518127
> Linked-In: <http://goog_416592669>
> https://lk.linkedin.com/in/sachithwithana
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] If you have a custom UDF, make sure you add the path to jar in external-spark-classpath.conf to get it work on HA

2016-04-13 Thread Niranda Perera
Correct. As of now, we need to do this in the clustered setup. We'll
provide a more user-friendly approach later on.

Best
On Apr 13, 2016 09:00, "Nirmal Fernando"  wrote:

> Hi All,
>
> Just want to pass this information, if you are not aware already.
>
> If you have a Jar with custom UDFs, make sure you add the relative path to
> jars in repository/conf/analytics/spark/external-spark-classpath.conf file
> in order for the class to be used properly in a HAed environment.
>
> Sample:
>
> # ---------------------------------------------------------------
> # ADD ADDITIONAL JARS TO THE SPARK CLASSPATH
> # ---------------------------------------------------------------
> # Use this config file to add additional jars to the $SPARK_CLASSPATH
> # system variable.
> # Specify the location of the jar separated by a new line.
>
> repository/components/plugins/org.wso2.carbon.analytics.apim.spark_1.0.0.SNAPSHOT.jar
>
> --
>
> Thanks & regards,
> Nirmal
>
> Team Lead - WSO2 Machine Learner
> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
> Mobile: +94715779733
> Blog: http://nirmalfdo.blogspot.com/
>
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Adding a relative path in external-spark-classpath.conf doesn't work

2016-04-12 Thread Niranda Perera
Hi Nirmal,

Thanks for pointing this out. I will merge the PR.
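
For reference, a minimal sketch of the fix as I understand it (not the
exact PR diff):

private static boolean fileExists(String path) {
    File tempFile = new File(path);
    // Only treat the path as resolved if it is absolute; Spark needs
    // absolute classpath entries, so relative paths fall through to the
    // carbonHome-prefixed branch instead.
    return tempFile.isAbsolute() && tempFile.exists() && !tempFile.isDirectory();
}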

Rgds
On Apr 13, 2016 08:35, "Nirmal Fernando"  wrote:

> Hi,
>
> * Even though we have implemented the support for relative paths at
> repository/conf/analytics/spark/external-spark-classpath.conf file, it
> isn't working properly.
>
>   if (fileExists(line)) {
>       scp = scp + separator + line;
>   } else if (fileExists(carbonHome + File.separator + line)) {
>       // relative-path support: resolve the entry against carbon.home
>       scp = scp + separator + carbonHome + File.separator + line;
>   } else {
>       throw new IOException("File not found : " + line);
>   }
>
>   private static boolean fileExists(String path) {
>       File tempFile = new File(path);
>       return tempFile.exists() && !tempFile.isDirectory();
>   }
> * We check for file.exists in order to determine the existence of the file
> and it'll be true even if it's a relative path in some cases (where current
> execution directory = carbon.home).
> * But Spark needs the path to be absolute.
> * Hence the fix was to check whether the path is absolute too.
>
> https://github.com/wso2/carbon-analytics/pull/170/files
>
> Please review and merge this PR.
>
> --
>
> Thanks & regards,
> Nirmal
>
> Team Lead - WSO2 Machine Learner
> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
> Mobile: +94715779733
> Blog: http://nirmalfdo.blogspot.com/
>
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] DAS minimum HA deployment in one node?

2016-04-11 Thread Niranda Perera
Yes, since both nodes are on the same file system, there is no need for
a symbolic link.

best

On Tue, Apr 12, 2016 at 10:34 AM, Gihan Anuruddha <gi...@wso2.com> wrote:

> Normally what we are doing is stating one node path in both configuration
> files. Basically, this symbolic link is used to load some classes to
> runtime. Since both node are identical this won't be an issue IMO.
>
> Regards,
> Gihan
>
> On Tue, Apr 12, 2016 at 10:30 AM, Nirmal Fernando <nir...@wso2.com> wrote:
>
>> Hi All,
>>
>> Is it possible to setup DAS minimum HA deployment in one node? AFAIS the
>> requirement to create a symbolic link makes it impossible?
>>
>> --
>>
>> Thanks & regards,
>> Nirmal
>>
>> Team Lead - WSO2 Machine Learner
>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>> Mobile: +94715779733
>> Blog: http://nirmalfdo.blogspot.com/
>>
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> W.G. Gihan Anuruddha
> Senior Software Engineer | WSO2, Inc.
> M: +94772272595
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Spark clustering and DAS analytics cluster tuning

2016-04-07 Thread Niranda Perera
Hi Rukshani,

This has still not been done. Is it possible for us to get this done soon?

On Fri, Mar 4, 2016 at 11:36 AM, Rukshani Weerasinha <ruksh...@wso2.com>
wrote:

> Hi Niranda,
>
> Not yet. I will add this content soon and let you know once it is done.
>
> Best Regards,
> Rukshani.
>
> On Fri, Mar 4, 2016 at 11:33 AM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Rukshani and Sam,
>>
>> were we able to publish these content?
>>
>> best
>>
>> On Tue, Jan 26, 2016 at 8:20 AM, Rukshani Weerasinha <ruksh...@wso2.com>
>> wrote:
>>
>>> Hello Niranda,
>>>
>>> Thank you for providing this content. We will start working on this.
>>>
>>> Best Regards,
>>> Rukshani.
>>>
>>> On Tue, Jan 26, 2016 at 3:34 AM, Niranda Perera <nira...@wso2.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I added 2 blog posts explaining how Spark clustering works [1] and how
>>>> to manage the resources in a DAS analytics cluster [2].
>>>>
>>>> @Rukshani, some of the content should go to this doc [3]. Can we work
>>>> on that?
>>>>
>>>> Best
>>>>
>>>> [1]
>>>> https://pythagoreanscript.wordpress.com/2016/01/23/the-dynamics-of-a-spark-cluster-wrt-wso2-das/
>>>> [2]
>>>> https://pythagoreanscript.wordpress.com/2016/01/25/wso2-das-spark-cluster-tuning/
>>>> [3] https://docs.wso2.com/display/DAS301/Performance+Tuning
>>>>
>>>> --
>>>> *Niranda Perera*
>>>> Software Engineer, WSO2 Inc.
>>>> Mobile: +94-71-554-8430
>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>> https://pythagoreanscript.wordpress.com/
>>>>
>>>
>>>
>>>
>>> --
>>> Rukshani Weerasinha
>>>
>>> WSO2 Inc.
>>> Web:http://wso2.com
>>> Mobile: 0777 683 738
>>>
>>>
>>
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>
>
>
> --
> Rukshani Weerasinha
>
> WSO2 Inc.
> Web:http://wso2.com
> Mobile: 0777 683 738
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] US Election Analytics - Data won't persist in DAS

2016-04-02 Thread Niranda Perera
Yes. It's related to the DAS event sink UI.
On Apr 2, 2016 22:04, "Mohanadarshan Vivekanandalingam" <mo...@wso2.com>
wrote:

>
>
> On Sat, Apr 2, 2016 at 9:56 PM, Udara Rathnayake <uda...@wso2.com> wrote:
>
>> Sure Niranda, Let me know whether I have to report this in CEP or DAS?
>>
>> It should be DAS :)
>
>
>> On Sat, Apr 2, 2016 at 12:21 PM, Niranda Perera <nira...@wso2.com> wrote:
>>
>>> Hi Udara,
>>>
>>> Ah. Never came across it but yes, you are correct. This seems to be a UI
>>> bug.
>>>
>>> Can you report a JIRA in this?
>>>
>>> Rgds
>>> On Apr 2, 2016 21:45, "Udara Rathnayake" <uda...@wso2.com> wrote:
>>>
>>>> Hi Niranda,
>>>>
>>>> ​If we edit and save an existing stream definition (without going to
>>>> the persistence UI)​ this can happen right? I have faced similar thing but
>>>> never tried reproducing.
>>>>
>>>> However we should bring tooling support(dev studio) ASAP to DAS
>>>> artifacts, so we can create and deploy via car apps easily.
>>>>
>>>>
>>>> On Fri, Apr 1, 2016 at 11:32 AM, Niranda Perera <nira...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Chehara,
>>>>>
>>>>> It's highly unlikely that a stream persistence configuration to get
>>>>> disabled automatically. Looks like someone have mistakenly disabled it 
>>>>> from
>>>>> the UI. As per the current persistence UI, there is a possibility that 
>>>>> such
>>>>> an incident may occur.
>>>>>
>>>>>
>>>>> On Fri, Apr 1, 2016 at 8:43 PM, Chehara Pathmabandu <cheh...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Solved. event persistent had been disabled automatically.
>>>>>>
>>>>>> On Fri, Apr 1, 2016 at 8:16 PM, Niranda Perera <nira...@wso2.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Yasara,
>>>>>>>
>>>>>>> We need to find where exactly the data stopped persisting. can you
>>>>>>> find the final entry? Are there any error in the logs?
>>>>>>>
>>>>>>> also, we need to check the mysql connections. Check if it has grown
>>>>>>> out of proportion as previously.
>>>>>>> And the CPU and memory usage of the node.
>>>>>>>
>>>>>>> On Fri, Apr 1, 2016 at 7:37 PM, Mohanadarshan Vivekanandalingam <
>>>>>>> mo...@wso2.com> wrote:
>>>>>>>
>>>>>>>> [Adding Dev mail thread]
>>>>>>>>
>>>>>>>> @Niranda, can you please dig on this issue and find the root
>>>>>>>> cause..
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Apr 1, 2016 at 7:26 PM, Yasara Dissanayake <yas...@wso2.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> As you can see from the image attached here[1]:
>>>>>>>>> Current status is growing as the data receive and we use this same
>>>>>>>>> stream to persist. But from a particular date data had not been 
>>>>>>>>> persisted.
>>>>>>>>>
>>>>>>>>> regards,
>>>>>>>>> yasara
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> *V. Mohanadarshan*
>>>>>>>> *Senior Software Engineer,*
>>>>>>>> *Data Technologies Team,*
>>>>>>>> *WSO2, Inc. http://wso2.com <http://wso2.com> *
>>>>>>>> *lean.enterprise.middleware.*
>>>>>>>>
>>>>>>>> email: mo...@wso2.com
>>>>>>>> phone:(+94) 771117673
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Niranda Perera*
>>>>>>> Software Engineer, WSO2 Inc.
>>>>>>> Mobile: +94-71-554-8430
>>>>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>>>>> https://pythagoreanscript.wordpress.com/
>>>>>>>
>>>>>>> ___
>>>>>>> Dev mailing list
>>>>>>> Dev@wso2.org
>>>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Chehara Pathmabandu
>>>>>> *Software Engineer - Intern*
>>>>>> Mobile : +94711976407
>>>>>> cheh...@wso2.com
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Niranda Perera*
>>>>> Software Engineer, WSO2 Inc.
>>>>> Mobile: +94-71-554-8430
>>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>>> https://pythagoreanscript.wordpress.com/
>>>>>
>>>>> ___
>>>>> Dev mailing list
>>>>> Dev@wso2.org
>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Regards,
>>>> UdaraR
>>>>
>>>
>>
>>
>> --
>> Regards,
>> UdaraR
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> *V. Mohanadarshan*
> *Senior Software Engineer,*
> *Data Technologies Team,*
> *WSO2, Inc. http://wso2.com <http://wso2.com> *
> *lean.enterprise.middleware.*
>
> email: mo...@wso2.com
> phone:(+94) 771117673
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] US Election Analytics - Data won't persist in DAS

2016-04-02 Thread Niranda Perera
Hi Udara,

Ah. Never came across it but yes, you are correct. This seems to be a UI
bug.

Can you report a JIRA on this?

Rgds
On Apr 2, 2016 21:45, "Udara Rathnayake" <uda...@wso2.com> wrote:

> Hi Niranda,
>
> ​If we edit and save an existing stream definition (without going to the
> persistence UI)​ this can happen right? I have faced similar thing but
> never tried reproducing.
>
> However we should bring tooling support(dev studio) ASAP to DAS artifacts,
> so we can create and deploy via car apps easily.
>
>
> On Fri, Apr 1, 2016 at 11:32 AM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Chehara,
>>
>> It's highly unlikely that a stream persistence configuration to get
>> disabled automatically. Looks like someone have mistakenly disabled it from
>> the UI. As per the current persistence UI, there is a possibility that such
>> an incident may occur.
>>
>>
>> On Fri, Apr 1, 2016 at 8:43 PM, Chehara Pathmabandu <cheh...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Solved. event persistent had been disabled automatically.
>>>
>>> On Fri, Apr 1, 2016 at 8:16 PM, Niranda Perera <nira...@wso2.com> wrote:
>>>
>>>> Hi Yasara,
>>>>
>>>> We need to find where exactly the data stopped persisting. can you find
>>>> the final entry? Are there any error in the logs?
>>>>
>>>> also, we need to check the mysql connections. Check if it has grown out
>>>> of proportion as previously.
>>>> And the CPU and memory usage of the node.
>>>>
>>>> On Fri, Apr 1, 2016 at 7:37 PM, Mohanadarshan Vivekanandalingam <
>>>> mo...@wso2.com> wrote:
>>>>
>>>>> [Adding Dev mail thread]
>>>>>
>>>>> @Niranda, can you please dig on this issue and find the root cause..
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Apr 1, 2016 at 7:26 PM, Yasara Dissanayake <yas...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> As you can see from the image attached here[1]:
>>>>>> Current status is growing as the data receive and we use this same
>>>>>> stream to persist. But from a particular date data had not been 
>>>>>> persisted.
>>>>>>
>>>>>> regards,
>>>>>> yasara
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *V. Mohanadarshan*
>>>>> *Senior Software Engineer,*
>>>>> *Data Technologies Team,*
>>>>> *WSO2, Inc. http://wso2.com <http://wso2.com> *
>>>>> *lean.enterprise.middleware.*
>>>>>
>>>>> email: mo...@wso2.com
>>>>> phone:(+94) 771117673
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Niranda Perera*
>>>> Software Engineer, WSO2 Inc.
>>>> Mobile: +94-71-554-8430
>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>> https://pythagoreanscript.wordpress.com/
>>>>
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Chehara Pathmabandu
>>> *Software Engineer - Intern*
>>> Mobile : +94711976407
>>> cheh...@wso2.com
>>>
>>
>>
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Regards,
> UdaraR
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] US Election Analytics - Data won't persist in DAS

2016-04-01 Thread Niranda Perera
Hi Chehara,

It's highly unlikely for a stream persistence configuration to get
disabled automatically. It looks like someone has mistakenly disabled it
from the UI. As per the current persistence UI, there is a possibility
that such an incident may occur.


On Fri, Apr 1, 2016 at 8:43 PM, Chehara Pathmabandu <cheh...@wso2.com>
wrote:

> Hi,
>
> Solved. Event persistence had been disabled automatically.
>
> On Fri, Apr 1, 2016 at 8:16 PM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Yasara,
>>
>> We need to find where exactly the data stopped persisting. can you find
>> the final entry? Are there any error in the logs?
>>
>> also, we need to check the mysql connections. Check if it has grown out
>> of proportion as previously.
>> And the CPU and memory usage of the node.
>>
>> On Fri, Apr 1, 2016 at 7:37 PM, Mohanadarshan Vivekanandalingam <
>> mo...@wso2.com> wrote:
>>
>>> [Adding Dev mail thread]
>>>
>>> @Niranda, can you please dig on this issue and find the root cause..
>>>
>>>
>>>
>>> On Fri, Apr 1, 2016 at 7:26 PM, Yasara Dissanayake <yas...@wso2.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> As you can see from the image attached here[1]:
>>>> Current status is growing as the data receive and we use this same
>>>> stream to persist. But from a particular date data had not been persisted.
>>>>
>>>> regards,
>>>> yasara
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> *V. Mohanadarshan*
>>> *Senior Software Engineer,*
>>> *Data Technologies Team,*
>>> *WSO2, Inc. http://wso2.com <http://wso2.com> *
>>> *lean.enterprise.middleware.*
>>>
>>> email: mo...@wso2.com
>>> phone:(+94) 771117673
>>>
>>
>>
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Chehara Pathmabandu
> *Software Engineer - Intern*
> Mobile : +94711976407
> cheh...@wso2.com
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] US Election Analytics - Data won't persist in DAS

2016-04-01 Thread Niranda Perera
Hi Yasara,

We need to find where exactly the data stopped persisting. Can you find the
final entry? Are there any errors in the logs?

Also, we need to check the MySQL connections. Check if the connection count
has grown out of proportion, as it did previously.
And the CPU and memory usage of the node.

On Fri, Apr 1, 2016 at 7:37 PM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

> [Adding Dev mail thread]
>
> @Niranda, can you please dig on this issue and find the root cause..
>
>
>
> On Fri, Apr 1, 2016 at 7:26 PM, Yasara Dissanayake <yas...@wso2.com>
> wrote:
>
>> Hi,
>>
>> As you can see from the image attached here[1]:
>> The current status is growing as data is received, and we use this same
>> stream to persist. But from a particular date the data has not been
>> persisted.
>>
>> regards,
>> yasara
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
> --
> *V. Mohanadarshan*
> *Senior Software Engineer,*
> *Data Technologies Team,*
> *WSO2, Inc. http://wso2.com <http://wso2.com> *
> *lean.enterprise.middleware.*
>
> email: mo...@wso2.com
> phone:(+94) 771117673
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] DAS - How to change the schema of a table

2016-03-24 Thread Niranda Perera
Hi Damith,

You can also set the schema again from the Spark script. You can write a
query like this:

create temporary table <temp_table> using CarbonAnalytics options (tableName
"<table_name>", schema "<schema>", mergeSchema "false");

On Thu, Mar 24, 2016 at 11:31 AM, Gimantha Bandara <giman...@wso2.com>
wrote:

> Hi Damith,
>
> You can update the existing schema using the REST APIs directly, as
> mentioned here [1]. The table name cannot be removed from DAS.
>
> [1] https://docs.wso2.com/pages/viewpage.action?pageId=50505976
>
> On Thu, Mar 24, 2016 at 9:37 AM, Damith Wickramasinghe <dami...@wso2.com>
> wrote:
>
>> Hi all,
>>
>> I have already created a table with some columns using spark queries and
>> inserted some data. But I need to add another column to it. Is there a way
>> to directly achieve this ? Or do I have to drop the table and reinitialize
>> it ? Also I cannot seems to find a way to remove the table also ? I have
>> purged all data in said table.
>>
>> Regards,
>> Damith.
>>
>>
>> --
>> Software Engineer
>> WSO2 Inc.; http://wso2.com
>> <http://www.google.com/url?q=http%3A%2F%2Fwso2.com=D=1=AFQjCNEZvyc0uMD1HhBaEGCBxs6e9fBObg>
>> lean.enterprise.middleware
>>
>> mobile: *+94728671315 <%2B94728671315>*
>>
>>
>
>
> --
> Gimantha Bandara
> Software Engineer
> WSO2. Inc : http://wso2.com
> Mobile : +94714961919
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] CNF error in windows

2016-03-16 Thread Niranda Perera
It was in the docs [1]

[1] https://docs.wso2.com/display/DAS301/Installing+on+Windows

Hi Niranda,

Thanks for the help. It is now working fine.

Thanks & Regards,
/charithag

On Wed, Mar 16, 2016 at 12:22 PM, Niranda Perera <nira...@wso2.com> wrote:

> Hi Charitha,
>
> Put the jar [1] in repository/components/lib directory.
>
> The jar in the plugins directory has some OSGI header issue. Putting it in
> lib should resolve it.
>
> best
>
> [1]
> http://mvnrepository.com/artifact/org.xerial.snappy/snappy-java/1.1.1.7
>
>
> On Wed, Mar 16, 2016 at 10:59 AM, Charitha Goonetilleke <
> charit...@wso2.com> wrote:
>
>> Hi Sachith,
>>
>> Thanks for the quick response. org.xerial.snappy.snappy-java_1.1.1.7.jar
>> is already there and there is no any other versions exist.
>>
>>
>> On Wed, Mar 16, 2016 at 10:54 AM, Sachith Withana <sach...@wso2.com>
>> wrote:
>>
>>> Hi Charitha,
>>>
>>> This occurs due to a version mismatch in the Snappy jar.
>>>
>>> Can you manually copy the jar to the location below and check?
>>> Make sure it's snappy 1.1.1.7 jar.
>>>
>>> repository/components/plugins/org.xerial.snappy.snappy-java_1.1.1.7.jar
>>>
>>> Thanks,
>>> Sachith
>>>
>>> On Wed, Mar 16, 2016 at 10:48 AM, Charitha Goonetilleke <
>>> charit...@wso2.com> wrote:
>>>
>>>> Hi All,
>>>>
>>>> When we are trying to run IoT Server on windows, we are getting this
>>>> exception on windows related to SnappyInputStream class which used in DAS.
>>>> However, this error is not exist on top of Linux and Mac.
>>>>
>>>> Is this a know issue on windows? Please kindly let us know if there any
>>>> known work around to fix this.
>>>>
>>>> Exception in thread "dag-scheduler-event-loop"
>>>> java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
>>>> at java.lang.Class.forName0(Native Method)
>>>> at java.lang.Class.forName(Class.java:348)
>>>> at
>>>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
>>>> at
>>>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
>>>> at org.apache.spark.broadcast.TorrentBroadcast.org
>>>> $apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>>>> at
>>>> org.apache.spark.broadcast.TorrentBroadcast.(TorrentBroadcast.scala:80)
>>>> at
>>>> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>>>> at
>>>> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>>>> at
>>>> org.apache.spark.SparkContext.broadcast(SparkContext.scala:1292)
>>>> at org.apache.spark.scheduler.DAGScheduler.org
>>>> $apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
>>>> at org.apache.spark.scheduler.DAGScheduler.org
>>>> $apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
>>>> at
>>>> org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
>>>> at
>>>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1429)
>>>> at
>>>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1421)
>>>> at
>>>> org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>>>> Caused by: java.lang.ClassNotFoundException:
>>>> org.xerial.snappy.SnappyInputStream cannot be found by
>>>> spark-core_2.10_1.4.2.wso2v1
>>>> at
>>>> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
>>>> at
>>>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>>>> at
>>>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>>>> at
>>>> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>>> ... 15 more
>>>>
>>>> --
>>>> *Charitha Goonetilleke*
>>>> Software Engineer
>>>>

Re: [Dev] DAS tries to connect H2 TCP Server on strange IP

2016-03-16 Thread Niranda Perera
+ anjana

Hi Charitha,

Looks like this is a concurrency issue in H2.

The IP of the H2 DB will be governed by the datasource, which is configured
in the analytics-datasources.xml.
@Anjana, I remember there was a configuration in H2 to handle concurrent DB
calls, isn't that so?
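
If I remember right, it may be H2's mixed mode, which is enabled by
appending AUTO_SERVER=TRUE to the datasource URL in that file; the URL
below is only illustrative:

jdbc:h2:repository/database/ANALYTICS_DB;AUTO_SERVER=TRUE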

best


On Wed, Mar 16, 2016 at 12:18 PM, Charitha Goonetilleke <charit...@wso2.com>
wrote:

> Hi All,
>
> In IoT Server we are having the following intermittent error. It seems the
> IoT server tries to connect to 198.105.254.11 during the summarization
> task. I'm wondering how this IP got in there and why.
>
> [2016-03-16 12:10:17,044] ERROR
> {org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation} -  Error
> while inserting data into table DEVICE_ACCELEROMETER_SUMMARY : Error in
> deleting table: Exception opening port "H2 TCP Server (tcp://
> 198.105.254.11:40762)" (port may be in use), cause: "timeout" [90061-140]
> org.wso2.carbon.analytics.datasource.commons.exception.AnalyticsException:
> Error in deleting table: Exception opening port "H2 TCP Server (tcp://
> 198.105.254.11:40762)" (port may be in use), cause: "timeout" [90061-140]
> at
> org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsRecordStore.deleteTable(RDBMSAnalyticsRecordStore.java:582)
> at
> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.deleteTable(AnalyticsDataServiceImpl.java:542)
> at
> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.insert(AnalyticsRelation.java:157)
> at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
> at
> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
> at
> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
> at
> org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
> at
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
> at
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
> at
> org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
> at
> org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
> at org.apache.spark.sql.DataFrame.(DataFrame.scala:144)
> at org.apache.spark.sql.DataFrame.(DataFrame.scala:128)
> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:766)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:742)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
> at
> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
> at
> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> --
> *Charitha Goonetilleke*
> Software Engineer
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 77 751 3669 <%2B94777513669>
> Twitter:@CharithaWs <https://twitter.com/CharithaWs>, fb: charithag
> <https://www.facebook.com/charithag>, linkedin: charithag
> <http://www.linkedin.com/in/charithag>
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] CNF error in windows

2016-03-16 Thread Niranda Perera
Hi Charitha,

Put the jar [1] in repository/components/lib directory.

The jar in the plugins directory has some OSGI header issue. Putting it in
lib should resolve it.

best

[1] http://mvnrepository.com/artifact/org.xerial.snappy/snappy-java/1.1.1.7


On Wed, Mar 16, 2016 at 10:59 AM, Charitha Goonetilleke <charit...@wso2.com>
wrote:

> Hi Sachith,
>
> Thanks for the quick response. org.xerial.snappy.snappy-java_1.1.1.7.jar
> is already there and no other versions exist.
>
>
> On Wed, Mar 16, 2016 at 10:54 AM, Sachith Withana <sach...@wso2.com>
> wrote:
>
>> Hi Charitha,
>>
>> This occurs due to a version mismatch in the Snappy jar.
>>
>> Can you manually copy the jar to the location below and check?
>> Make sure it's snappy 1.1.1.7 jar.
>>
>> repository/components/plugins/org.xerial.snappy.snappy-java_1.1.1.7.jar
>>
>> Thanks,
>> Sachith
>>
>> On Wed, Mar 16, 2016 at 10:48 AM, Charitha Goonetilleke <
>> charit...@wso2.com> wrote:
>>
>>> Hi All,
>>>
>>> When we are trying to run IoT Server on windows, we are getting this
>>> exception on windows related to SnappyInputStream class which used in DAS.
>>> However, this error is not exist on top of Linux and Mac.
>>>
>>> Is this a know issue on windows? Please kindly let us know if there any
>>> known work around to fix this.
>>>
>>> Exception in thread "dag-scheduler-event-loop"
>>> java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
>>> at java.lang.Class.forName0(Native Method)
>>> at java.lang.Class.forName(Class.java:348)
>>> at
>>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
>>> at
>>> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
>>> at org.apache.spark.broadcast.TorrentBroadcast.org
>>> $apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>>> at
>>> org.apache.spark.broadcast.TorrentBroadcast.(TorrentBroadcast.scala:80)
>>> at
>>> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>>> at
>>> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
>>> at
>>> org.apache.spark.SparkContext.broadcast(SparkContext.scala:1292)
>>> at org.apache.spark.scheduler.DAGScheduler.org
>>> $apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
>>> at org.apache.spark.scheduler.DAGScheduler.org
>>> $apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
>>> at
>>> org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
>>> at
>>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1429)
>>> at
>>> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1421)
>>> at
>>> org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>>> Caused by: java.lang.ClassNotFoundException:
>>> org.xerial.snappy.SnappyInputStream cannot be found by
>>> spark-core_2.10_1.4.2.wso2v1
>>> at
>>> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
>>> at
>>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>>> at
>>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>>> at
>>> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>> ... 15 more
>>>
>>> --
>>> *Charitha Goonetilleke*
>>> Software Engineer
>>> WSO2 Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> mobile: +94 77 751 3669 <%2B94777513669>
>>> Twitter:@CharithaWs <https://twitter.com/CharithaWs>, fb: charithag
>>> <https://www.facebook.com/charithag>, linkedin: charithag
>>> <http://www.linkedin.com/in/charithag>
>>>
>>
>>
>>
>> --
>> Sachith Withana
>> Software Engineer; WSO2 Inc.; http://wso2.com
>> E-mail: sachith AT wso2.com
>> M: +94715518127
>> Linked-In: <http://goog_416592669>
>> https://lk.linkedin.com/in/sachithwithana
>>
>
>
>
> --
> *Charitha Goonetilleke*
> Software Engineer
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 77 751 3669 <%2B94777513669>
> Twitter:@CharithaWs <https://twitter.com/CharithaWs>, fb: charithag
> <https://www.facebook.com/charithag>, linkedin: charithag
> <http://www.linkedin.com/in/charithag>
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] WSO2 DAS: SPARK SQL query with UNION producing errors

2016-03-07 Thread Niranda Perera
Hi Charini,

There is a problem with the query here. Use the following query:

INSERT OVERWRITE TABLE All_three

select * from (
SELECT SYMBOL, VOLUME FROM First
UNION
SELECT SYMBOL, VOLUME FROM Middle
UNION
SELECT SYMBOL, VOLUME FROM Third

) temp;

Essentially, what we do here is wrap the union result into one temporary
data element named 'temp' and select everything from there.

The SparkSQL parser only takes a single select element in insert queries,
and at the end of a select query it expects a limit (if available).
Therefore, you need to wrap the subsequent select statements into one
select element.

Hope this resolves your issue

Best

On Tue, Mar 8, 2016 at 8:47 AM, Charini Nanayakkara <chari...@wso2.com>
wrote:

> Hi,
> The following query was attempted to be executed when performing batch
> analytics with WSO2 DAS using Spark SQL. Tables 'First', 'Middle' and
> 'Third' are required to be combined and written to table 'All_three'.
>
> INSERT OVERWRITE TABLE All_three SELECT SYMBOL, VOLUME FROM First UNION 
> SELECT SYMBOL, VOLUME FROM Middle UNION SELECT SYMBOL, VOLUME FROM Third;
>
>
> Following error is displayed on WSO2 DAS when this query is executed:
>
> ERROR: [1.79] failure: ``limit'' expected but `union' found INSERT OVERWRITE 
> TABLE X1234_All_three SELECT SYMBOL, VOLUME FROM X1234_First UNION SELECT 
> SYMBOL, VOLUME FROM X1234_Middle UNION SELECT SYMBOL, VOLUME FROM X1234_Third 
> ^
>
>
> Using LIMIT with UNION is not a necessity to the best of my knowledge.
> Enclosing the SELECT queries in parentheses too was attempted which didn't
> work. What am I doing wrong here? Thank you in advance!
>
>
>
>
>
> --
> Charini Vimansha Nanayakkara
> Software Engineer at WSO2
> Mobile: 0714126293
>
>
> _______
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error in clearing index data: Unable to delete directory

2016-03-07 Thread Niranda Perera
Hi Udara,

What are the datasources you are using here? RDBMS?

best

On Tue, Mar 8, 2016 at 1:56 AM, Udara Rathnayake <uda...@wso2.com> wrote:

> noticed same after ~20 mins..
>
> On Mon, Mar 7, 2016 at 3:10 PM, Udara Rathnayake <uda...@wso2.com> wrote:
>
>> Hi,
>>
>> Noticed following error[1] while running DAS 3.0.1 on a windows
>> environment.
>> After a server restart, I don't see this now. What can be the cause?
>>
>> [1].
>>
>> [2016-03-07 14:48:01,456] ERROR
>> {org.wso2.carbon.analytics.spark.core.sources.A
>> alyticsRelation} -  Error while inserting data into table daily_sessions
>> : Error
>>  in clearing index data: Unable to delete directory
>> C:\Users\URATHN~1\WORKSP~1\
>>
>> AS\dev\WSO2DA~1.1\WSO2DA~1.1\bin\..\repository\data\index_data\5\_data\taxonomy
>> -1234_daily_sessions.
>>
>> org.wso2.carbon.analytics.dataservice.commons.exception.AnalyticsIndexException
>>  Error in clearing index data: Unable to delete directory
>> C:\Users\URATHN~1\WOR
>>
>> SP~1\DAS\dev\WSO2DA~1.1\WSO2DA~1.1\bin\..\repository\data\index_data\5\_data\ta
>> onomy\-1234_daily_sessions.
>> at
>> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataInd
>> xer.clearIndexDataLocal(AnalyticsDataIndexer.java:1461)
>> at
>> org.wso2.carbon.analytics.dataservice.core.indexing.IndexNodeCoordin
>> tor.clearIndexData(IndexNodeCoordinator.java:681)
>> at
>> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsDataInd
>> xer.clearIndexData(AnalyticsDataIndexer.java:1440)
>> at
>> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.
>> learIndices(AnalyticsDataServiceImpl.java:889)
>> at
>> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.
>> eleteTable(AnalyticsDataServiceImpl.java:541)
>> at
>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.inser
>> (AnalyticsRelation.java:157)
>> at
>> org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala
>> 53)
>> at
>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzyc
>> mpute(commands.scala:57)
>> at
>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(comm
>> nds.scala:57)
>> at org.apache.spark.sql.execution.ExecutedCommand.doExecute(
>> commands.sc
>> la:68)
>> at
>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(Sp
>> rkPlan.scala:88)
>> at
>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(Sp
>> rkPlan.scala:88)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.
>> cala:147)
>> at
>> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLC
>> ntext.scala:950)
>> at
>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scal
>> :950)
>> at org.apache.spark.sql.DataFrame.(DataFrame.scala:144)
>> at org.apache.spark.sql.DataFrame.(DataFrame.scala:128)
>> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
>> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor
>> executeQueryLocal(SparkAnalyticsExecutor.java:731)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor
>> executeQuery(SparkAnalyticsExecutor.java:709)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService
>> executeQuery(CarbonAnalyticsProcessorService.java:201)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService
>> executeScript(CarbonAnalyticsProcessorService.java:151)
>> at
>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(Analytics
>> ask.java:59)
>> at
>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQua
>> tzJobAdapter.java:67)
>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:4
>> 1)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor
>> java:1145)
>> at
>> java.util.concurrent.ThreadPoolExecutor

Re: [Dev] [DAS] Spark clustering and DAS analytics cluster tuning

2016-03-03 Thread Niranda Perera
Hi Rukshani and Sam,

were we able to publish this content?

best

On Tue, Jan 26, 2016 at 8:20 AM, Rukshani Weerasinha <ruksh...@wso2.com>
wrote:

> Hello Niranda,
>
> Thank you for providing this content. We will start working on this.
>
> Best Regards,
> Rukshani.
>
> On Tue, Jan 26, 2016 at 3:34 AM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi,
>>
>> I added 2 blog posts explaining how Spark clustering works [1] and how to
>> manage the resources in a DAS analytics cluster [2].
>>
>> @Rukshani, some of the content should go to this doc [3]. Can we work on
>> that?
>>
>> Best
>>
>> [1]
>> https://pythagoreanscript.wordpress.com/2016/01/23/the-dynamics-of-a-spark-cluster-wrt-wso2-das/
>> [2]
>> https://pythagoreanscript.wordpress.com/2016/01/25/wso2-das-spark-cluster-tuning/
>> [3] https://docs.wso2.com/display/DAS301/Performance+Tuning
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>
>
>
> --
> Rukshani Weerasinha
>
> WSO2 Inc.
> Web:http://wso2.com
> Mobile: 0777 683 738
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Unable to add a spark UDF without any parameter

2016-02-22 Thread Niranda Perera
Hi Udara,

As Sachith explained, this is a flaw in Spark and we will send a PR to
them. But unfortunately, this will only be resolved in the next DAS
release, together with a Spark upgrade.
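
In the meantime, the workaround is the one mentioned below: declare the
UDF with a dummy parameter and pass null at the call site. For example
(the UDF name comes from this thread; the table name is hypothetical):

SELECT convertDate(null) AS eventDate FROM Logs;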

best

On Tue, Feb 23, 2016 at 9:34 AM, Sachith Withana <sach...@wso2.com> wrote:

> I'm afraid it's not.
>
> This is due to the fact that it's not supported in the Spark version that
> we have now.
>
> Thanks,
> Sachith
>
> On Tue, Feb 23, 2016 at 1:14 AM, Udara Rathnayake <uda...@wso2.com> wrote:
>
>> Hi Niranda,
>>
>> Noticed same issue in DAS 3.0.1. Is this fixed now?
>>
>> Thanks!
>>
>> On Tue, Jan 5, 2016 at 11:12 AM, Niranda Perera <nira...@wso2.com> wrote:
>>
>>> Hi Udara,
>>>
>>> Yes, this is a known issue / limitation in the current implementation.
>>> You would have to pass a dummy param because udf0 implementation was not
>>> available by the time we released.
>>>
>>> Best
>>> On Jan 5, 2016 21:37, "Udara Rathnayake" <uda...@wso2.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> When I try to call a UDF without any parameter, getting following
>>>> error[1]. Let's assume my UDF is convertDate(). But if I try the same with
>>>> a parameter like convertDate(null) it works.
>>>>
>>>> Any Idea? Noticed that we have TimeNowUDF[2] sample, do we need to use
>>>> "now(null)" within a spark query?
>>>>
>>>>
>>>> [1]
>>>>
>>>> TID: [-1] [] [2016-01-05 10:45:51,744] ERROR
>>>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>>>> executing task:
>>>> org.apache.spark.sql.UDFRegistration$$anonfun$register$24$$anonfun$apply$1
>>>> cannot be cast to scala.Function0
>>>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter}
>>>> java.lang.ClassCastException:
>>>> org.apache.spark.sql.UDFRegistration$$anonfun$register$24$$anonfun$apply$1
>>>> cannot be cast to scala.Function0
>>>> at
>>>> org.apache.spark.sql.catalyst.expressions.ScalaUdf.(ScalaUdf.scala:61)
>>>> at
>>>> org.apache.spark.sql.UDFRegistration$$anonfun$register$24.apply(UDFRegistration.scala:408)
>>>> at
>>>> org.apache.spark.sql.UDFRegistration$$anonfun$register$24.apply(UDFRegistration.scala:408)
>>>> at
>>>> org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.lookupFunction(FunctionRegistry.scala:57)
>>>> at
>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:465)
>>>> at
>>>> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:463)
>>>> at
>>>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>>>> at
>>>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
>>>> at
>>>> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
>>>> at
>>>> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:221)
>>>> at
>>>> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
>>>> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>>>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>>>> at
>>>> scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>>>> at
>>>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>>>> at
>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>>>> at
>>>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
>>>> at scala.collection.TraversableOnce$class.to
>>>> (TraversableOnce.scala:273)
>>>> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
>>>> at
>>>> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
>>>> at
>>>> scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
>>>> at
>>>> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
>>>> at
>&

[Dev] [DAS] Spark clustering and DAS analytics cluster tuning

2016-01-25 Thread Niranda Perera
Hi,

I added 2 blog posts explaining how Spark clustering works [1] and how to
manage the resources in a DAS analytics cluster [2].

@Rukshani, some of the content should go to this doc [3]. Can we work on
that?

Best

[1]
https://pythagoreanscript.wordpress.com/2016/01/23/the-dynamics-of-a-spark-cluster-wrt-wso2-das/
[2]
https://pythagoreanscript.wordpress.com/2016/01/25/wso2-das-spark-cluster-tuning/
[3] https://docs.wso2.com/display/DAS301/Performance+Tuning

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Niranda Perera
Hi Roshan,

I agree with Inosh. It takes a few seconds for Spark to recover; until then
you might see this exception. Do you see this exception throughout?
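For reference, the master count Inosh mentions below is set in the Spark
configuration file; a minimal sketch, assuming the default DAS 3.0.x layout:

    # <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf
    # number of masters the analytics cluster waits for before the Spark
    # cluster is considered usable; 2 enables master failover
    carbon.spark.master.count  2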

best

On Mon, Jan 25, 2016 at 8:13 AM, Inosh Goonewardena <in...@wso2.com> wrote:

> Hi Roshan,
>
> How many analyzer nodes are there in the cluster? If the master count is
> set to 1 and the master is down, the Spark cluster will not survive. If you
> set the master count to 2, then when one master goes down the other node
> becomes the master and the cluster survives. However, until the Spark
> context is initialized properly in the other node (which takes roughly
> 5 - 30 secs) you will see the above error.
>
> On Sun, Jan 24, 2016 at 8:25 PM, Roshan Wijesena <ros...@wso2.com> wrote:
>
>> Hi  Niranda/DAS team,
>>
>> I have updated the DAS server to 3.0.1. I am testing a minimum HA cluster
>> with one server down. I am getting this exception periodically, and the
>> Spark scripts are not running at all. It seems we can *not* survive when
>> one server is down?
>>
>> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
>> executing the scheduled task for the script: is_log_analytics
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
>> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
>> Spark SQL Context is not available. Check if the cluster has instantiated
>> properly.
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
>> at
>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
>> at
>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> -Roshan
>>
>>
>>
>>
>>
>>
>>
>> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena <ros...@wso2.com> wrote:
>>
>>> Hi Niranda / Inosh,
>>>
>>> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
>>> now. Did not appear for a while.
>>>
>>> -Roshan
>>>
>>> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera <nira...@wso2.com>
>>> wrote:
>>>
>>>> Hi Roshan,
>>>>
>>>>> This happens when you have a malformed HA cluster. When you set the
>>>>> master count to 2, the Spark cluster would not get initiated until there
>>>>> are 2 members in the analytics cluster. When the count is 2 and there is
>>>>> a task scheduled already, you may come across this issue until the 2nd
>>>>> node is up and running. You should see that after some time the exception
>>>>> gets resolved, and that is when the analytics cluster is in a workable state.
>>>>
>>>> But I agree, an NPE is not acceptable here, and this has already been
>>>> fixed in 3.0.1 [1].
>>>>
>>>> As for the query modification: yes, the query gets modified to handle
>>>> multi-tenancy in the Spark runtime.
>>>>
>>>> hope this resolves your issues.
>>>>
>>>> rgds
>>>>
>>>> [1] https://wso2.org/jira/browse/DAS-329
>>>>
>>>> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena <ros...@wso2.com>
>>>> wrote:
>>>>
>>>>> I reproduced the error. If we set the carbon.spark.master.count value to
>>>>> 2, this error will occur. Is any solution available in this case?
>>>>>
>>>>>
>>>>> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena <ros...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> After I enabled the debug, it

Re: [Dev] Getting ML model (1.1.0) working with DAS (3.0.0)

2016-01-23 Thread Niranda Perera
Shall we include this under another subtopic in the following page [1]?
Then it will be a one-stop page for all the -D options for DAS.

[1] https://docs.wso2.com/pages/viewpage.action?pageId=45952727
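A minimal sketch of the two startup options discussed below (the flags come
from this thread; the invocation form is assumed):

    # disables only the ML Spark context; DAS batch analytics keep working
    sh wso2server.sh -DdisableMLSparkCtx=true

    # disables the DAS analytics Spark context instead (prevents batch analytics)
    sh wso2server.sh -DdisableAnalyticsSparkCtx=true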

On Fri, Jan 22, 2016 at 3:05 PM, Nirmal Fernando <nir...@wso2.com> wrote:

> Hi Iranga,
>
>
> On Fri, Jan 22, 2016 at 3:00 PM, Iranga Muthuthanthri <ira...@wso2.com>
> wrote:
>
>> Hi,
>>
>> In order to build an integrated analytics scenario (batch, realtime and
>> predictive) with DAS 3.0.0, I tried the following steps on a local server,
>> by installing and using the CEP-ML extension in DAS. The steps followed
>> are below.
>>
>> 1) Installed ML 1.1.0 features in DAS 3.0.0.
>> 2) Created a model through ML 1.1.0.
>> 3) Published the model to the DAS registry.
>> 4) Used the ML extension through an execution plan.
>>
>> The following exception is observed at server startup, possibly due to
>> the issue of "supporting multiple Spark contexts in a JVM" [2]. The server
>> can be started with the command wso2server.sh -DdisableAnalyticsSparkCtx=true.
>>
>
> Instead of doing this, use -DdisableMLSparkCtx=true; this would disable the
> ML Spark context creation, but predictions should still work.
>
>
>
>>
>> However, this prevents using the batch analytics features in DAS.
>>
>> Is there a solution to get all three integrated analytics working with DAS
>> 3.0.0? Is there a way to use the ML model that overcomes issue [2]?
>>
>>
>>
>>
>>
>> [1] [2016-01-22 12:09:58,193] ERROR
>> {org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent} -  Error
>> initializing analytics executor: Only one SparkContext may be running in
>> this JVM (see SPARK-2243). To ignore this error, set
>> spark.driver.allowMultipleContexts = true. The currently running
>> SparkContext was created 
>>
>>
>> [2]https://issues.apache.org/jira/browse/SPARK-2243
>>
>> --
>> Thanks & Regards
>>
>> Iranga Muthuthanthri
>> (M) -0777-255773
>> Team Product Management
>>
>>
>
>
> --
>
> Thanks & regards,
> Nirmal
>
> Team Lead - WSO2 Machine Learner
> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
> Mobile: +94715779733
> Blog: http://nirmalfdo.blogspot.com/
>
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][VOTE] Release WSO2 Data Analytics Server 3.0.1 RC2

2016-01-17 Thread Niranda Perera
Hi,

I have tested DAS 3.0.1 RC2 with Spark clustering and failover scenarios.

It works well.

[x] Stable - Go ahead and release.

cheers

On Mon, Jan 18, 2016 at 11:30 AM, Maheshakya Wijewardena <
mahesha...@wso2.com> wrote:

> Hi
>
> I've tested ML 1.1.0 with DAS 3.0.1 RC2 spark cluster.
> Works fine.
>
> [x] Stable - Go ahead and release
>
> Best regards.
>
>
> On Mon, Jan 18, 2016 at 11:10 AM, Seshika Fernando <sesh...@wso2.com>
> wrote:
>
>> Hi,
>>
>> I've tested DAS 3.0.1 RC2 with the following
>>
>>- complex execution plans that test - event tables, indexing,
>>patterns, windows, joins etc;
>>- data persistence
>>- event tracing
>>- data explorer
>>- event publisher
>>- stream simulation.
>>
>>
>> All works well.
>>
>> My vote - [x] Stable - Go ahead and release.
>>
>> seshi
>>
>> On Mon, Jan 18, 2016 at 10:59 AM, Anjana Fernando <anj...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> * Tested the basic indexing functionality in a standalone server
>>>
>>> * Tested Spark analytics with a 2 node cluster with a subset of the
>>> Wikipedia data set (4 Million records).
>>>
>>> [X] Stable - Go ahead and release
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Sun, Jan 17, 2016 at 3:33 PM, Sachith Withana <sach...@wso2.com>
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> This is the second release candidate of WSO2 DAS 3.0.1. Please
>>>> download, test and vote.
>>>>
>>>> The vote will be open for 72 hours or as needed.
>>>>
>>>> This release fixes the following issues:
>>>> https://wso2.org/jira/issues/?filter=12622
>>>>
>>>> Binary distribution file:
>>>> https://svn.wso2.org/repos/wso2/people/sachith/rc2/
>>>>
>>>>
>>>> [ ] Broken - Do not release (explain why)
>>>> [ ] Stable - Go ahead and release
>>>>
>>>>
>>>> Thanks,
>>>> WSO2 DAS Team.
>>>>
>>>> --
>>>> Sachith Withana
>>>> Software Engineer; WSO2 Inc.; http://wso2.com
>>>> E-mail: sachith AT wso2.com
>>>> M: +94715518127
>>>> Linked-In: https://lk.linkedin.com/in/sachithwithana
>>>>
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>> *Anjana Fernando*
>>> Senior Technical Lead
>>> WSO2 Inc. | http://wso2.com
>>> lean . enterprise . middleware
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Pruthuvi Maheshakya Wijewardena
> mahesha...@wso2.com
> +94711228855
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Unable to add a spark UDF without any parameter

2016-01-05 Thread Niranda Perera
Hi Udara,

Yes, this is a known issue / limitation in the current implementation. You
would have to pass a dummy param, because the udf0 implementation was not
available at the time we released.
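For example, with a hypothetical no-arg UDF (convertDate and myTable are
illustrative names from Udara's mail below, not a shipped sample):

    SparkSQL > SELECT convertDate(null) FROM myTable;   (works: dummy param)
    SparkSQL > SELECT convertDate() FROM myTable;       (fails with the ClassCastException in [1])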

Best
On Jan 5, 2016 21:37, "Udara Rathnayake"  wrote:

> Hi,
>
> When I try to call a UDF without any parameters, I get the following
> error [1]. Let's assume my UDF is convertDate(). But if I try the same with
> a parameter, like convertDate(null), it works.
>
> Any idea? I noticed that we have the TimeNowUDF [2] sample; do we need to
> use "now(null)" within a Spark query?
>
>
> [1]
>
> TID: [-1] [] [2016-01-05 10:45:51,744] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task:
> org.apache.spark.sql.UDFRegistration$$anonfun$register$24$$anonfun$apply$1
> cannot be cast to scala.Function0
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter}
> java.lang.ClassCastException:
> org.apache.spark.sql.UDFRegistration$$anonfun$register$24$$anonfun$apply$1
> cannot be cast to scala.Function0
> at
> org.apache.spark.sql.catalyst.expressions.ScalaUdf.<init>(ScalaUdf.scala:61)
> at
> org.apache.spark.sql.UDFRegistration$$anonfun$register$24.apply(UDFRegistration.scala:408)
> at
> org.apache.spark.sql.UDFRegistration$$anonfun$register$24.apply(UDFRegistration.scala:408)
> at
> org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.lookupFunction(FunctionRegistry.scala:57)
> at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:465)
> at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:463)
> at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
> at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:222)
> at
> org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:51)
> at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:221)
> at
> org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:242)
> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> at
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
> at
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
> at
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
> at
> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
> at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformChildrenDown(TreeNode.scala:272)
> at
> org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:227)
> at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$transformExpressionDown$1(QueryPlan.scala:75)
> at
> org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:85)
> at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at
> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> at
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
> at
> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
> at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
> at scala.collection.AbstractIterator.to(Iterator.scala:1157)
> at
> scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
> at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
> at
> scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
> at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
> at
> org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:94)
> at
> org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:64)
> at
> org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$13.applyOrElse(Analyzer.scala:463)
> at
> 

Re: [Dev] Can we suppress DAS message for schedule task execution?

2015-12-18 Thread Niranda Perera
ndActivity=1,450,424,068,623
> [2015-12-18 13:07:28,406]  WARN
> {org.wso2.carbon.device.mgt.iot.controlqueue.mqtt.MqttSubscriber} -  Lost
> Connection for client: b942b:raspberrypi to tcp://204.232.188.214:1883. This
> was due to - Timed out waiting for a response from the server (Sanitized)
> [2015-12-18 13:07:29,273]  INFO
> {org.wso2.carbon.device.mgt.iot.controlqueue.mqtt.MqttSubscriber} -
>  Subscribed with client id: b942b:raspberrypi
> [2015-12-18 13:07:29,273]  INFO
> {org.wso2.carbon.device.mgt.iot.controlqueue.mqtt.MqttSubscriber} -
>  Subscribed to topic: WSO2IoTServer/+/raspberrypi/+/publisher
> [2015-12-18 13:08:00,001]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Accelerometer_Sensor_Script for tenant id: -1234
> [2015-12-18 13:08:00,001]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Battery_Sensor_Script for tenant id: -1234
> [2015-12-18 13:08:00,002]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: GPS_Sensor_Script for tenant id: -1234
> [2015-12-18 13:08:00,002]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Gravity_Sensor_Script for tenant id: -1234
>
> --
> /sumedha
> m: +94 773017743
> b :  bit.ly/sumedha
>
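The INFO lines quoted above come from the
org.wso2.carbon.analytics.spark.core.AnalyticsTask logger, so raising that
logger's threshold should suppress them; a sketch, assuming the stock
repository/conf/log4j.properties:

    log4j.logger.org.wso2.carbon.analytics.spark.core.AnalyticsTask=WARN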



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][DAS][DSS] Error after installing DSS features on DAS

2015-12-16 Thread Niranda Perera
at
>> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
>> at
>> org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
>> at
>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
>> at
>> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
>> at
>> org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
>> at
>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
>> at
>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
>> at
>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
>> at
>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
>> at
>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
>> at
>> org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)
>> at
>> org.eclipse.equinox.http.servlet.internal.Activator.addProxyServlet(Activator.java:60)
>> at
>> org.eclipse.equinox.http.servlet.internal.ProxyServlet.init(ProxyServlet.java:40)
>> at
>> org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.init(DelegationServlet.java:38)
>> at
>> org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1284)
>> at
>> org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1197)
>> at
>> org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1087)
>> at
>> org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5262)
>> at
>> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5550)
>> at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
>> at
>> org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1575)
>> at
>> org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1565)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.lang.ClassNotFoundException:
>> com.codahale.metrics.json.MetricsModule cannot be found by
>> spark-core_2.10_1.4.1.wso2v1
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
>> at
>> org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
>> at
>> org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>> ... 155 more
>
>
>
> Thanks & Regards,
> Chanuka.
>
> --
> Chanuka Dissanayake
> *Software Engineer | **WSO2 Inc.*; http://wso2.com
>
> Mobile: +94 71 33 63 596
> Email: chan...@wso2.com
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
ature_Sensor_Script for tenant id: -1234
>>>>> [2015-12-16 15:11:01,132] ERROR
>>>>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>>>>> executing task: null
>>>>> java.util.ConcurrentModificationException
>>>>> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>>>>> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
>>>>> at java.util.AbstractCollection.toArray(AbstractCollection.java:195)
>>>>> at
>>>>> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.refreshIndexedTableArray(AnalyticsIndexedTableStore.java:46)
>>>>> at
>>>>> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.addIndexedTable(AnalyticsIndexedTableStore.java:37)
>>>>> at
>>>>> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.refreshIndexedTableStoreEntry(AnalyticsDataServiceImpl.java:512)
>>>>> at
>>>>> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.invalidateAnalyticsTableInfo(AnalyticsDataServiceImpl.java:525)
>>>>> at
>>>>> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.checkAndInvalidateTableInfo(AnalyticsDataServiceImpl.java:504)
>>>>> at
>>>>> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.setTableSchema(AnalyticsDataServiceImpl.java:495)
>>>>> at
>>>>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.insert(AnalyticsRelation.java:162)
>>>>> at
>>>>> org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
>>>>> at
>>>>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
>>>>> at
>>>>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
>>>>> at
>>>>> org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
>>>>> at
>>>>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>> at
>>>>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>> at
>>>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>>>>> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>>>>> at
>>>>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
>>>>> at
>>>>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
>>>>> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
>>>>> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
>>>>> at
>>>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
>>>>> at
>>>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
>>>>> at
>>>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
>>>>> at
>>>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
>>>>> at
>>>>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
>>>>> at
>>>>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>>>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>>>> at
>>>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>> at
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>> at java.lang.Thread.run(Thread.java:745)
>>>>> [2015-12-16 15:12:00,001]  INFO
>>>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>>>> schedule task for: Accelerometer_Sensor_Script for tenant id: -1234
>>>>>
>>>>> --
>>>>> /sumedha
>>>>> m: +94 773017743
>>>>> b :  bit.ly/sumedha
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Anjana Fernando*
>>>> Senior Technical Lead
>>>> WSO2 Inc. | http://wso2.com
>>>> lean . enterprise . middleware
>>>>
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Ayoma Wijethunga
>>> Software Engineer
>>> WSO2, Inc.; http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> Mobile : +94 (0) 719428123
>>> Blog : http://www.ayomaonline.com
>>> LinkedIn: https://www.linkedin.com/in/ayoma
>>>
>>
>>
>>
>> --
>> Ayoma Wijethunga
>> Software Engineer
>> WSO2, Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> Mobile : +94 (0) 719428123
>> Blog : http://www.ayomaonline.com
>> LinkedIn: https://www.linkedin.com/in/ayoma
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
Hi Gihan,

The memory can be set using the Spark conf parameters, i.e.
"spark.executor.memory".
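A sketch of the relevant properties, assuming they are set in
repository/conf/analytics/spark/spark-defaults.conf (the values here are
examples only, not recommendations):

    # memory allocated to the Spark driver and executors
    spark.driver.memory    512m
    spark.executor.memory  1g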

rgds

On Wed, Dec 16, 2015 at 7:01 PM, Gihan Anuruddha <gi...@wso2.com> wrote:

> Hi Niranda,
>
> So let's say we have to run embedded DAS in a memory-restricted environment.
> Where can I define the Spark memory allocation configuration?
>
> Regards,
> Gihan
>
> On Wed, Dec 16, 2015 at 6:55 PM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Sumedha,
>>
>> I checked the heap dump you provided, and its size is around 230MB.
>> I presume this was not an OOM scenario.
>>
>> As for the Spark memory usage, when you use Spark in the local mode, the
>> processing will happen inside that JVM itself. So, we have to make sure
>> that we allocate enough memory for that.
>>
>> Rgds
>>
>> On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando <anj...@wso2.com> wrote:
>>
>>> Hi Ayoma,
>>>
>>> Thanks for checking up on it. Actually, "getAllIndexedTables" doesn't
>>> return the Set here; it returns an array that was previously populated in
>>> the refresh operation, so there is no need to synchronize that method.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga <ay...@wso2.com>
>>> wrote:
>>>
>>>> And, I missed mentioning that when this race condition / state
>>>> corruption happens, all "get" operations performed on the Set/Map get
>>>> blocked, resulting in an OOM situation. [1
>>>> <http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html>]
>>>> has all of that explained nicely. I have checked a heap dump in a similar
>>>> situation, and if you take one, you will clearly see many threads waiting
>>>> to access this Set instance.
>>>>
>>>> [1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
>>>>
>>>> On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga <ay...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Anjana,
>>>>>
>>>>> Sorry, I didn't notice that you have already replied this thread.
>>>>>
>>>>> However, please consider my point on "getAllIndexedTables" as well.
>>>>>
>>>>> Thank you,
>>>>> Ayoma.
>>>>>
>>>>> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando <anj...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Sumedha,
>>>>>>
>>>>>> Thank you for reporting the issue. I've fixed the concurrent
>>>>>> modification exception issue; actually, both the methods
>>>>>> "addIndexedTable" and "removeIndexedTable" needed to be synchronized,
>>>>>> since they both work on the shared Set object there.
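A minimal sketch of the pattern Anjana describes (illustrative only, not the
actual carbon-analytics code; the class and member names are simplified):

    import java.util.HashSet;
    import java.util.Set;

    public class IndexedTableStore {
        private final Set<String> indexedTables = new HashSet<>();
        private volatile String[] indexedTableArray = new String[0];

        // both mutators are synchronized because they touch the shared Set
        public synchronized void addIndexedTable(String table) {
            indexedTables.add(table);
            refreshIndexedTableArray();
        }

        public synchronized void removeIndexedTable(String table) {
            indexedTables.remove(table);
            refreshIndexedTableArray();
        }

        // rebuild the snapshot array from the Set; always called under the lock
        private void refreshIndexedTableArray() {
            this.indexedTableArray = indexedTables.toArray(new String[0]);
        }

        // readers only see the pre-built array, so no lock is needed here
        public String[] getAllIndexedTables() {
            return indexedTableArray;
        }
    }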
>>>>>>
>>>>>> As for the OOM issue, can you please share a heap dump from when the
>>>>>> OOM happened, so we can see what is causing this? Also, I see there are
>>>>>> multiple scripts running at the same time, so this can actually be a
>>>>>> legitimate error too, where the server genuinely doesn't have enough
>>>>>> memory to continue its operations. @Niranda, please share if there is
>>>>>> any info on tuning Spark's memory requirements.
>>>>>>
>>>>>> Cheers,
>>>>>> Anjana.
>>>>>>
>>>>>> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe <sume...@wso2.com
>>>>>> > wrote:
>>>>>>
>>>>>>> We have DAS Lite included in IoT Server and several summarisation
>>>>>>> scripts deployed. The server is going OOM frequently with the following
>>>>>>> exception.
>>>>>>>
>>>>>>> Shouldn't this[1] method be synchronised?
>>>>>>>
>>>>>>> [1]
>>>>>>> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45
>>>>>>>
>>>>>>>
>>>>>>> >>>>>>>>>>>
>>>>>>> [2015-12-16 15:11:00,004]  INF

Re: [Dev] [DAS] Error reading from RDBMS

2015-12-10 Thread Niranda Perera
Hi Rukshan,

As per our offline discussion yesterday, this is due to the Spark JDBC
connector not being able to handle the ShortType. As a workaround, in the
migration process, you might have to add a script to alter the table schema
in the RDBMS and then read it back using the connector.
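A sketch of such a migration step, assuming MySQL and the table/column from
the quoted mail below (the target type is an example; pick whatever the
connector supports):

    -- widen the SMALLINT column so the Spark JDBC connector can map it
    ALTER TABLE APIThrottleSummaryData MODIFY COLUMN day INT;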

rgds

On Wed, Dec 9, 2015 at 1:38 PM, Rukshan Premathunga <ruks...@wso2.com>
wrote:

> Hi,
>
> I point to a MySQL table using CarbonJDBC. The RDBMS table contains the
> columns api, day, week, time, etc.
>
> But when reading values of the `day` column I got the following error.
>
> *SparkSQL >* select day from APIThrottleSummaryData;
> ERROR :  Job aborted due to stage failure: Task 0 in stage 11.0 failed 1
> times, most recent failure: Lost task 0.0 in stage 11.0 (TID 23,
> localhost): java.lang.IllegalArgumentException: Unsupported field
> StructField(day,ShortType,true)
> at
> org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConversions$1.apply(JDBCRDD.scala:342)
> at
> org.apache.spark.sql.jdbc.JDBCRDD$$anonfun$getConversions$1.apply(JDBCRDD.scala:329)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
> at org.apache.spark.sql.jdbc.JDBCRDD.getConversions(JDBCRDD.scala:329)
> at org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:374)
> at org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:350)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
> at org.apache.spark.scheduler.Task.run(Task.scala:70)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> Driver stacktrace:
> *SparkSQL >*
>
> any solutions?
>
> Thanks and regards.
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2015-12-10 Thread Niranda Perera
cuteQuery(CarbonAnalyticsProcessorService.java:199)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
>>> at
>>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
>>> at
>>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>>
>>> [1] https://docs.wso2.com/display/DAS300/Analyzing+HTTPD+Logs
>>>
>>> --
>>> Roshan Wijesena.
>>> Senior Software Engineer-WSO2 Inc.
>>> Mobile: *+94719154640*
>>> Email: ros...@wso2.com
>>> *WSO2, Inc. :** wso2.com <http://wso2.com/>*
>>> lean.enterprise.middleware.
>>>
>>
>>
>>
>> --
>> Roshan Wijesena.
>> Senior Software Engineer-WSO2 Inc.
>> Mobile: *+94719154640*
>> Email: ros...@wso2.com
>> *WSO2, Inc. :** wso2.com <http://wso2.com/>*
>> lean.enterprise.middleware.
>>
>
>
>
> --
> Roshan Wijesena.
> Senior Software Engineer-WSO2 Inc.
> Mobile: *+94719154640*
> Email: ros...@wso2.com
> *WSO2, Inc. :** wso2.com <http://wso2.com/>*
> lean.enterprise.middleware.
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Exception when saving Summarised data into Oracle

2015-12-08 Thread Niranda Perera
Hi Rukshan,

Seems like a jar is missing in the Spark classpath.

As a workaround, can you add the following line to the
repository/conf/analytics/spark/external-spark-classpath.conf file and see
if it works?

repository/components/plugins/org.xerial.snappy.snappy-java_1.1.1.7.jar

rgds

On Tue, Dec 8, 2015 at 11:01 PM, Rukshan Premathunga <ruks...@wso2.com>
wrote:

> Hi,
>
> One user mentioned that when integrating APIM+DAS, saving summarised data
> into an Oracle database produced the following exception.
>
> Does anyone know the cause of this?
>
>
> [2015-12-08 13:00:00,022] INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsT
> ask} - Executing the schedule task for: APIM_STAT_script for tenant id:
> -1234
> [2015-12-08 13:00:00,037] INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsT
> ask} - Executing the schedule task for: Throttle_script for tenant id:
> -1234
> Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:274)
> at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
> at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
> at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
> at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
> at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
> at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
> at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
> at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
> at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
> at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
> at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
> at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 15 more
>
>
> Thanks and Regards.
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS][SPARK] How to drop temporary tables from spark?

2015-12-03 Thread Niranda Perera
Hi Amani, please find my comments inline.

On Thu, Dec 3, 2015 at 6:29 PM, Amani Soysa <am...@wso2.com> wrote:

> Hi Niranda/Anjana,
>
> I wanted to change the schema of an existing temporary table in Spark;
> however, when I changed the schema while defining the table, it did not
> work because the table already existed.
>

This is a known bug in DAS 3.0.0, and it will be resolved in the 3.0.1 patch
release [1]. It happens because, currently, the schema of a table is
merged with the existing schema. With the [1] fix, you would get an option
to merge or replace the schema of a table.

[1] https://wso2.org/jira/browse/DAS-314


>
> So I tried to drop according to spark manual
>
> "DROP [TEMPORARY] TABLE [IF EXISTS] tbl_name"
>

I guess you are referring to the MySQL syntax here. In Spark SQL, drop table
queries are not supported, because in the Spark runtime a temporary table
is only a mapping to a physical table available in the datasource. Spark
will only pull/push data to/from it, and it would not delete data.


>
> and still it did not work. Finally I had to drop all the H2 tables to
> resolve this issue.
>
> Is there a best practice when changing the schema of spark tables?
>

Unfortunately, until the DAS-314 fix is released, you would have to create
a new table with the changed schema.
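For instance, something along these lines (a sketch; the table name and
schema here are illustrative):

    CREATE TEMPORARY TABLE studentsV2 USING CarbonAnalytics
        OPTIONS (tableName "STUDENTS_V2",
                 schema "id STRING, name STRING, score INT");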


>
>
> Regards,
> Amani
> --
> Amani Soysa
> Associate Technical Lead
> Mobile: +94772325528
> WSO2, Inc. | http://wso2.com/
> Lean . Enterprise . Middleware
>


Hope I answered your question.

Rgds

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [DSS] Removing the timezone information from the response

2015-11-11 Thread Niranda Perera
Hi all,
DSS (v3.2.2) adds the hosting server's timezone to the time data types of the
response.
E.g.: 2014-01-11T06:00:00.000 will be converted into
2014-01-11T06:00:00.000+05:30.
Is there a way to avoid this when the data does not contain a timezone?

I tried setting the system variable -Duser.timezone=UTC, but that will
result in something like the below:
2015-10-26T08:36:33.000+00:00

So the consumer will always see it as UTC/GMT time. In a scenario where the
DBs are in different timezones, that will not be reflected in the service
response.

I found this public jira [1], and I wonder if this serves my purpose.

cheers

[1] https://wso2.org/jira/browse/DS-1069

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Escaping "/" in a search query in DAS

2015-11-10 Thread Niranda Perera
copying @gimantha

On Wed, Nov 11, 2015 at 10:29 AM, Rukshan Premathunga <ruks...@wso2.com>
wrote:

> Hi Kalpa,
>
> Try this; it worked for me.
>
> You need to escape the escape character as well.
> {
>  "tableName":"ORG_WSO2_CARBON_MSS_HTTPMONITORING",
>  "query":"request_uri:\\/student\\/883343913V",
>  "start":0,
>  "count":100
> }
>
> or try with exact searching,
>
> {
>  "tableName":"ORG_WSO2_CARBON_MSS_HTTPMONITORING",
>  "query":"request_uri:\"/student/883343913V\"",
>  "start":0,
>  "count":100
> }
>
> Thanks and Regards.
>
>
> On Wed, Nov 11, 2015 at 9:55 AM, Kalpa Welivitigoda <kal...@wso2.com>
> wrote:
>
>> Hi DAS team,
>>
>> I am trying a search query via the REST API [1] in DAS 3.0.0 and [2] is
>> my payload of the POST request. When I try to invoke with [3]
>> (request-count.json has the payload) I get the following as the response,
>>
>> {"status":"failed","message":"Error in index search: Invalid query, a
>> term must have a field"}
>>
>> It works fine if I change the query element as in [4].
>>
>> I found that "/" is a special character in Lucene [5] and tried with [6],
>> but I still get the same error response.
>>
>> Any idea of what is missing here?
>>
>>
>> [1]
>> https://docs.wso2.com/display/DAS300/Retrieving+All+Records+Matching+the+Given+Search+Query+via+REST+API
>>
>> [2]
>> {
>>  "tableName":"ORG_WSO2_CARBON_MSS_HTTPMONITORING",
>>  "query":"request_uri:/student/883343913V",
>>  "start":0,
>>  "count":100
>> }
>>
>> [3] curl -k -H "Content-Type: application/json" -H "Authorization: Basic
>> YWRtaW46YWRtaW4=" -d...@request-count.json
>> https://localhost:9443/analytics/search
>>
>> [4] "query":"service_method:getStudent",
>>
>> [5]
>> https://lucene.apache.org/core/5_2_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Escaping_Special_Characters
>>
>> [6] "query":"request_uri:\/student\/883343913V",
>>
>> --
>> Best Regards,
>>
>> Kalpa Welivitigoda
>> Software Engineer, WSO2 Inc. http://wso2.com
>> Email: kal...@wso2.com
>> Mobile: +94776509215
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [APIM] [DAS] Error in publishing APIM stats to DAS

2015-11-04 Thread Niranda Perera
mmit when autocommit=true
>>>>>>>> java.lang.RuntimeException: Can't call commit when autocommit=true
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.jdbc.carbon.JDBCRelation.insert(JDBCRelation.scala:193)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>>>>> at
>>>>>>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
>>>>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
>>>>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
>>>>>>>> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
>>>>>>>> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
>>>>>>>>
>>>>>>>>
>>>>>>>> Immediately after that error the below error was there.
>>>>>>>>
>>>>>>>> [2015-11-04 12:15:00,027] ERROR
>>>>>>>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>>>>>>>> executing task: Error while connecting to datasource WSO2AM_STATS_DB :
>>>>>>>> Table 'TP_WSO2AM_STATS_DB.API_REQUEST_SUMMARY' doesn't exist
>>>>>>>> java.lang.RuntimeException: Error while connecting to datasource
>>>>>>>> WSO2AM_STATS_DB : Table 'TP_WSO2AM_STATS_DB.API_REQUEST_SUMMARY' 
>>>>>>>> doesn't
>>>>>>>> exist
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.jdbc.carbon.JDBCRelation.liftedTree1$1(JDBCRelation.scala:143)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.jdbc.carbon.JDBCRelation.<init>(JDBCRelation.scala:137)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.jdbc.carbon.AnalyticsJDBCRelationProvider.createRelation(JDBCRelation.scala:119)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:269)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.sources.CreateTempTableUsing.run(ddl.scala:412)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
>>>>>>>> at
>>>>>>>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
>>>>>>>> at
>>>>>>>> org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
>>>>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
>>>>>>>> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
>>>>>>>> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
>>>>>>>> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
>>>>>>>>
>>>>>>>>
>>>>>>>> As the error message says, the
>>>>>>>> 'TP_WSO2AM_STATS_DB.API_REQUEST_SUMMARY' table was gone!
>>>>>>>>
>>>>>>>> *I added the relaxAutoCommit=true property to the JDBC connection
>>>>>>>> string to solve the first error message, and the stat feature worked.*
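A sketch of the resulting JDBC URL (host, port and database name are
illustrative; relaxAutoCommit is the MySQL Connector/J property mentioned
above):

    jdbc:mysql://localhost:3306/TP_WSO2AM_STATS_DB?relaxAutoCommit=true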
>>>>>>>>
>>>>>>>> Is the table deletion a result of the first error (trying to
>>>>>>>> commit when auto-commit is enabled)?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> Rushmin
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> [1] -
>>>>>>>> http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> *Rushmin Fernando*
>>>>>>>> *Technical Lead*
>>>>>>>>
>>>>>>>> WSO2 Inc. <http://wso2.com/> - Lean . Enterprise . Middleware
>>>>>>>>
>>>>>>>> email : rush...@wso2.com
>>>>>>>> mobile : +94772310855
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> ___
>>>>>>>> Dev mailing list
>>>>>>>> Dev@wso2.org
>>>>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> *Sinthuja Rajendran*
>>>>>>> Associate Technical Lead
>>>>>>> WSO2, Inc.:http://wso2.com
>>>>>>>
>>>>>>> Blog: http://sinthu-rajan.blogspot.com/
>>>>>>> Mobile: +94774273955
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Rushmin Fernando*
>>>>>> *Technical Lead*
>>>>>>
>>>>>> WSO2 Inc. <http://wso2.com/> - Lean . Enterprise . Middleware
>>>>>>
>>>>>> email : rush...@wso2.com
>>>>>> mobile : +94772310855
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Sinthuja Rajendran*
>>>>> Associate Technical Lead
>>>>> WSO2, Inc.:http://wso2.com
>>>>>
>>>>> Blog: http://sinthu-rajan.blogspot.com/
>>>>> Mobile: +94774273955
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Rushmin Fernando*
>>>> *Technical Lead*
>>>>
>>>> WSO2 Inc. <http://wso2.com/> - Lean . Enterprise . Middleware
>>>>
>>>> email : rush...@wso2.com
>>>> mobile : +94772310855
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> *Sinthuja Rajendran*
>>> Associate Technical Lead
>>> WSO2, Inc.:http://wso2.com
>>>
>>> Blog: http://sinthu-rajan.blogspot.com/
>>> Mobile: +94774273955
>>>
>>>
>>>
>>
>>
>> --
>> *Rushmin Fernando*
>> *Technical Lead*
>>
>> WSO2 Inc. <http://wso2.com/> - Lean . Enterprise . Middleware
>>
>> email : rush...@wso2.com
>> mobile : +94772310855
>>
>>
>>
>
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] WSO2 DAS 3.0.0 issue: failed to get records from table

2015-10-31 Thread Niranda Perera
va:159)
> at
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
> at
> org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
> at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
> at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
> at
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
> at
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
> at
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
> at
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:150)
> at java.net.SocketInputStream.read(SocketInputStream.java:121)
> at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
> at sun.security.ssl.InputRecord.read(InputRecord.java:480)
> at
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
> at
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
> at
> org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
> at
> org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
> at
> org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
> at
> org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.readLine(MultiThreadedHttpConnectionManager.java:1413)
> at
> org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
> at
> org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
> at
> org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
> at
> org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
> at
> org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
> at
> org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
> at
> org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:630)
> at
> org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:195)
> ... 101 more
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 Data Analytics Server 3.0.0 RC3

2015-10-27 Thread Niranda Perera
Hi all,

I tested the following features (using Java 8) and found that they are
working as expected.

- Spark clustering
- Fail-over scenarios in the analytics member nodes (Master failover,
worker failover, analytics cluster failover)

my vote,
[X] Stable - go ahead and release

cheers

On Tue, Oct 27, 2015 at 11:58 PM, Vidura Gamini Abhaya <vid...@wso2.com>
wrote:

> Hi,
>
> I've tested the following features / samples and experienced no issues.
>
> - General functionality
> - Samples - Sending Notifications through Published events using spark,
> HTTPD logs, Smart Home, Realtime Service Stats, Wikipedia Data
> - CApp deployment
> - Gadget Creation
> - Custom Dashboard Creation
> - JVM Metrics
> - Siddhi Try It / Event Simulator
>
> My vote - [x] Stable - go ahead and release
>
> Thanks and Regards,
>
> Vidura
>
>
>
> On 27 October 2015 at 17:53, Gokul Balakrishnan <go...@wso2.com> wrote:
>
>> Hi all,
>>
>> This is the third release candidate of WSO2 DAS 3.0.0. Please download,
>> test and vote. The vote will be open for 72 hours or as needed.
>>
>>
>> This release fixes the following issues:
>> https://wso2.org/jira/issues/?filter=12485
>>
>> Source & binary distribution files:
>> https://svn.wso2.org/repos/wso2/people/gokul/das/rc3/
>>
>> Maven staging repo:
>> http://maven.wso2.org/nexus/content/repositories/orgwso2das-077/
>>
>> The tag to be voted upon:
>> https://github.com/wso2/product-das/tree/v3.0.0-RC3
>>
>>
>> [ ] Broken - do not release (explain why)
>> [ ] Stable - go ahead and release
>>
>>
>> Thanks,
>> The WSO2 DAS Team.
>>
>> --
>> Gokul Balakrishnan
>> Senior Software Engineer,
>> WSO2, Inc. http://wso2.com
>> Mob: +94 77 593 5789 | +1 650 272 9927
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Vidura Gamini Abhaya, Ph.D.
> Director of Engineering
> M:+94 77 034 7754
> E: vid...@wso2.com
>
> WSO2 Inc. (http://wso2.com)
> lean.enterprise.middleware
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 Data Analytics Server 3.0.0 RC2

2015-10-26 Thread Niranda Perera
Hi all,

We are calling off the vote due to the following reasons:

- DAS-289 [1]
- upgrading Siddhi, which has a critical bug fix

rgds

[1] https://wso2.org/jira/browse/DAS-289



On Mon, Oct 26, 2015 at 1:31 PM, Sinthuja Ragendran <sinth...@wso2.com>
wrote:

> Hi,
>
> I did a smoke test on below, and didn't get any issues.
>
> 1) Scripts add/edit/execute/execute-in-backgroud/scheduling
> 2) Activity Explorer
> 3) Message Explorer
> 4) Dashboard
>
> [x] Stable - Go ahead and Release
>
> Thanks,
> Sinthuja.
>
> On Mon, Oct 26, 2015 at 9:09 AM, Rukshan Premathunga <ruks...@wso2.com>
> wrote:
>
>> Hi NuwanD,
>>
>> I'll test this.
>>
>> Thanks and Regards.
>>
>> On Mon, Oct 26, 2015 at 9:02 AM, Nuwan Dias <nuw...@wso2.com> wrote:
>>
>>> Hi Rukshan,
>>>
>>> Shall we test this release with the CApp we created for API Analytics?
>>>
>>> Thanks,
>>> NuwanD.
>>>
>>> On Mon, Oct 26, 2015 at 1:18 AM, Sachith Withana <sach...@wso2.com>
>>> wrote:
>>>
>>>> I've tested the following,
>>>>
>>>> * Stream creation / persistence / indexing / data explorer
>>>> * Interactive Analytics Search
>>>>
>>>> * Both in the Standalone mode with H2/Cassandra and Clustered Mode with
>>>> Cassandra,
>>>>
>>>> - Deploying and testing the Smart Home sample
>>>> - UDF functionality of Spark
>>>> - Backup Tool functionality
>>>> - Migration Tool functionality
>>>>
>>>> [x] Stable - Go ahead and Release
>>>>
>>>> On Sun, Oct 25, 2015 at 3:01 PM, Anjana Fernando <anj...@wso2.com>
>>>> wrote:
>>>>
>>>>> I've tested the following:-
>>>>>
>>>>> * Stream creation / data persistence / data explorer
>>>>>   - Primary key based persistence
>>>>> * Interactive analytics search
>>>>> * Custom gadget/dashboard creation
>>>>> * Deployed and tested following samples:-
>>>>>   - HTTPD
>>>>>   - Smart Home
>>>>>   - Wikipedia
>>>>> * Spark script execution
>>>>>   - Scheduled execution
>>>>>   - Execute in foreground
>>>>>   - Execute in background
>>>>>
>>>>> [X] Stable - go ahead and release
>>>>>
>>>>> Cheers,
>>>>> Anjana.
>>>>>
>>>>> On Sat, Oct 24, 2015 at 8:34 PM, Gokul Balakrishnan <go...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Devs,
>>>>>>
>>>>>> This is the second  release candidate of WSO2 DAS 3.0.0. Please
>>>>>> download, test and vote. The vote will be open for 72 hours or as needed.
>>>>>>
>>>>>>
>>>>>> This release fixes the following issues:
>>>>>> https://wso2.org/jira/issues/?filter=12474
>>>>>>
>>>>>> Source & binary distribution files:
>>>>>> https://svn.wso2.org/repos/wso2/people/gokul/das/rc2/
>>>>>>
>>>>>> Maven staging repo:
>>>>>> http://maven.wso2.org/nexus/content/repositories/orgwso2das-059/
>>>>>>
>>>>>> The tag to be voted upon:
>>>>>> https://github.com/wso2/product-das/tree/v3.0.0-RC2
>>>>>>
>>>>>>
>>>>>> [ ] Broken - do not release (explain why)
>>>>>> [ ] Stable - go ahead and release
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> The WSO2 DAS Team.
>>>>>>
>>>>>> --
>>>>>> Gokul Balakrishnan
>>>>>> Senior Software Engineer,
>>>>>> WSO2, Inc. http://wso2.com
>>>>>> Mob: +94 77 593 5789 | +1 650 272 9927
>>>>>>
>>>>>> ___
>>>>>> Dev mailing list
>>>>>> Dev@wso2.org
>>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Anjana Fernando*
>>>>> Senior Technical Lead
>>>>> WSO2 Inc. | http://wso2.com
>>>>> lean . enterprise . middleware
>>>>>
>>>>> ___
>>>>> Dev mailing list
>>>>> Dev@wso2.org
>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Sachith Withana
>>>> Software Engineer; WSO2 Inc.; http://wso2.com
>>>> E-mail: sachith AT wso2.com
>>>> M: +94715518127
>>>> Linked-In: <http://goog_416592669>
>>>> https://lk.linkedin.com/in/sachithwithana
>>>>
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>>>
>>>
>>>
>>> --
>>> Nuwan Dias
>>>
>>> Technical Lead - WSO2, Inc. http://wso2.com
>>> email : nuw...@wso2.com
>>> Phone : +94 777 775 729
>>>
>>
>>
>>
>> --
>> Rukshan Chathuranga.
>> Software Engineer.
>> WSO2, Inc.
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> *Sinthuja Rajendran*
> Associate Technical Lead
> WSO2, Inc.:http://wso2.com
>
> Blog: http://sinthu-rajan.blogspot.com/
> Mobile: +94774273955
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Hazelcast Issues in EC2 (Kernel 4.4.2)

2015-10-25 Thread Niranda Perera
>>> com.hazelcast.spi.exception.CallerNotMemberException: Not Member!
>>> caller:Address[172.18.1.228]:4000, partitionId: 30, operation:
>>> com.hazelcast.map.impl.operation.GetOperation, service: hz:impl:mapService
>>> at
>>> com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.ensureValidMember(OperationRunnerImpl.java:336)
>>> at
>>> com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:308)
>>> at
>>> com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.processPacket(OperationThread.java:142)
>>> at
>>> com.hazelcast.spi.impl.operationexecutor.classic.OperationThread.process(OperationThread.java:115)
>>>
>>> 
>>>
>>>  ERROR {com.hazelcast.cluster.impl.operations.JoinCheckOperation} -
>>> [172.18.1.229]:4000 [wso2.qa.das.domain] [3.5.2] Cannot send response:
>>> JoinRequest{packetVersion=4, buildNumber=20150826,
>>> address=Address[172.18.1.229]:4000,
>>> uuid='6848559f-141d-47cd-9e75-53bb2de10d52', credentials=null,
>>> memberCount=4, tryCount=0} to Address[172.18.1.228]:4000
>>> {com.hazelcast.cluster.impl.operations.JoinCheckOperation}
>>> com.hazelcast.core.HazelcastException: Cannot send response:
>>> JoinRequest{packetVersion=4, buildNumber=20150826,
>>> address=Address[172.18.1.229]:4000,
>>> uuid='6848559f-141d-47cd-9e75-53bb2de10d52', credentials=null,
>>> memberCount=4, tryCount=0} to Address[172.18.1.228]:4000
>>> at
>>> com.hazelcast.spi.impl.ResponseHandlerFactory$RemoteInvocationResponseHandler.sendResponse(ResponseHandlerFactory.java:131)
>>> at
>>> com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.handleResponse(OperationRunnerImpl.java:204)
>>>
>>> 
>>>
>>> Has anyone seen issues similar to the above in EC2? And what could cause
>>> it? The most obvious suspect would be the networking issues that can arise
>>> in EC2, perhaps related to the higher latency there. Does anyone know how
>>> to tune HZ in this situation to use higher timeout values or anything like
>>> that, or is doing this in EC2 altogether not recommended?
>>>
>>> [1] https://wso2.org/jira/browse/DAS-301
>>> [2] https://wso2.org/jira/browse/DAS-302
>>>
>>> Cheers,
>>> Anjana.
>>> --
>>> *Anjana Fernando*
>>> Senior Technical Lead
>>> WSO2 Inc. | http://wso2.com
>>> lean . enterprise . middleware
>>>
>>
>>
>>
>> --
>> *Anjana Fernando*
>> Senior Technical Lead
>> WSO2 Inc. | http://wso2.com
>> lean . enterprise . middleware
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Thusitha Dayaratne
> Software Engineer
> WSO2 Inc. - lean . enterprise . middleware |  wso2.com
>
> Mobile  +94712756809
> Blog  alokayasoya.blogspot.com
> Abouthttp://about.me/thusithathilina
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] MySQL Database and Table creation when using carbonJDBC option in spark environment

2015-10-01 Thread Niranda Perera
Hi Thanuja and Imesh,

Let me clarify the use of the term "create temporary table" with regard to
Spark.
Inside DAS we save ('persist') data in DAL (Data Access Layer) tables. In
order for us to query these tables, Spark needs a mapping to the DAL
tables in its runtime environment. That mapping is what the temporary
table queries create: the temp tables are only a mapping, not physical
tables.

@Thanuja, yes, you are correct! We have to manually create the tables in
MySQL before making the temp table mapping in Spark SQL.
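
As a rough sketch of the whole flow (the stream table name and the columns
here are illustrative placeholders, not from this thread):

-- maps an existing MySQL table into the Spark runtime; no physical table
-- is created on the Spark side
create temporary table sample using CarbonJDBC options
  (dataSource "sample_datasource", tableName "sample_table");

-- maps a persisted DAL table (an event stream) in the same way
create temporary table events using CarbonAnalytics options
  (tableName "ORG_EXAMPLE_SAMPLE_STREAM");

-- writing through the mapping updates the underlying MySQL table, which
-- must already exist with a matching schema
insert overwrite table sample select id, name, countN from events;
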
On Thu, Oct 1, 2015 at 9:53 AM, Thanuja Uruththirakodeeswaran <
thanu...@wso2.com> wrote:
>
>
> In the DAS Spark environment, we can't directly insert the analyzed data
> into our MySQL table. We have to create a temporary table over our
> datasources to manipulate them.
>

Yes, can you please explain the reasons? What does this analysis do? Why
can't we insert the data directly into the RDBMS?

Thanks

On Thu, Oct 1, 2015 at 9:53 AM, Thanuja Uruththirakodeeswaran <
thanu...@wso2.com> wrote:

> Hi Imesh,
>
> If we take the above scenario, I need to insert the analyzed/aggregated
> data, which is obtained as the result of Spark SQL processing, into my
> MySQL table (sample_table). In order to do that, we first need to create a
> temporary table in the Spark environment using the corresponding MySQL
> datasource (sample_datasource) and table (sample_table); only then, by
> inserting data into this temporary table, can we update our MySQL table.
>
> In the DAS Spark environment, we can't directly insert the analyzed data
> into our MySQL table. We have to create a temporary table over our
> datasources to manipulate them. I think that's why it is named a
> '*temporary*' table.
>
> @Niranda Please correct me if I'm wrong.
>
> Thanks.
>
> On Thu, Oct 1, 2015 at 7:00 AM, Imesh Gunaratne  wrote:
>
>> Hi Thanuja,
>>
>> Can you please explain the purpose of these temporary tables?
>>
>> Thanks
>>
>> On Wed, Sep 30, 2015 at 11:53 PM, Thanuja Uruththirakodeeswaran <
>> thanu...@wso2.com> wrote:
>>
>>> Hi All,
>>>
>>> When we create temporary tables in spark environment using carbonJDBC
>>> option as explained in [1], we are using a datasource and tableName from
>>> which spark environment temporary table will get data as follow:
>>> CREATE TEMPORARY TABLE <temp_table_name> using CarbonJDBC options
>>> (dataSource "<datasource_name>", tableName "<physical_table_name>");
>>>
>>> I've used a mysql database (sample_datasource) for datasource and used
>>> mysql tables created in that database for tableName (sample_table) as
>>> follow:
>>> CREATE TEMPORARY TABLE sample using CarbonJDBC options (dataSource "
>>> sample_datasource", tableName "sample_table");
>>>
>>> But I'm creating the mysql database and tables by executing sql
>>> statements manually. Is there a way in DAS that we can add these sql
>>> statements inside a script and create the database and tables when we start
>>> the server?
>>>
>>> [1]. https://docs.wso2.com/display/DAS300/Spark+Query+Language
>>>
>>> Thanks.
>>>
>>> --
>>> Thanuja Uruththirakodeeswaran
>>> Software Engineer
>>> WSO2 Inc.;http://wso2.com
>>> lean.enterprise.middleware
>>>
>>> mobile: +94 774363167
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.gunaratne.org
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Thanuja Uruththirakodeeswaran
> Software Engineer
> WSO2 Inc.;http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 774363167
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.gunaratne.org
Lean . Enterprise . Middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Class not found exception when running a spark query with HBase

2015-09-20 Thread Niranda Perera
 lost)
> Driver stacktrace:
> at org.apache.spark.scheduler.DAGScheduler.org
> $apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
> at
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at
> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
> at scala.Option.foreach(Option.scala:236)
> at
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> TID: [-1] [] [2015-09-18 09:36:39,600] ERROR
> {org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend} -  Asked
> to remove non-existent executor 1
> {org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend}
>
>
> Thank you!
> --
>
> *Pubudu Gunatilaka*
> Software Engineer
> WSO2, Inc.: http://wso2.com
> lean.enterprise.middleware
> mobile:  +94 77 4078049
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Extract year from timestamp with Spark in DAS 3.0.0

2015-09-13 Thread Niranda Perera
Hi Jorge and Gimantha,

UDFs are the way to go. Please go through the blog post Gimantha has
posted.

We will be updating the DAS docs on UDFs soon.
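
As a minimal sketch of the query side, assuming a UDF named extract_year
has been implemented in Java, dropped into repository/components/lib and
listed in spark-udf-config.xml (the UDF and table names are illustrative):

-- payload_fecha holds epoch milliseconds, as in the Hive query above
create temporary table events using CarbonAnalytics options
  (tableName "ORG_EXAMPLE_EVENT_STREAM");
select extract_year(payload_fecha) as year, count(*) as cnt
  from events group by extract_year(payload_fecha);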

rgds

On Sat, Sep 12, 2015 at 10:23 AM, Gimantha Bandara <giman...@wso2.com>
wrote:

> Hi Jorge,
>
> You can use Spark UDFs to write a function to extract the year from epoch
> time. Please refer to the blog [1] for example implementation of UDFs.
>
> @Niranda, is there any other way to achieve the same?
>
> [1]
> http://thanu912.blogspot.com/2015/08/using-user-defined-function-udf-in.html
>
> On Fri, Sep 11, 2015 at 4:07 PM, Jorge <isildur...@gmail.com> wrote:
>
>> the question in SO:
>>
>> http://stackoverflow.com/questions/32531169/extract-year-from-timestamp-with-spark-sql-in-wso2-das
>>
>> Jorge.
>>
>>
>> 2015-09-10 12:32 GMT-04:00 Jorge <isildur...@gmail.com>:
>>
>>> Hi folks,
>>>
>>> In my Hive scripts, if I wanted to extract the year from a timestamp I
>>> used this:
>>>
>>> year(from_unixtime(cast(payload_fecha/1000 as BIGINT),'yyyy-MM-dd
>>> HH:mm:ss.SSS' )) as year
>>>
>>> now I'm testing the new DAS snapshot and I want to do the same, but I
>>> cannot use *from_unixtime*.
>>> So how can I do the same in spark SQL?
>>>
>>> Regards,
>>>Jorge.
>>>
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Gimantha Bandara
> Software Engineer
> WSO2. Inc : http://wso2.com
> Mobile : +94714961919
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error creating table using CarbonJDBC

2015-09-08 Thread Niranda Perera
lve.java:950)
> at
> org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
> at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
> at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
> at
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
> at
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
> at
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
> at
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> at java.lang.Thread.run(Thread.java:745)
>
> Thanks.
>
> --
> Thanuja Uruththirakodeeswaran
> Software Engineer
> WSO2 Inc.;http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 774363167
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error creating table using CarbonJDBC

2015-09-08 Thread Niranda Perera
Hi Thanuja,

I think this is the issue.

analytics-datasources.xml describes the datasource as


<datasource>
    <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
    <description>The datasource used for analytics record store</description>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_PROCESSED_DATA_STORE</url>
            <username>root</username>
            <password>root</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>6</maxActive>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>3</validationInterval>
            <defaultAutoCommit>false</defaultAutoCommit>
        </configuration>
    </definition>
</datasource>





so, your query,

create temporary table cluster_member using CarbonJDBC options
(dataSource "ANALYTICS_PROCESSED_DATA_STORE", tableName
"CLUSTER_MEMBER");

the datasource name has to be "WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB".
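
i.e. the corrected statement would be:

create temporary table cluster_member using CarbonJDBC options
(dataSource "WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB", tableName
"CLUSTER_MEMBER");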

can you check this and let me know?


cheers



On Wed, Sep 9, 2015 at 9:02 AM, Thanuja Uruththirakodeeswaran <
thanu...@wso2.com> wrote:

> Hi Niranda,
>
> I've attached the files here. I tested the connection via the DAS
> datasource "Test Connection" option. It says "Connection is healthy". Do I
> need to pass any parameter in the analytics-datasources.xml file for the
> MySQL DB?
>
> Thanks.
>
> On Tue, Sep 8, 2015 at 11:25 PM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi thanuja,
>>
>> can you attach the config files in repository/conf/analytics/* and
>> repository/conf/datasources/*
>>
>> I suspect this is a connection issue.
>>
>> rgds
>>
>> On Tue, Sep 8, 2015 at 5:28 PM, Thanuja Uruththirakodeeswaran <
>> thanu...@wso2.com> wrote:
>>
>>> Hi Niranda,
>>>
>>> I'm trying the following query in spark:
>>>
>>> create temporary table cluster_member using CarbonJDBC options
>>> (dataSource "ANALYTICS_PROCESSED_DATA_STORE", tableName "CLUSTER_MEMBER");
>>>
>>> But I'm getting the following error. How to fix this?
>>>
>>> org.apache.axis2.AxisFault: Exception occurred while trying to invoke
>>> service method execute
>>> at
>>> org.apache.axis2.util.Utils.getInboundFaultFromMessageContext(Utils.java:531)
>>> at
>>> org.apache.axis2.description.OutInAxisOperationClient.handleResponse(OutInAxisOperation.java:370)
>>> at
>>> org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:445)
>>> at
>>> org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:225)
>>> at
>>> org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
>>> at
>>> org.wso2.carbon.analytics.spark.admin.stub.AnalyticsProcessorAdminServiceStub.execute(AnalyticsProcessorAdminServiceStub.java:912)
>>> at
>>> org.wso2.carbon.analytics.spark.ui.client.AnalyticsExecutionClient.executeScriptContent(AnalyticsExecutionClient.java:67)
>>> at
>>> org.apache.jsp.spark_002dmanagement.executeScript_005fajaxprocessor_jsp._jspService(executeScript_005fajaxprocessor_jsp.java:110)
>>> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>>> at
>>> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
>>> at
>>> org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:395)
>>> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:339)
>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>>> at org.wso2.carbon.ui.JspServlet.service(JspServlet.java:155)
>>> at org.wso2.carbon.ui.TilesJspServlet.service(TilesJspServlet.java:80)
>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>>> at
>>> org.eclipse.equinox.http.helper.ContextPathServletAdaptor.service(ContextPathServletAdaptor.java:37)
>>> at
>>> org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
>>> at
>>> org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
>>> at
>>> org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
>>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>>> at
>>> org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
>>> at
>>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
>>> at
>>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
>>> at org.apach

Re: [Dev] [DAS] Error creating table using CarbonJDBC

2015-09-08 Thread Niranda Perera
a:421)
> at
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1074)
> at
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
> at
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1739)
> at
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1698)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> at java.lang.Thread.run(Thread.java:745)
>
> I checked my local mysql db as well.
>
> mysql> show databases;
> +--------------------------------+
> | Database                       |
> +--------------------------------+
> | information_schema             |
> | ANALYTICS_FS_DB                |
> | ANALYTICS_PROCESSED_DATA_STORE |
> | as_config_db                   |
> | mysql                          |
> | performance_schema             |
> | ppaas_config_db                |
> | ppaas_registry_db              |
> | ppaas_user_db                  |
> +--------------------------------+
> 9 rows in set (0.00 sec)
>
> mysql> SELECT TABLE_NAME  FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE
> = 'BASE TABLE' AND TABLE_SCHEMA='ANALYTICS_PROCESSED_DATA_STORE';
>
> +-----------------+
> | TABLE_NAME      |
> +-----------------+
> | ANX___7LgRmgTc_ |
> | ANX___7Lu5N_U8_ |
> | CLUSTER_MEMBER  |
> +-----------------+
> 3 rows in set (0.00 sec)
>
> mysql>
>
> Thanks.
>
>
>
> On Wed, Sep 9, 2015 at 10:02 AM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Thanuja,
>>
>> I think this is the issue.
>>
>> analytics-datasources.xml describes the datasource as
>>
>> <datasource>
>>     <name>WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB</name>
>>     <description>The datasource used for analytics record store</description>
>>     <definition type="RDBMS">
>>         <configuration>
>>             <url>jdbc:mysql://127.0.0.1:3306/ANALYTICS_PROCESSED_DATA_STORE</url>
>>             <username>root</username>
>>             <password>root</password>
>>             <driverClassName>com.mysql.jdbc.Driver</driverClassName>
>>             <maxActive>6</maxActive>
>>             <testOnBorrow>true</testOnBorrow>
>>             <validationQuery>SELECT 1</validationQuery>
>>             <validationInterval>3</validationInterval>
>>             <defaultAutoCommit>false</defaultAutoCommit>
>>         </configuration>
>>     </definition>
>> </datasource>
>>
>>
>> so, your query,
>>
>> create temporary table cluster_member using CarbonJDBC options (dataSource 
>> "ANALYTICS_PROCESSED_DATA_STORE", tableName "CLUSTER_MEMBER");
>>
>> the datasource name has to be "WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB".
>>
>> can you check this and let me know?
>>
>>
>> cheers
>>
>>
>>
>> On Wed, Sep 9, 2015 at 9:02 AM, Thanuja Uruththirakodeeswaran <
>> thanu...@wso2.com> wrote:
>>
>>> Hi Niranda,
>>>
>>> I've attached the files here. I tested the connection via the DAS
>>> datasource "Test Connection" option. It says "Connection is healthy". Do I
>>> need to pass any parameter in the analytics-datasources.xml file for the
>>> MySQL DB?
>>>
>>> Thanks.
>>>
>>> On Tue, Sep 8, 2015 at 11:25 PM, Niranda Perera <nira...@wso2.com>
>>> wrote:
>>>
>>>> Hi thanuja,
>>>>
>>>> can you attach the config files in repository/conf/analytics/* and
>>>> repository/conf/datasources/*
>>>>
>>>> I suspect this is a connection issue.
>>>>
>>>> rgds
>>>>
>>>> On Tue, Sep 8, 2015 at 5:28 PM, Thanuja Uruththirakodeeswaran <
>>>> thanu...@wso2.com> wrote:
>>>>
>>>>> Hi Niranda,
>>>>>
>>>>> I'm trying the following query in spark:
>>>>>
>>>>> create temporary table cluster_member using CarbonJDBC options
>>>>> (dataSource "ANALYTICS_PROCESSED_DATA_STORE", tableName "CLUSTER_MEMBER");
>>>>>
>>>>> But I'm getting the following error. How to fix this?
>>>>>
>>>>> org.apache.axis2.AxisFault: Exception occurred while trying to invoke
>>>>> service method execute
>>>>> at
>>>>> org.apache.axis2.util.Utils.getInboundFaultFromMessageContext(Utils.java:531)
>>>>> at
>>>>> org.apache.axis2.description.OutInAxisOperationClient.handleResponse(OutInA

Re: [Dev] MySQL error while executing dbscripts in windows env

2015-09-07 Thread Niranda Perera
Hi all,

Thanks for the feedback.

I think this should be changed kernel-wide. Please follow the thread [1].

cheers

[1] [Dev] [Carbon] mysql dbscripts using values > 255 for varchar for
unique keys

On Mon, Sep 7, 2015 at 3:37 PM, Aiyadurai Rajeevan <rajeev...@wso2.com>
wrote:

> +1 for Gokul suggestion(VARCHAR(255))
>
> I had same issue and fixed it by changing to VARCHAR(255)
>
> Thanks & Regards,
> S.A.Rajeevan
> Software Engineer WSO2 Inc
> E-Mail: rajeev...@wso2.com | Mobile : +94776411636
>
> On Mon, Sep 7, 2015 at 12:51 AM, Gokul Balakrishnan <go...@wso2.com>
> wrote:
>
>> Hi Niranda,
>>
>> IMO we should fix this properly, by setting the type to VARCHAR(255)
>> rather than VARCHAR(256) for fields with unique indices, in the place where
>> the tables are created. This is because we can't mandate users to have a
>> specific character set (especially latin1). Setting the engine to MyISAM
>> during table creation would also solve this but that is also not
>> recommended.
>>
>> Thanks,
>>
>>
>> On Monday, 7 September 2015, Niranda Perera <nira...@wso2.com> wrote:
>>
>>> HI Hemika,
>>>
>>> it worked! thank you :-)
>>>
>>> On Mon, Sep 7, 2015 at 12:19 AM, Hemika Kodikara <hem...@wso2.com>
>>> wrote:
>>>
>>>> Hi Niranda,
>>>>
>>>> As far as I can remember this is coming due to a db collation issue.
>>>>
>>>> You can fix it by changing Collation from UTF-8 to latin.
>>>>
>>>> ex :
>>>> create database <db_name> character set latin1;
>>>>
>>>> Regards,
>>>> Hemika
>>>>
>>>> Hemika Kodikara
>>>> Software Engineer
>>>> WSO2 Inc.
>>>> lean . enterprise . middleware
>>>> http://wso2.com
>>>>
>>>> Mobile : +9477762
>>>>
>>>> On Mon, Sep 7, 2015 at 12:09 AM, Niranda Perera <nira...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm testing DAS 3.0.0 in the windows env (windows 8.1 & java 1.8.0)
>>>>> with MySQL as the carbon_db.
>>>>>
>>>>> I tried, executing the mysql setup dbscript and I get the following
>>>>> error
>>>>>
>>>>> 23:41:09 CREATE INDEX REG_PATH_IND_BY_PATH_VALUE USING HASH ON
>>>>> REG_PATH(REG_PATH_VALUE, REG_TENANT_ID) Error Code: 1071. Specified
>>>>> key was too long; max key length is 767 bytes 0.078 sec
>>>>>
>>>>> this error has already been reported [1] and when I check the script
>>>>> also, it mentions this CARBON JIRA.
>>>>>
>>>>> however, this error does not occur in the linux (ubuntu) env.
>>>>>
>>>>> would like to know how to proceed with this?
>>>>>
>>>>> rgds
>>>>>
>>>>> [1] https://wso2.org/jira/browse/CARBON-5917
>>>>>
>>>>> --
>>>>> *Niranda Perera*
>>>>> Software Engineer, WSO2 Inc.
>>>>> Mobile: +94-71-554-8430
>>>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>>>> https://pythagoreanscript.wordpress.com/
>>>>>
>>>>> ___
>>>>> Dev mailing list
>>>>> Dev@wso2.org
>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *Niranda Perera*
>>> Software Engineer, WSO2 Inc.
>>> Mobile: +94-71-554-8430
>>> Twitter: @n1r44 <https://twitter.com/N1R44>
>>> https://pythagoreanscript.wordpress.com/
>>>
>>
>>
>> --
>> Gokul Balakrishnan
>> Senior Software Engineer,
>> WSO2, Inc. http://wso2.com
>> Mob: +94 77 593 5789 | +1 650 272 9927
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Carbon] mysql dbscripts using values > 255 for varchar for unique keys

2015-09-07 Thread Niranda Perera
Hi all,

I came across the following issue while running the mysql script.

23:41:09 CREATE INDEX REG_PATH_IND_BY_PATH_VALUE USING HASH ON
REG_PATH(REG_PATH_VALUE, REG_TENANT_ID) Error Code: 1071. Specified key was
too long; max key length is 767 bytes 0.078 sec

pls refer the mail thread [1]

It turns out that this happens when you use VARCHAR values longer than 255
characters; InnoDB does not support these in unique keys because the index
key exceeds its 767-byte limit. An example query in carbon4-kernel master
is [2].

As mentioned in the thread [1], we need to fix this kernel-wide, hence I
would like to draw your attention to this.

cheers

[1] [Dev] MySQL error while executing dbscripts in windows env

[2]
https://github.com/wso2/carbon4-kernel/blob/master/distribution/kernel/carbon-home/dbscripts/mysql.sql#L31

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Carbon] mysql dbscripts using values > 255 for varchar for unique keys

2015-09-07 Thread Niranda Perera
Pls find the JIRA here [1]

[1] https://wso2.org/jira/browse/CARBON-15407

On Mon, Sep 7, 2015 at 6:25 PM, Aruna Karunarathna <ar...@wso2.com> wrote:

>
>
> On Mon, Sep 7, 2015 at 6:13 PM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi all,
>>
>> I came across with the following issue while running the mysql script.
>>
>> 23:41:09 CREATE INDEX REG_PATH_IND_BY_PATH_VALUE USING HASH ON
>> REG_PATH(REG_PATH_VALUE, REG_TENANT_ID) Error Code: 1071. Specified key
>> was too long; max key length is 767 bytes 0.078 sec
>>
>> pls refer the mail thread [1]
>>
>> it turns out that this happens when you use VARCHAR values longer than
>> 255 characters; InnoDB does not support these in unique keys. An example
>> query in carbon4-kernel master is [2]
>>
>> as mentioned in the thread [1], we need to fix this kernel-wide, hence I
>> would like to draw your attention to this.
>>
>
> Please create a Jira. So we won't miss this.
>
> Regards,
> Aruna
>
>>
>> cheers
>>
>> [1] [Dev] MySQL error while executing dbscripts in windows env
>>
>> [2]
>> https://github.com/wso2/carbon4-kernel/blob/master/distribution/kernel/carbon-home/dbscripts/mysql.sql#L31
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
>
> *Aruna Sujith Karunarathna *| Software Engineer
> WSO2, Inc | lean. enterprise. middleware.
> #20, Palm Grove, Colombo 03, Sri Lanka
> Mobile: +94 71 9040362 | Work: +94 112145345
> Email: ar...@wso2.com | Web: www.wso2.com
>
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error in creating system tables: Specified key was too long

2015-09-06 Thread Niranda Perera
lect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
> at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
> at
> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
> at
> org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
> at
> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
> at
> org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
> at
> org.eclipse.equinox.http.servlet.internal.Activator.registerHttpService(Activator.java:81)
> at
> org.eclipse.equinox.http.servlet.internal.Activator.addProxyServlet(Activator.java:60)
> at
> org.eclipse.equinox.http.servlet.internal.ProxyServlet.init(ProxyServlet.java:40)
> at
> org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.init(DelegationServlet.java:38)
> at
> org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1284)
> at
> org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1197)
> at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:1087)
> at
> org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:5262)
> at
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5550)
> at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
> at
> org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1575)
> at
> org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1565)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by:
> org.wso2.carbon.analytics.datasource.commons.exception.AnalyticsException:
> Error in creating system tables: Specified key was too long; max key length
> is 767 bytes
> at
> org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem.checkAndCreateSystemTables(RDBMSAnalyticsFileSystem.java:97)
> at
> org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem.init(RDBMSAnalyticsFileSystem.java:75)
> at
> org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceImpl.(AnalyticsDataServiceImpl.java:134)
> ... 117 more
> Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException:
> Specified key was too long; max key length is 767 bytes
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at com.mysql.jdbc.Util.handleNewInstance(Util.java:389)
> at com.mysql.jdbc.Util.getInstance(Util.java:372)
> at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:980)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3835)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3771)
> at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2531)
> at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1618)
> at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1549)
> at
> org.wso2.carbon.analytics.datasource.rdbms.RDBMSAnalyticsFileSystem.checkAndCreateSystemTables(RDBMSAnalyticsFileSystem.java:90)
> ... 119 more
>
>
> Appreciate any help on this.
>
> Thank you!
> --
>
> *Pubudu Gunatilaka*
> Software Engineer
> WSO2, Inc.: http://wso2.com
> lean.enterprise.middleware
> mobile:  +94 77 4078049
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] MySQL error while executing dbscripts in windows env

2015-09-06 Thread Niranda Perera
Hi all,

I'm testing DAS 3.0.0 in the windows env (windows 8.1 & java 1.8.0) with
MySQL as the carbon_db.

I tried executing the mysql setup dbscript and I got the following error:

23:41:09 CREATE INDEX REG_PATH_IND_BY_PATH_VALUE USING HASH ON
REG_PATH(REG_PATH_VALUE, REG_TENANT_ID) Error Code: 1071. Specified key was
too long; max key length is 767 bytes 0.078 sec

This error has already been reported [1], and when I checked the script, it
also mentions this CARBON JIRA.

However, this error does not occur in the linux (ubuntu) env.

I would like to know how to proceed with this.

rgds

[1] https://wso2.org/jira/browse/CARBON-5917

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error in creating system tables: Specified key was too long

2015-09-06 Thread Niranda Perera
Hi Pubudu,

I got the same error in the windows env. I'm working on it and will update
you soon.

cheers

On Sun, Sep 6, 2015 at 9:31 PM, Pubudu Gunatilaka <pubu...@wso2.com> wrote:

> Hi Niranda,
>
> Actually I am running on docker. Both DAS and mysql are docker containers
> which run on Debian.
>
> Thank you!
>
> On Sun, Sep 6, 2015 at 7:04 PM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Pubudu,
>>
>> are you using the windows env?
>>
>> On Sun, Sep 6, 2015 at 3:52 PM, Pubudu Gunatilaka <pubu...@wso2.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I am using one of the nightly built DAS packs from last week. I have
>>> used a mysql database for analytic file system and at the server start up I
>>> am getting following error message.
>>>
>>> TID: [-1234] [] [2015-09-06 09:01:51,489] ERROR
>>> {org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceComponent} -
>>>  Error in activating analytics data service: Error in creating analytics
>>> data service from configuration: Error in creating system tables: Specified
>>> key was too long; max key length is 767 bytes
>>> {org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceComponent}
>>> org.wso2.carbon.analytics.datasource.commons.exception.AnalyticsException:
>>> Error in creating analytics data service from configuration: Error in
>>> creating system tables: Specified key was too long; max key length is 767
>>> bytes
>>> at
>>> org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceImpl.(AnalyticsDataServiceImpl.java:137)
>>> at
>>> org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceComponent.activate(AnalyticsDataServiceComponent.java:62)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>> at
>>> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
>>> at
>>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
>>> at
>>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
>>> at
>>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
>>> at
>>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
>>> at
>>> org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
>>> at
>>> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
>>> at
>>> org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
>>> at
>>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
>>> at
>>> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
>>> at
>>> org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
>>> at
>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
>>> at
>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
>>> at
>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
>>> at
>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
>>> at
>>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
>>> at
>>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:451)
>>> at
>>> org.wso2.carbon.ntask.core.internal.TasksDSComponent.activate(TasksDSComponent.java:106)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>> at
>>> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260

Re: [Dev] MySQL error while executing dbscripts in windows env

2015-09-06 Thread Niranda Perera
HI Hemika,

it worked! thank you :-)

On Mon, Sep 7, 2015 at 12:19 AM, Hemika Kodikara <hem...@wso2.com> wrote:

> Hi Niranda,
>
> As far as I can remember this is coming due to a db collation issue.
>
> You can fix it by changing Collation from UTF-8 to latin.
>
> ex :
> create database <db_name> character set latin1;
>
> Regards,
> Hemika
>
> Hemika Kodikara
> Software Engineer
> WSO2 Inc.
> lean . enterprise . middleware
> http://wso2.com
>
> Mobile : +9477762
>
> On Mon, Sep 7, 2015 at 12:09 AM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi all,
>>
>> I'm testing DAS 3.0.0 in the windows env (windows 8.1 & java 1.8.0) with
>> MySQL as the carbon_db.
>>
>> I tried, executing the mysql setup dbscript and I get the following error
>>
>> 23:41:09 CREATE INDEX REG_PATH_IND_BY_PATH_VALUE USING HASH ON
>> REG_PATH(REG_PATH_VALUE, REG_TENANT_ID) Error Code: 1071. Specified key
>> was too long; max key length is 767 bytes 0.078 sec
>>
>> this error has already been reported [1] and when I check the script
>> also, it mentions this CARBON JIRA.
>>
>> however, this error does not occur in the linux (ubuntu) env.
>>
>> would like to know how to proceed with this?
>>
>> rgds
>>
>> [1] https://wso2.org/jira/browse/CARBON-5917
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>


-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error in creating system tables: Specified key was too long

2015-09-06 Thread Niranda Perera
Hi Pubudu,

Can you delete the MySQL databases and create them as follows? This
resolved a similar problem in the windows env.

create database <db_name> character set latin1;

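For example, for the analytics file system DB from this setup (repeat per
database; assumes the data can be recreated):

DROP DATABASE IF EXISTS ANALYTICS_FS_DB;
CREATE DATABASE ANALYTICS_FS_DB CHARACTER SET latin1;
-- verify the effective default charset/collation
SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
  FROM INFORMATION_SCHEMA.SCHEMATA
 WHERE SCHEMA_NAME = 'ANALYTICS_FS_DB';
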
Please refer to the thread [1].

rgds

[1] [Dev] MySQL error while executing dbscripts in windows env

On Sun, Sep 6, 2015 at 9:38 PM, Niranda Perera <nira...@wso2.com> wrote:

> Hi Pubudu,
>
> I got the same error in the windows env. I'm working on it. will update
> you soon
>
> cheers
>
> On Sun, Sep 6, 2015 at 9:31 PM, Pubudu Gunatilaka <pubu...@wso2.com>
> wrote:
>
>> Hi Niranda,
>>
>> Actually I am running on docker. Both DAS and mysql are docker containers
>> which run on Debian.
>>
>> Thank you!
>>
>> On Sun, Sep 6, 2015 at 7:04 PM, Niranda Perera <nira...@wso2.com> wrote:
>>
>>> Hi Pubudu,
>>>
>>> are you using the windows env?
>>>
>>> On Sun, Sep 6, 2015 at 3:52 PM, Pubudu Gunatilaka <pubu...@wso2.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am using one of the nightly built DAS packs from last week. I have
>>>> used a mysql database for analytic file system and at the server start up I
>>>> am getting following error message.
>>>>
>>>> TID: [-1234] [] [2015-09-06 09:01:51,489] ERROR
>>>> {org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceComponent} -
>>>>  Error in activating analytics data service: Error in creating analytics
>>>> data service from configuration: Error in creating system tables: Specified
>>>> key was too long; max key length is 767 bytes
>>>> {org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceComponent}
>>>> org.wso2.carbon.analytics.datasource.commons.exception.AnalyticsException:
>>>> Error in creating analytics data service from configuration: Error in
>>>> creating system tables: Specified key was too long; max key length is 767
>>>> bytes
>>>> at
>>>> org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceImpl.(AnalyticsDataServiceImpl.java:137)
>>>> at
>>>> org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceComponent.activate(AnalyticsDataServiceComponent.java:62)
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>> at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>> at
>>>> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
>>>> at
>>>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
>>>> at
>>>> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
>>>> at
>>>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
>>>> at
>>>> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
>>>> at
>>>> org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
>>>> at
>>>> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
>>>> at
>>>> org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
>>>> at
>>>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
>>>> at
>>>> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
>>>> at
>>>> org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
>>>> at
>>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
>>>> at
>>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
>>>> at
>>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
>>>> at
>>>> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
>>>> at
>>>> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl

Re: [Dev] [DAS] Error when execute "insert overwrite" query using CarbonJDBC

2015-09-03 Thread Niranda Perera
Hi,

this is fixed now. :-)

On Tue, Sep 1, 2015 at 10:19 AM, Rukshan Premathunga <ruks...@wso2.com>
wrote:

> Yes Niranda.
>
> Thanks and Regards.
>
> On Tue, Sep 1, 2015 at 10:18 AM, Niranda Perera <nira...@wso2.com> wrote:
>
>> Hi Rukshan,
>>
>> Are you using a latest DAS pack?
>>
>> On Tue, Sep 1, 2015 at 9:49 AM, Rukshan Premathunga <ruks...@wso2.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, Sep 1, 2015 at 9:48 AM, Rukshan Premathunga <ruks...@wso2.com>
>>> wrote:
>>>
>>>> Hi Niranda,
>>>>
>>>> please find the attachment for stacktrace.
>>>>
>>>> Thanks and Regards.
>>>>
>>>> On Tue, Sep 1, 2015 at 9:45 AM, Niranda Perera <nira...@wso2.com>
>>>> wrote:
>>>>
>>>>> Hi Rukshan,
>>>>>
>>>>> Can you put the full stacktrace?
>>>>>
>>>>> Rgds
>>>>>
>>>>> On Tue, Sep 1, 2015, 09:44 Rukshan Premathunga <ruks...@wso2.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> i created two temp table as below.
>>>>>>
>>>>>> *create temporary table APIRequestSummaryData using CarbonJDBC
>>>>>> options (dataSource "WSO2_AM_STAT", tableName "API_REQUEST_SUMMARY");*
>>>>>>
>>>>>> *create temporary table APIRequestData USING CarbonAnalytics
>>>>>> OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_REQUEST"); *
>>>>>>
>>>>>> when i execute query,
>>>>>>
>>>>>> *insert overwrite table APIRequestSummaryData select query  *
>>>>>>
>>>>>> I get the following error.
>>>>>>
>>>>>> *org.apache.axis2.AxisFault: Can't DROP
>>>>>> 'API_REQUEST_SUMMARY_PARTITION_KEY'; check that column/key exists*
>>>>>>
>>>>>>
>>>>>> did anyone encounter this?
>>>>>>
>>>>>>
>>>>>> Thanks and Regards.
>>>>>>
>>>>>> --
>>>>>> Rukshan Chathuranga.
>>>>>> Software Engineer.
>>>>>> WSO2, Inc.
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Rukshan Chathuranga.
>>>> Software Engineer.
>>>> WSO2, Inc.
>>>>
>>>
>>>
>>>
>>> --
>>> Rukshan Chathuranga.
>>> Software Engineer.
>>> WSO2, Inc.
>>>
>>
>>
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>
>
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error when execute "insert overwrite" query using CarbonJDBC

2015-08-31 Thread Niranda Perera
Hi Rukshan,

Can you put the full stacktrace?

Rgds

On Tue, Sep 1, 2015, 09:44 Rukshan Premathunga  wrote:

> Hi,
>
> i created two temp table as below.
>
> *create temporary table APIRequestSummaryData using CarbonJDBC options
> (dataSource "WSO2_AM_STAT", tableName "API_REQUEST_SUMMARY");*
>
> *create temporary table APIRequestData USING CarbonAnalytics
> OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_REQUEST"); *
>
> when i execute query,
>
> *insert overwrite table APIRequestSummaryData select query  *
>
> I get the following error.
>
> *org.apache.axis2.AxisFault: Can't DROP
> 'API_REQUEST_SUMMARY_PARTITION_KEY'; check that column/key exists*
>
>
> did anyone encounter this?
>
>
> Thanks and Regards.
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Error when execute "insert overwrite" query using CarbonJDBC

2015-08-31 Thread Niranda Perera
Hi Rukshan,

Are you using a latest DAS pack?

On Tue, Sep 1, 2015 at 9:49 AM, Rukshan Premathunga <ruks...@wso2.com>
wrote:

>
>
> On Tue, Sep 1, 2015 at 9:48 AM, Rukshan Premathunga <ruks...@wso2.com>
> wrote:
>
>> Hi Niranda,
>>
>> please find the attachment for stacktrace.
>>
>> Thanks and Regards.
>>
>> On Tue, Sep 1, 2015 at 9:45 AM, Niranda Perera <nira...@wso2.com> wrote:
>>
>>> Hi Rukshan,
>>>
>>> Can you put the full stacktrace?
>>>
>>> Rgds
>>>
>>> On Tue, Sep 1, 2015, 09:44 Rukshan Premathunga <ruks...@wso2.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> i created two temp table as below.
>>>>
>>>> *create temporary table APIRequestSummaryData using CarbonJDBC options
>>>> (dataSource "WSO2_AM_STAT", tableName "API_REQUEST_SUMMARY");*
>>>>
>>>> *create temporary table APIRequestData USING CarbonAnalytics
>>>> OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_REQUEST"); *
>>>>
>>>> when i execute query,
>>>>
>>>> *insert overwrite table APIRequestSummaryData select query  *
>>>>
>>>> I get the following error.
>>>>
>>>> *org.apache.axis2.AxisFault: Can't DROP
>>>> 'API_REQUEST_SUMMARY_PARTITION_KEY'; check that column/key exists*
>>>>
>>>>
>>>> did anyone encounter this?
>>>>
>>>>
>>>> Thanks and Regards.
>>>>
>>>> --
>>>> Rukshan Chathuranga.
>>>> Software Engineer.
>>>> WSO2, Inc.
>>>>
>>>
>>
>>
>> --
>> Rukshan Chathuranga.
>> Software Engineer.
>> WSO2, Inc.
>>
>
>
>
> --
> Rukshan Chathuranga.
> Software Engineer.
> WSO2, Inc.
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Make load-spark-env-vars.sh easier to change

2015-08-30 Thread Niranda Perera
fixed [1]

[1] https://wso2.org/jira/browse/DAS-96

On Mon, Aug 31, 2015 at 9:54 AM, Nirmal Fernando nir...@wso2.com wrote:

 Thanks.

 On Mon, Aug 31, 2015 at 9:43 AM, Niranda Perera nira...@wso2.com wrote:

 I will do the needful we talked about offline.

 cheers

 On Mon, Aug 31, 2015 at 8:34 AM, Nirmal Fernando nir...@wso2.com wrote:

 All,

 This is a minor improvement that we could do to $subject. See below line;


 https://github.com/wso2/carbon-analytics/blob/master/features/analytics-processors/org.wso2.carbon.analytics.spark.server.feature/resources/load-spark-env-vars.sh#L8

 Can we extract out *$CARBON_HOME/repository/components/plugin* and
 make it a new variable?

 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/




 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Make load-spark-env-vars.sh easier to change

2015-08-30 Thread Niranda Perera
I will do the needful we talked about offline.

cheers

On Mon, Aug 31, 2015 at 8:34 AM, Nirmal Fernando nir...@wso2.com wrote:

 All,

 This is a minor improvement that we could do to $subject. See below line;


 https://github.com/wso2/carbon-analytics/blob/master/features/analytics-processors/org.wso2.carbon.analytics.spark.server.feature/resources/load-spark-env-vars.sh#L8

 Can we extract out *$CARBON_HOME/repository/components/plugin* and make
 it a new variable?

 --

 Thanks  regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] ERROR:java.sql.SQLException: Invalid value for getInt() - 'id'

2015-08-27 Thread Niranda Perera
Hi Nashri,

this error comes from the Spark native JDBC connector as well. I'm
currently working on this; let me check.

In the meantime, can you open a JIRA for this?

best

On Thu, Aug 27, 2015 at 12:20 PM, Aaquibah Nashry nas...@wso2.com wrote:

 Hi,

 I created a datasource (RDBMS) and I am trying to retrieve the data in it
 into the DAL. I was able to create the temporary tables using:



 *create temporary table tempSparkTable using CarbonJDBC options
 (dataSource "localDB", tableName "test123");*
 *CREATE TEMPORARY TABLE tempDASTable USING CarbonAnalytics OPTIONS
 (tableName "dasTemp", schema "id INT, name STRING, countN INT");*

 When I tried to insert values using the following command:

 *insert overwrite table tempDASTable select id, name, countN from
 tempSparkTable;*

 I got the following error:

 *ERROR: *Job aborted due to stage failure: Task 0 in stage 1.0 failed 1
 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost):
 java.sql.SQLException: Invalid value for getInt() - 'id' at
 com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1094) at
 com.mysql.jdbc.SQLError.createSQLException(SQLError.java:997) at
 com.mysql.jdbc.SQLError.createSQLException(SQLError.java:983) at
 com.mysql.jdbc.SQLError.createSQLException(SQLError.java:928) at
 com.mysql.jdbc.ResultSetImpl.getInt(ResultSetImpl.java:2758) at
 org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.getNext(JDBCRDD.scala:416) at
 org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.hasNext(JDBCRDD.scala:472) at
 scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) at
 org.wso2.carbon.analytics.spark.core.sources.AnalyticsWritingFunction.apply(AnalyticsWritingFunction.java:72)
 at
 org.wso2.carbon.analytics.spark.core.sources.AnalyticsWritingFunction.apply(AnalyticsWritingFunction.java:41)
 at
 org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:878)
 at
 org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:878)
 at
 org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
 at
 org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63) at
 org.apache.spark.scheduler.Task.run(Task.scala:70) at
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)


 I get the same error when I try to use the following command in the
 console in DAS:
 *select * from tempSparkTable*


 Kindly provide assistance to overcome this issue.
 Thanks

 Regards,

 M.R.Aaquibah Nashry
 *Intern, Engineering**| **WSO2, Inc.*
 Mobile : +94 773946123
 Tel  : +94 112662541
 Email : nas...@wso2.com nas...@wso2.com




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS][L1] DAS interactive analytics fail to operate in nohup mode

2015-08-23 Thread Niranda Perera
Hi Anjana and Nirmal,

Yes, it is the same issue [1]. I am working on it and will give a fix ASAP.

rgds

[1] https://wso2.org/jira/browse/DAS-58

On Mon, Aug 24, 2015 at 9:51 AM, Anjana Fernando anj...@wso2.com wrote:

 Hi Niranda,

 We have had issues with nohup mode in DAS; not sure if this is the same
 one. Did you take a look at it earlier? Please fix ASAP.

 Cheers,
 Anjana.

 On Sat, Aug 22, 2015 at 5:15 PM, Nirmal Fernando nir...@wso2.com wrote:

 Hi,

 I ran the DAS server in nohup mode (wso2server.sh start) and performed a
 SQL query in the 'Interactive Analytics Console', and it failed with the
 following:

 Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most 
 recent failure: Lost task 0.3 in stage 0.0 (TID 3, 172.31.0.64): 
 java.lang.NullPointerException
  at 
 org.wso2.carbon.analytics.spark.core.rdd.AnalyticsRDD.compute(AnalyticsRDD.java:82)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
  at org.apache.spark.scheduler.Task.run(Task.scala:70)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
  at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)

 Driver stacktrace:


 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





 --
 *Anjana Fernando*
 Senior Technical Lead
 WSO2 Inc. | http://wso2.com
 lean . enterprise . middleware




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] Using Spark UDF in DAS

2015-08-23 Thread Niranda Perera
Hi Thanuja,

add-to-spark-classpath.xml is not related to UDFs. It is a configuration
file which can be used to add jars to the Spark classpath. But when you
put a jar into repository/components/lib, it will be added to the Spark
classpath by default, so there are no additional configs you have to do.

What are the errors/exceptions thrown when you call the UDFs?
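
For reference, a minimal UDF sketch. This assumes the POJO-style contract
where each public method of a class listed in spark-udf-config.xml is
registered as a Spark SQL function under the method's name; the package,
class, and method names below are hypothetical.

    package org.wso2.custom.analytics.udf;

    // Hypothetical example UDF class; the jar containing it goes into
    // repository/components/lib, and the fully qualified class name is
    // listed in spark-udf-config.xml.
    public class StringUtilUdf {

        // Assuming method-name-based registration, this would be callable
        // from Spark SQL as trimToEmpty(someStringColumn).
        public String trimToEmpty(String value) {
            return value == null ? "" : value.trim();
        }
    }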


On Mon, Aug 24, 2015 at 9:59 AM, Thanuja Uruththirakodeeswaran 
thanu...@wso2.com wrote:

 Hi Devs,

 I've used a UDF in spark query in DAS by following the below steps:

 1. Create a jar file for spark UDF implementation and add it
 DAS_HOME/repository/components/lib.
 2. Add the UDF class to *spark-udf-config.xml* which is in
 DAS_HOME/repository/conf.

 After that I tried the UDF in the Spark console and it worked fine. But
 now I have got the latest changes and built a new pack, and there the same
 UDF doesn't work. I can see a new file called *add-to-spark-classpath.xml.*
 Do I need to do any extra configuration for the latest pack?

 Thanks.

 --
 Thanuja Uruththirakodeeswaran
 Software Engineer
 WSO2 Inc.;http://wso2.com
 lean.enterprise.middleware

 mobile: +94 774363167




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Error when deserializing model summary

2015-08-21 Thread Niranda Perera
I don't think that's correct. The Scala version is 2.10.4 even in the Maven repo.

On Fri, Aug 21, 2015, 13:46 Madawa Soysa madawa...@cse.mrt.ac.lk wrote:

 Also I asked this question in StackOverflow[1]
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java
 and there they have mentioned a version incompatibility between Scala and
 Spark versions

 [1] -
 http://stackoverflow.com/questions/32048618/how-to-serialize-apache-sparks-matrixfactorizationmodel-in-java

 On 21 August 2015 at 13:31, Madawa Soysa madawa...@cse.mrt.ac.lk wrote:

 Yes, the path is valid; I explicitly set the path here from the
 MLModelHandler persistModel method.

 On 21 August 2015 at 10:26, Nirmal Fernando nir...@wso2.com wrote:



 On Thu, Aug 20, 2015 at 9:21 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi All,

 There is an issue with serializing Spark's MatrixFactorizationModel
 object. The object contains a huge RDD and, as I have read in many blogs,
 this model cannot be serialized as a Java object. Therefore, when
 retrieving the model I get the same exception as above:

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*

 I have asked this question on the Spark mailing lists and they recommended
 using the built-in save and load functions rather than Java serialization.
 So I have used the following method to persist the model:

 model.save(MLCoreServiceValueHolder.getInstance().getSparkContext().sc(),
 outPath); [1]
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9

 But nothing happens when this line executes; no error is thrown either.
 Any solution for this?


 Can you print outPath and see whether it's a valid file path?



 [1] -
 https://github.com/madawas/carbon-ml/commit/3700d3ed5915b0ad3b679bc0d9eb2611608463e9
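
 As a side note, a minimal sketch of that built-in persistence round trip
 (assuming a plain SparkContext; the output path is hypothetical and must
 point to a fresh location Spark can write to, e.g. a local directory or
 an HDFS URI):

     import org.apache.spark.SparkContext;
     import org.apache.spark.mllib.recommendation.MatrixFactorizationModel;

     public class ModelPersistenceSketch {

         // Uses MLlib's save/load instead of Java serialization; save()
         // writes the model's RDDs as Parquet files plus metadata under
         // outPath, and load() reconstructs the model from them.
         public static MatrixFactorizationModel roundTrip(SparkContext sc,
                 MatrixFactorizationModel model, String outPath) {
             model.save(sc, outPath);
             return MatrixFactorizationModel.load(sc, outPath);
         }
     }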

 On 16 August 2015 at 18:06, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Yes, I was able to resolve the issue by removing the RDD fields from the
 SummaryModel object, as @Mano pointed out. I still get the same exception
 when retrieving the model, and I am trying to fix that issue.

 On 14 August 2015 at 10:43, Nirmal Fernando nir...@wso2.com wrote:

 Thanks Niranda, this doc is useful.

 On Fri, Aug 14, 2015 at 10:36 AM, Niranda Perera nira...@wso2.com
 wrote:

 From what I know, OneToOneDependency comes into play when Spark tries
 to create the RDD dependency tree.

 Just thought of sharing that. this would be a good resource [1] :-)

 [1]
 https://databricks-training.s3.amazonaws.com/slides/advanced-spark-training.pdf

 On Thu, Aug 13, 2015 at 12:09 AM, Nirmal Fernando nir...@wso2.com
 wrote:

 What is *org.apache.spark.OneToOneDependency*? Is it something you use?

 On Wed, Aug 12, 2015 at 11:30 PM, Madawa Soysa 
 madawa...@cse.mrt.ac.lk wrote:

 Hi,

 I created a model summary in order to show the model data in the
 analysis.jag page.
 But when refreshing the page after building the model I get the
 following error.

 org.wso2.carbon.ml.core.exceptions.MLAnalysisHandlerException:  An
 error has occurred while extracting all the models of analysis id: 13
 at
 org.wso2.carbon.ml.core.impl.MLAnalysisHandler.getAllModelsOfAnalysis(MLAnalysisHandler.java:245)
 at
 org.wso2.carbon.ml.rest.api.AnalysisApiV10.getAllModelsOfAnalysis(AnalysisApiV10.java:517)
 Caused by:
 org.wso2.carbon.ml.database.exceptions.DatabaseHandlerException:  An 
 error
 has occurred while extracting all the models of analysis id: 13
 at
 org.wso2.carbon.ml.database.internal.MLDatabaseService.getAllModels(MLDatabaseService.java:1797)
 at
 org.wso2.carbon.ml.core.impl.MLAnalysisHandler.getAllModelsOfAnalysis(MLAnalysisHandler.java:243)
 ... 52 more

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*
 at
 org.wso2.carbon.ml.database.util.MLDBUtil.getModelSummaryFromInputStream(MLDBUtil.java:54)
 at
 org.wso2.carbon.ml.database.internal.MLDatabaseService.getAllModels(MLDatabaseService.java:1790)
 ... 53 more

 I guess there is an error in persistence of the model summary
 object, what should be the cause for this error? [1]
 https://github.com/madawas/carbon-ml/commit/987c799231dad2bab6f4046df7acc672d0564f22
  contains
 the commit which I introduced the model summary.

 [1] -
 https://github.com/madawas/carbon-ml/commit/987c799231dad2bab6f4046df7acc672d0564f22

 --

 *Madawa Soysa*

 Undergraduate,

 Department of Computer Science and Engineering,

 University of Moratuwa.


 Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
 LinkedIn http://lk.linkedin.com/in/madawasoysa | Twitter
 https://twitter.com/madawa_rc | Tumblr
 http://madawas.tumblr.com/




 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2

Re: [Dev] [Blocker][ML] Change in Spark Commons feature causing ML features failed to install in ESB/CEP

2015-08-18 Thread Niranda Perera
Hi Nirmal,

Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.fs.FSDataInputStream cannot be found by
spark-core_2.10_1.4.0.wso2v2

It has to be spark-core_2.10_1.4.1.wso2v1, actually.

In any case, can you check if the following bundle is active?

<dependency>
    <groupId>org.wso2.orbit.org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.6.0.wso2v1</version>
</dependency>


On Tue, Aug 18, 2015 at 9:59 AM, Nirmal Fernando nir...@wso2.com wrote:

 Hi,

 $Subject. We have reported following issue [1] and can we get urgent
 attention to it please?

 [1] https://wso2.org/jira/browse/DAS-32

 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/





-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [ML] Error when deserializing model summary

2015-08-13 Thread Niranda Perera
From what I know, OneToOneDependency comes into play when Spark tries to
create the RDD dependency tree.

Just thought of sharing that. this would be a good resource [1] :-)

[1]
https://databricks-training.s3.amazonaws.com/slides/advanced-spark-training.pdf

On Thu, Aug 13, 2015 at 12:09 AM, Nirmal Fernando nir...@wso2.com wrote:

 What is *org.apache.spark.OneToOneDependency*? Is it something you use?

 On Wed, Aug 12, 2015 at 11:30 PM, Madawa Soysa madawa...@cse.mrt.ac.lk
 wrote:

 Hi,

 I created a model summary in order to show the model data in the
 analysis.jag page.
 But when refreshing the page after building the model I get the following
 error.

 org.wso2.carbon.ml.core.exceptions.MLAnalysisHandlerException:  An error
 has occurred while extracting all the models of analysis id: 13
 at
 org.wso2.carbon.ml.core.impl.MLAnalysisHandler.getAllModelsOfAnalysis(MLAnalysisHandler.java:245)
 at
 org.wso2.carbon.ml.rest.api.AnalysisApiV10.getAllModelsOfAnalysis(AnalysisApiV10.java:517)
 Caused by:
 org.wso2.carbon.ml.database.exceptions.DatabaseHandlerException:  An error
 has occurred while extracting all the models of analysis id: 13
 at
 org.wso2.carbon.ml.database.internal.MLDatabaseService.getAllModels(MLDatabaseService.java:1797)
 at
 org.wso2.carbon.ml.core.impl.MLAnalysisHandler.getAllModelsOfAnalysis(MLAnalysisHandler.java:243)
 ... 52 more

 *Caused by: java.lang.ClassNotFoundException:
 org.apache.spark.OneToOneDependency cannot be found by
 org.scala-lang.scala-library_2.10.4.v20140209-180020-VFINAL-b66a3965*
 at
 org.wso2.carbon.ml.database.util.MLDBUtil.getModelSummaryFromInputStream(MLDBUtil.java:54)
 at
 org.wso2.carbon.ml.database.internal.MLDatabaseService.getAllModels(MLDatabaseService.java:1790)
 ... 53 more

 I guess there is an error in persistence of the model summary object,
 what should be the cause for this error? [1]
 https://github.com/madawas/carbon-ml/commit/987c799231dad2bab6f4046df7acc672d0564f22
  contains
 the commit which I introduced the model summary.

 [1] -
 https://github.com/madawas/carbon-ml/commit/987c799231dad2bab6f4046df7acc672d0564f22

 --

 *Madawa Soysa*

 Undergraduate,

 Department of Computer Science and Engineering,

 University of Moratuwa.


 Mobile: +94 71 461 6050 | Email: madawa...@cse.mrt.ac.lk
 LinkedIn http://lk.linkedin.com/in/madawasoysa | Twitter
 https://twitter.com/madawa_rc | Tumblr http://madawas.tumblr.com/




 --

 Thanks & regards,
 Nirmal

 Team Lead - WSO2 Machine Learner
 Associate Technical Lead - Data Technologies Team, WSO2 Inc.
 Mobile: +94715779733
 Blog: http://nirmalfdo.blogspot.com/



 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Architecture] Carbon Spark JDBC connector

2015-08-12 Thread Niranda Perera
Hi Gihan,

Are we talking about incremental processing here? insert into/overwrite
queries will normally be used to push analyzed data into summary tables.

In Spark jargon, 'insert overwrite table' means completely deleting the
table and recreating it. I'm a bit confused by the meaning of 'overwrite'
with respect to the previous 2.5.0 Hive scripts; were they doing an update
there?
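
To make the difference concrete, a hedged example (the table and column
names below are hypothetical):

    -- appends the selected rows to the existing summary table
    insert into table apiUsageSummary
        select api, count(*) as hits from apiRequestData group by api;

    -- drops and recreates apiUsageSummary before writing the selected
    -- rows, i.e. the previous contents are lost
    insert overwrite table apiUsageSummary
        select api, count(*) as hits from apiRequestData group by api;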

rgds

On Tue, Aug 11, 2015 at 7:58 PM, Gihan Anuruddha gi...@wso2.com wrote:

 Hi Niranda,

 Are we going to solve those limitations before the GA? Especially
 limitation no. 2: over time we can have a stat table with thousands of
 records, so are we going to remove all the records and reinsert them every
 time the Spark script runs?

 Regards,
 Gihan

 On Tue, Aug 11, 2015 at 7:13 AM, Niranda Perera nira...@wso2.com wrote:

 Hi all,

 we have implemented a custom Spark JDBC connector to be used in the
 Carbon environment.

 this enables the following

1. Now, temporary tables can be created in the Spark environment by
specifying an analytics datasource (configured by the
analytics-datasources.xml) and a table
    2. Spark uses a "SELECT 1 FROM $table LIMIT 1" query to check the
    existence of a table, and the LIMIT query is not provided by all DBs.
    With the new connector, this query can be provided as a config. (this
    config is still WIP)
    3. Adding new Spark dialects for various DBs (WIP)

 the idea is to test this for the following dbs

- mysql
- h2
- mssql
- oracle
- postgres
- db2

 I have loosely tested the connector with MySQL, and I would like the APIM
 team to use it with the API usage stats use-case, and provide us some
 feedback.

 this connector can be accessed as follows. (docs are still not updated. I
 will do that ASAP)

 create temporary table temp_table using CarbonJDBC options (dataSource
 "datasource name", tableName "table name");

 select * from temp_table

 insert into/overwrite table temp_table some select statement

 known limitations

    1. When creating a temp table, the corresponding table should already
    exist in the underlying datasource
    2. insert overwrite table deletes the existing table and creates it
    again


 would be very grateful if you could use this connector in your current
 JDBC use cases and provide us with feedback.

 best
 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/

 ___
 Architecture mailing list
 architect...@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture




 --
 W.G. Gihan Anuruddha
 Senior Software Engineer | WSO2, Inc.
 M: +94772272595

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Architecture] Carbon Spark JDBC connector

2015-08-10 Thread Niranda Perera
Hi all,

we have implemented a custom Spark JDBC connector to be used in the Carbon
environment.

this enables the following

   1. Now, temporary tables can be created in the Spark environment by
   specifying an analytics datasource (configured by the
   analytics-datasources.xml) and a table
   2. Spark uses a "SELECT 1 FROM $table LIMIT 1" query to check the
   existence of a table, and the LIMIT query is not provided by all DBs.
   With the new connector, this query can be provided as a config. (this
   config is still WIP)
   3. Adding new Spark dialects for various DBs (WIP)

the idea is to test this for the following dbs

   - mysql
   - h2
   - mssql
   - oracle
   - postgres
   - db2

I have loosely tested the connector with MySQL, and I would like the APIM
team to use it with the API usage stats use-case, and provide us some
feedback.

this connector can be accessed as follows. (docs are still not updated. I
will do that ASAP)

create temporary table temp_table using CarbonJDBC options (dataSource
"datasource name", tableName "table name");

select * from temp_table

insert into/overwrite table temp_table some select statement

known limitations

   1. When creating a temp table, the corresponding table should already
   exist in the underlying datasource
   2. insert overwrite table deletes the existing table and creates it
   again


would be very grateful if you could use this connector in your current JDBC
use cases and provide us with feedback.
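
As a concrete (hedged) example, assuming a datasource named
ANALYTICS_RDBMS is defined in analytics-datasources.xml and the table
API_USAGE_SUMMARY already exists in it (see limitation 1; all names here
are hypothetical), usage would look like:

    create temporary table apiUsage using CarbonJDBC
        options (dataSource "ANALYTICS_RDBMS", tableName "API_USAGE_SUMMARY");

    select * from apiUsage;

    -- apiRequestData would be another temporary table defined beforehand
    insert overwrite table apiUsage
        select api, count(*) as hits from apiRequestData group by api;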

best
-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Orbit] Please review merge the PR for Spark 1.4.1

2015-08-02 Thread Niranda Perera
Hi Maheshika,

could you please look into this?

thanks

On Thu, Jul 30, 2015 at 4:09 PM, Sameera Jayasoma same...@wso2.com wrote:

 Merged.

 Maheshika, please note.

 Thanks,
 Sameera.

 On Thu, Jul 30, 2015 at 4:05 PM, Niranda Perera nira...@wso2.com wrote:

 Hi Sameera,

 Could you please review and merge this PR [1] for wso2v1 of
 the Spark 1.4.1.

 This is a patch release, so there are no API or dependency changes; the
 orbit poms are identical to those of 1.4.0, so I believe there is not much
 overhead in reviewing them.

 cheers

 [1] https://github.com/wso2/orbit/pull/112

 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/




 --
 Sameera Jayasoma,
 Software Architect,

 WSO2, Inc. (http://wso2.com)
 email: same...@wso2.com
 blog: http://blog.sameera.org
 twitter: https://twitter.com/sameerajayasoma
 flickr: http://www.flickr.com/photos/sameera-jayasoma/collections
 Mobile: 0094776364456

 Lean . Enterprise . Middleware




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Orbit] Please review merge the PR for Spark 1.4.1

2015-08-02 Thread Niranda Perera
thanks Maheshika

On Mon, Aug 3, 2015 at 10:24 AM, Maheshika Goonetilleke mahesh...@wso2.com
wrote:

 Hi Niranda

 spark-core, spark-sql and spark-streaming 1.4.1 versions have been
 deployed.

 On Mon, Aug 3, 2015 at 9:37 AM, Niranda Perera nira...@wso2.com wrote:

 Hi Maheshika,

 could you please look into this?

 thanks

 On Thu, Jul 30, 2015 at 4:09 PM, Sameera Jayasoma same...@wso2.com
 wrote:

 Merged.

 Maheshika, please note.

 Thanks,
 Sameera.

 On Thu, Jul 30, 2015 at 4:05 PM, Niranda Perera nira...@wso2.com
 wrote:

 Hi Sameera,

 Could you please review and merge this PR [1] for wso2v1 of
 the Spark 1.4.1.

 This is a patch release, so there are no API or dependency changes; the
 orbit poms are identical to those of 1.4.0, so I believe there is not much
 overhead in reviewing them.

 cheers

 [1] https://github.com/wso2/orbit/pull/112

 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/




 --
 Sameera Jayasoma,
 Software Architect,

 WSO2, Inc. (http://wso2.com)
 email: same...@wso2.com
 blog: http://blog.sameera.org
 twitter: https://twitter.com/sameerajayasoma
 flickr: http://www.flickr.com/photos/sameera-jayasoma/collections
 Mobile: 0094776364456

 Lean . Enterprise . Middleware




 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/




 --

 Thanks  Best Regards,

 Maheshika Goonetilleke
 Engineering Process Coordinator

 *WSO2 Inc*
 *email : mahesh...@wso2.com*
 *mobile : +94 773 596707*
 *www : http://wso2.com* lean . enterprise . middleware







-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DAS] simulating event stream with facet attribute

2015-08-02 Thread Niranda Perera
Can you attach a sample event you sent?

On Sun, Aug 2, 2015, 21:07 Rukshan Premathunga ruks...@wso2.com wrote:

 Hi all,

 I tried to simulate an event stream which contains a facet attribute. Even
 though I successfully sent sample data through the UI simulator, I
 couldn't see any data in the message console.

 Has anyone encountered this?

 Thanks and Regards.

 --
 Rukshan Chathuranga.
 Software Engineer.
 WSO2, Inc.

___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Architecture] [DAS] Upgrading Spark 1.4.0 - 1.4.1 in DAS

2015-07-30 Thread Niranda Perera
Hi all,

This is to inform you that we will be upgrading Spark from 1.4.0 to 1.4.1.

On the outset, the upgrade does not have any API changes or dependency
upgrades; therefore, the version bump should not affect the current
components.

rgds

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [Orbit] Please review merge the PR for Spark 1.4.1

2015-07-30 Thread Niranda Perera
Hi Sameera,

Could you please review and merge this PR [1] for wso2v1 of the Spark 1.4.1.

This is a patch release, so there are no API or dependency changes; the
orbit poms are identical to those of 1.4.0, so I believe there is not much
overhead in reviewing them.

cheers

[1] https://github.com/wso2/orbit/pull/112

-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Visualizing room data with BAM

2015-07-29 Thread Niranda Perera
Hi Corey,

This looks very much like the DEBS 2014 challenge, doesn't it? [1]

Are you using BAM 2.5.0 or the upcoming DAS 3.0 for your work?

rgds

[1] http://dl.acm.org/citation.cfm?id=2611333


On Wed, Jul 29, 2015 at 12:06 PM, Corey Denault co...@wso2.com wrote:

 Hi,

 I'm currently working on pushing the following data set to BAM:
 http://courses.media.mit.edu/2004fall/mas622j/04.projects/home/

 My goal is to produce a web app that summarises the data based on
 frequency of activities, time spent on those activities, and usage of each
 sensor. Additionally, I would like to do some prediction of activities,
 such as at what time certain activities are likely to occur.

 So far I am working on pushing the data to Cassandra tables. Right now I
 am hung up on splitting up columns of string arrays into columns of single
 strings. I will continue to update this thread as my project progresses.

 Thanks,
 Corey Denault

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][DAS] Spark cluster in DAS does not have worker nodes

2015-07-27 Thread Niranda Perera
Hi Maheshakya,

I've fixed this issue. Please take an update of the carbon-analytics repo.
rgds

On Mon, Jul 27, 2015 at 12:12 PM, Maheshakya Wijewardena 
mahesha...@wso2.com wrote:

 Hi Niranda,

 I started those one by one.

 On Mon, Jul 27, 2015 at 12:07 PM, Niranda Perera nira...@wso2.com wrote:

 Hi Maheshakya,

 I will look into this.

 According to your setting, the ideal scenario is:
 node1 - spark master (active) + worker
 node2 - spark master (standby) + worker
 node3 - worker

 did you start the servers all at once or one by one?

 rgds

 On Mon, Jul 27, 2015 at 11:07 AM, Anjana Fernando anj...@wso2.com
 wrote:

 Hi,

 Actually, when the 3rd server has started up, all 3 servers should have
 worker instances. This seems to be a bug. @Niranda, please check it out
 ASAP.

 Cheers,
 Anjana.

 On Mon, Jul 27, 2015 at 11:01 AM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Hi,

 I have tried to create a Spark cluster with DAS using Carbon
 clustering. 3 DAS nodes are configured and the number of Spark masters is
 set to 2. In this setting, one of the 3 nodes should have a Spark worker
 node, but all 3 nodes are starting as Spark masters. What can be the
 reason for this?

 Configuration files (one of *axis2.xml* files of DAS clusters and
 *spark-defaults.conf*) of DAS are attached herewith.

 Best regards.
 --
 Pruthuvi Maheshakya Wijewardena
 Software Engineer
 WSO2 : http://wso2.com/
 Email: mahesha...@wso2.com
 Mobile: +94711228855





 --
 *Anjana Fernando*
 Senior Technical Lead
 WSO2 Inc. | http://wso2.com
 lean . enterprise . middleware




 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/




 --
 Pruthuvi Maheshakya Wijewardena
 Software Engineer
 WSO2 : http://wso2.com/
 Email: mahesha...@wso2.com
 Mobile: +94711228855





-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][DAS] Spark cluster in DAS does not have worker nodes

2015-07-27 Thread Niranda Perera
Hi Maheshakya,

I will look into this.

According to your setting, the ideal scenario is:
node1 - spark master (active) + worker
node2 - spark master (standby) + worker
node3 - worker
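
For reference, a sketch of the relevant spark-defaults.conf entries for
this 3-node, 2-master layout (the property names are from memory and
should be checked against the shipped spark-defaults.conf; the values are
illustrative):

    # run Spark in Carbon cluster mode with an HA master pair; with 3 DAS
    # nodes this should give 2 masters (1 active + 1 standby) and workers
    # as described above
    carbon.spark.master         cluster
    carbon.spark.master.count   2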

did you start the servers all at once or one by one?

rgds

On Mon, Jul 27, 2015 at 11:07 AM, Anjana Fernando anj...@wso2.com wrote:

 Hi,

 Actually, when the 3rd server has started up, all 3 servers should have
 worker instances. This seems to be a bug. @Niranda, please check it out
 ASAP.

 Cheers,
 Anjana.

 On Mon, Jul 27, 2015 at 11:01 AM, Maheshakya Wijewardena 
 mahesha...@wso2.com wrote:

 Hi,

 I have tried to create a Spark cluster with DAS using Carbon clustering.
 3 DAS nodes are configured and the number of Spark masters is set to 2. In
 this setting, one of the 3 nodes should have a Spark worker node, but all
 3 nodes are starting as Spark masters. What can be the reason for this?

 Configuration files (one of *axis2.xml* files of DAS clusters and
 *spark-defaults.conf*) of DAS are attached herewith.

 Best regards.
 --
 Pruthuvi Maheshakya Wijewardena
 Software Engineer
 WSO2 : http://wso2.com/
 Email: mahesha...@wso2.com
 Mobile: +94711228855





 --
 *Anjana Fernando*
 Senior Technical Lead
 WSO2 Inc. | http://wso2.com
 lean . enterprise . middleware




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Interns working on data analysis

2015-07-23 Thread Niranda Perera
Additionally, you can find the docs [1] and the issue tracker [2] here.

cheers

[1]
https://docs.wso2.com/display/DAS300/WSO2+Data+Analytics+Server+Documentation

[2]  https://wso2.org/jira/browse/BAM

On Thu, Jul 23, 2015 at 10:58 AM, Niranda Perera nira...@wso2.com wrote:

 Hi Di,

 We would be more than happy to help you out with any problems you come
 across.

 On a side note, we are now moving BAM development to DAS (Data Analytics
 Server). DAS comes with a new approach and a whole new architecture, so we
 suggest you take a look into that as well.
 You could try the beta version here [1].

 cheers

 [1] https://svn.wso2.org/repos/wso2/people/inosh/wso2das-3.0.0-BETA.zip

 On Thu, Jul 23, 2015 at 9:56 AM, Di Zhong d...@wso2.com wrote:

 Hi all,

 Corey and I are two interns from IU Bloomington, we are working on
 analyzing two datasets using WSO2 BAM.

 Right now we are still at the stage of pushing data to BAM. We might be
 sending emails asking some questions about BAM for a while; we appreciate
 any help and thank you for bearing with us.


 Best Regards,

 Di Zhong

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev




 --
 *Niranda Perera*
 Software Engineer, WSO2 Inc.
 Mobile: +94-71-554-8430
 Twitter: @n1r44 https://twitter.com/N1R44
 https://pythagoreanscript.wordpress.com/




-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 https://twitter.com/N1R44
https://pythagoreanscript.wordpress.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev

