Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Inosh Goonewardena
Hi Roshan,

How many analyzer nodes are there in the cluster? If the master count is
set to 1 and the master is down, the Spark cluster will not survive. If
you set the master count to 2, then when the master goes down the other
node becomes the master and the cluster survives. However, until the
Spark context is initialized properly on the other node (which takes
roughly 5-30 seconds) you will see the above error.

On Sun, Jan 24, 2016 at 8:25 PM, Roshan Wijesena  wrote:

> Hi  Niranda/DAS team,
>
> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
> when one server is in down situation. I am getting this exception
> periodically, and spark scripts are not running at all. it seems we can
> *not* survive when one server is in down situation?
>
> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
> executing the scheduled task for the script: is_log_analytics
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
> Spark SQL Context is not available. Check if the cluster has instantiated
> properly.
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
> at
> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
> at
> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> -Roshan
>
>
>
>
>
>
>
> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:
>
>> Hi Niranda / Inosh,
>>
>> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
>> now. Did not appear for a while.
>>
>> -Roshan
>>
>> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
>> wrote:
>>
>>> Hi Roshan,
>>>
>>> This happens when you have a malformed HA cluster. When you put the
>>> master count as 2, the spark cluster would not get initiated until there
>>> are 2 members in the analytics cluster. when the count as 2 and there is a
>>> task scheduled already, you may come across this issue, until the 2nd node
>>> is up and running. You should see that after sometime, the exception gets
>>> resolved., and that is when the analytics cluster is at a workable state.
>>>
>>> But I agree, an NPE is not acceptable here and this has been already
>>> fixed in 3.0.1 [1]
>>>
>>> as per the query modification, yes, the query gets modified to handle
>>> multi tenancy in the spark runtime.
>>>
>>> hope this resolves your issues.
>>>
>>> rgds
>>>
>>> [1] https://wso2.org/jira/browse/DAS-329
>>>
>>> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
>>> wrote:
>>>
  I reproduced the error. If we set carbon.spark.master.count value to 2
 this error will occur. Any solution available in this case?


 On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
 wrote:

> After I enabled the debug, it looks like below
>
> [2015-12-10 22:03:00,001]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: httpd_log_analytics for tenant id: -1234
> [2015-12-10 22:03:00,013] DEBUG
> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
> [2015-12-10 22:03:00,013] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.lang.NullPointerException
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
> at
> 

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Roshan Wijesena
Hi  Niranda/DAS team,

I have updated the DAS server to 3.0.1. I am testing a minimum HA cluster
with one server down. I am getting this exception periodically, and the
Spark scripts are not running at all. It seems we can *not* survive when
one server is down?

TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
executing the scheduled task for the script: is_log_analytics
{org.wso2.carbon.analytics.spark.core.AnalyticsTask}
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
Spark SQL Context is not available. Check if the cluster has instantiated
properly.
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at
org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
at
org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

-Roshan







On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:

> Hi Niranda / Inosh,
>
> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
> now. Did not appear for a while.
>
> -Roshan
>
> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera  wrote:
>
>> Hi Roshan,
>>
>> This happens when you have a malformed HA cluster. When you put the
>> master count as 2, the spark cluster would not get initiated until there
>> are 2 members in the analytics cluster. when the count as 2 and there is a
>> task scheduled already, you may come across this issue, until the 2nd node
>> is up and running. You should see that after sometime, the exception gets
>> resolved., and that is when the analytics cluster is at a workable state.
>>
>> But I agree, an NPE is not acceptable here and this has been already
>> fixed in 3.0.1 [1]
>>
>> as per the query modification, yes, the query gets modified to handle
>> multi tenancy in the spark runtime.
>>
>> hope this resolves your issues.
>>
>> rgds
>>
>> [1] https://wso2.org/jira/browse/DAS-329
>>
>> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
>> wrote:
>>
>>>  I reproduced the error. If we set carbon.spark.master.count value to 2
>>> this error will occur. Any solution available in this case?
>>>
>>>
>>> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
>>> wrote:
>>>
 After I enabled the debug, it looks like below

 [2015-12-10 22:03:00,001]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: httpd_log_analytics for tenant id: -1234
 [2015-12-10 22:03:00,013] DEBUG
 {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
 org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
 [2015-12-10 22:03:00,013] ERROR
 {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
 executing task: null
 java.lang.NullPointerException
 at
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
 at
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
 at
 org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
 at
 org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
 at
 org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
 at
 org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
 at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at
 

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Niranda Perera
Hi Roshan,

I agree with Inosh. It takes a few seconds for Spark to recover; until
then you might see this exception. Do you see this exception throughout?

best

On Mon, Jan 25, 2016 at 8:13 AM, Inosh Goonewardena  wrote:

> Hi Roshan,
>
> How many analyzer nodes are there in the cluster? If the master count is
> set to 1 and the master is down, spark cluster will not survive. It you set
> the master count to 2, then if the master is down other node become the
> master and it survive. However until spark context is initialize properly
> in the other node(will take roughly about 5 - 30secs) you will see above
> error.
>
> On Sun, Jan 24, 2016 at 8:25 PM, Roshan Wijesena  wrote:
>
>> Hi  Niranda/DAS team,
>>
>> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
>> when one server is in down situation. I am getting this exception
>> periodically, and spark scripts are not running at all. it seems we can
>> *not* survive when one server is in down situation?
>>
>> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
>> executing the scheduled task for the script: is_log_analytics
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
>> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
>> Spark SQL Context is not available. Check if the cluster has instantiated
>> properly.
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
>> at
>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
>> at
>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> -Roshan
>>
>>
>>
>>
>>
>>
>>
>> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:
>>
>>> Hi Niranda / Inosh,
>>>
>>> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
>>> now. Did not appear for a while.
>>>
>>> -Roshan
>>>
>>> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
>>> wrote:
>>>
 Hi Roshan,

 This happens when you have a malformed HA cluster. When you put the
 master count as 2, the spark cluster would not get initiated until there
 are 2 members in the analytics cluster. when the count as 2 and there is a
 task scheduled already, you may come across this issue, until the 2nd node
 is up and running. You should see that after sometime, the exception gets
 resolved., and that is when the analytics cluster is at a workable state.

 But I agree, an NPE is not acceptable here and this has been already
 fixed in 3.0.1 [1]

 as per the query modification, yes, the query gets modified to handle
 multi tenancy in the spark runtime.

 hope this resolves your issues.

 rgds

 [1] https://wso2.org/jira/browse/DAS-329

 On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
 wrote:

>  I reproduced the error. If we set carbon.spark.master.count value to
> 2 this error will occur. Any solution available in this case?
>
>
> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
> wrote:
>
>> After I enabled the debug, it looks like below
>>
>> [2015-12-10 22:03:00,001]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: httpd_log_analytics for tenant id: -1234
>> [2015-12-10 22:03:00,013] DEBUG
>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>> [2015-12-10 22:03:00,013] ERROR
>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>> executing task: null
>> java.lang.NullPointerException
>> at
>> 

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Maninda Edirisooriya
Hi Roshan,

If you only need to test receiver HA with the CEP HA configuration, you
need to run the receiver nodes with the receiver profile
(i.e. start DAS with *./wso2server.sh -receiverNode*). This avoids
running Spark scripts on the receiver nodes.
Also make sure Axis2 clustering is set up so that all the DAS nodes join
a single cluster, as in the sketch below.
(Test this by shutting down each node independently and checking that the
other nodes print a log saying a member has left the cluster.)
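
A trimmed sketch of the kind of axis2.xml clustering section this refers
to (WKA membership scheme; the domain, hosts, and ports below are
placeholders, and the file shipped with your DAS version may differ):

    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
                enable="true">
        <parameter name="membershipScheme">wka</parameter>
        <parameter name="domain">wso2.das.domain</parameter>
        <parameter name="localMemberHost">10.0.0.1</parameter>
        <parameter name="localMemberPort">4000</parameter>
        <!-- list every DAS node as a well-known member -->
        <members>
            <member>
                <hostName>10.0.0.1</hostName>
                <port>4000</port>
            </member>
            <member>
                <hostName>10.0.0.2</hostName>
                <port>4000</port>
            </member>
        </members>
    </clustering>
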
Thanks.



On Mon, Jan 25, 2016 at 7:55 AM, Roshan Wijesena  wrote:

> Hi  Niranda/DAS team,
>
> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
> when one server is in down situation. I am getting this exception
> periodically, and spark scripts are not running at all. it seems we can
> *not* survive when one server is in down situation?
>
> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
> executing the scheduled task for the script: is_log_analytics
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
> Spark SQL Context is not available. Check if the cluster has instantiated
> properly.
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
> at
> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
> at
> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> -Roshan
>
>
>
>
>
>
>
> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:
>
>> Hi Niranda / Inosh,
>>
>> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
>> now. Did not appear for a while.
>>
>> -Roshan
>>
>> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
>> wrote:
>>
>>> Hi Roshan,
>>>
>>> This happens when you have a malformed HA cluster. When you put the
>>> master count as 2, the spark cluster would not get initiated until there
>>> are 2 members in the analytics cluster. when the count as 2 and there is a
>>> task scheduled already, you may come across this issue, until the 2nd node
>>> is up and running. You should see that after sometime, the exception gets
>>> resolved., and that is when the analytics cluster is at a workable state.
>>>
>>> But I agree, an NPE is not acceptable here and this has been already
>>> fixed in 3.0.1 [1]
>>>
>>> as per the query modification, yes, the query gets modified to handle
>>> multi tenancy in the spark runtime.
>>>
>>> hope this resolves your issues.
>>>
>>> rgds
>>>
>>> [1] https://wso2.org/jira/browse/DAS-329
>>>
>>> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
>>> wrote:
>>>
  I reproduced the error. If we set carbon.spark.master.count value to 2
 this error will occur. Any solution available in this case?


 On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
 wrote:

> After I enabled the debug, it looks like below
>
> [2015-12-10 22:03:00,001]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: httpd_log_analytics for tenant id: -1234
> [2015-12-10 22:03:00,013] DEBUG
> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
> [2015-12-10 22:03:00,013] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.lang.NullPointerException
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
> at
> 

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Roshan Wijesena
Hi Guys,

I have set carbon.spark.master.count to 2 in both DAS servers'
repository/conf/analytics/spark/spark-defaults.conf, as shown below. The
error is still being printed periodically; it has now been going on for
more than 10 minutes.
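
For reference, this is roughly what the relevant line looks like in
repository/conf/analytics/spark/spark-defaults.conf on each analyzer node
(keys and values are whitespace-separated in that file; other properties
omitted):

    # number of Spark masters in the analytics cluster (2 allows master failover)
    carbon.spark.master.count  2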

On Sun, Jan 24, 2016 at 9:02 PM, Niranda Perera  wrote:

> Hi Roshan,
>
> I agree with Inosh. It takes few seconds for spark to recover, until then
> you might see this exception. Do you see this exception through out?
>
> best
>
> On Mon, Jan 25, 2016 at 8:13 AM, Inosh Goonewardena 
> wrote:
>
>> Hi Roshan,
>>
>> How many analyzer nodes are there in the cluster? If the master count is
>> set to 1 and the master is down, spark cluster will not survive. It you set
>> the master count to 2, then if the master is down other node become the
>> master and it survive. However until spark context is initialize properly
>> in the other node(will take roughly about 5 - 30secs) you will see above
>> error.
>>
>> On Sun, Jan 24, 2016 at 8:25 PM, Roshan Wijesena  wrote:
>>
>>> Hi  Niranda/DAS team,
>>>
>>> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
>>> when one server is in down situation. I am getting this exception
>>> periodically, and spark scripts are not running at all. it seems we can
>>> *not* survive when one server is in down situation?
>>>
>>> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
>>> executing the scheduled task for the script: is_log_analytics
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
>>> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
>>> Spark SQL Context is not available. Check if the cluster has instantiated
>>> properly.
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
>>> at
>>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
>>> at
>>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>> -Roshan
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena 
>>> wrote:
>>>
 Hi Niranda / Inosh,

 Thanks a lot for the quick call and reply. Yes issue seems to be fixed
 now. Did not appear for a while.

 -Roshan

 On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
 wrote:

> Hi Roshan,
>
> This happens when you have a malformed HA cluster. When you put the
> master count as 2, the spark cluster would not get initiated until there
> are 2 members in the analytics cluster. when the count as 2 and there is a
> task scheduled already, you may come across this issue, until the 2nd node
> is up and running. You should see that after sometime, the exception gets
> resolved., and that is when the analytics cluster is at a workable state.
>
> But I agree, an NPE is not acceptable here and this has been already
> fixed in 3.0.1 [1]
>
> as per the query modification, yes, the query gets modified to handle
> multi tenancy in the spark runtime.
>
> hope this resolves your issues.
>
> rgds
>
> [1] https://wso2.org/jira/browse/DAS-329
>
> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
> wrote:
>
>>  I reproduced the error. If we set carbon.spark.master.count value to
>> 2 this error will occur. Any solution available in this case?
>>
>>
>> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
>> wrote:
>>
>>> After I enabled the debug, it looks like below
>>>
>>> [2015-12-10 22:03:00,001]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: httpd_log_analytics for tenant id: -1234
>>> [2015-12-10 22:03:00,013] DEBUG
>>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>>>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>>> 

[Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2015-12-10 Thread Roshan Wijesena
Hi DAS team,

I am getting the below null pointer exception while trying to execute a
scheduled task. What I simply did was create a two-node HA cluster and
try to run this example [1], which has a scheduled task. However, this
error is not observed in a fresh single-node pack.

The error is,

[2015-12-10 19:32:00,573]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: httpd_log_analytics for tenant id: -1234
[2015-12-10 19:32:21,899]  INFO
{org.wso2.carbon.event.processor.manager.core.internal.CarbonEventManagementService}
-  Starting polling event receivers
[2015-12-10 19:32:32,096] ERROR
{org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
executing task: null
java.lang.NullPointerException
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
at
org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
at
org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


[1] https://docs.wso2.com/display/DAS300/Analyzing+HTTPD+Logs

-- 
Roshan Wijesena.
Senior Software Engineer-WSO2 Inc.
Mobile: *+94719154640*
Email: ros...@wso2.com
*WSO2, Inc. :** wso2.com *
lean.enterprise.middleware.


Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2015-12-10 Thread Roshan Wijesena
After I enabled debug logging, it looks like below:

[2015-12-10 22:03:00,001]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: httpd_log_analytics for tenant id: -1234
[2015-12-10 22:03:00,013] DEBUG
{org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
 Executing : CREATE TEMPORARY TABLE X1234_HttpLogTable USING
org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS", tenantId "-1234")
[2015-12-10 22:03:00,013] ERROR
{org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
executing task: null
java.lang.NullPointerException
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
at
org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
at
org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Did that query get modified?

 CREATE TEMPORARY TABLE X1234_HttpLogTable USING
org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS", tenantId "-1234")






On Thu, Dec 10, 2015 at 6:53 PM, Roshan Wijesena  wrote:

> Hi DAS teaam,
>
> I am getting below null pointer exception while trying to execute a
> scheduled task. What I  simply did was created a two node HA cluster and
> tried to run this example[1]. It has a scheduled task. However, this error
> can not be observed in a single node fresh pack.
>
> The error is,
>
> [2015-12-10 19:32:00,573]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: httpd_log_analytics for tenant id: -1234
> [2015-12-10 19:32:21,899]  INFO
> {org.wso2.carbon.event.processor.manager.core.internal.CarbonEventManagementService}
> -  Starting polling event receivers
> [2015-12-10 19:32:32,096] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.lang.NullPointerException
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
> at
> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
> at
> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
>
> [1] https://docs.wso2.com/display/DAS300/Analyzing+HTTPD+Logs
>
> --
> Roshan Wijesena.
> Senior Software Engineer-WSO2 Inc.
> Mobile: *+94719154640*
> Email: ros...@wso2.com
> *WSO2, Inc. :** wso2.com *
> lean.enterprise.middleware.
>



-- 
Roshan Wijesena.
Senior Software Engineer-WSO2 Inc.
Mobile: *+94719154640*
Email: ros...@wso2.com
*WSO2, Inc. :** wso2.com *
lean.enterprise.middleware.


Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2015-12-10 Thread Roshan Wijesena
Hi Niranda / Inosh,

Thanks a lot for the quick call and reply. Yes, the issue seems to be
fixed now; it has not appeared for a while.

-Roshan

On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera  wrote:

> Hi Roshan,
>
> This happens when you have a malformed HA cluster. When you put the master
> count as 2, the spark cluster would not get initiated until there are 2
> members in the analytics cluster. when the count as 2 and there is a task
> scheduled already, you may come across this issue, until the 2nd node is up
> and running. You should see that after sometime, the exception gets
> resolved., and that is when the analytics cluster is at a workable state.
>
> But I agree, an NPE is not acceptable here and this has been already fixed
> in 3.0.1 [1]
>
> as per the query modification, yes, the query gets modified to handle
> multi tenancy in the spark runtime.
>
> hope this resolves your issues.
>
> rgds
>
> [1] https://wso2.org/jira/browse/DAS-329
>
> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena  wrote:
>
>>  I reproduced the error. If we set carbon.spark.master.count value to 2
>> this error will occur. Any solution available in this case?
>>
>>
>> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena  wrote:
>>
>>> After I enabled the debug, it looks like below
>>>
>>> [2015-12-10 22:03:00,001]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: httpd_log_analytics for tenant id: -1234
>>> [2015-12-10 22:03:00,013] DEBUG
>>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>>>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>>>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>>> [2015-12-10 22:03:00,013] ERROR
>>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>>> executing task: null
>>> java.lang.NullPointerException
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
>>> at
>>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
>>> at
>>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>> does that query got modified?
>>>
>>>  CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>>>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Dec 10, 2015 at 6:53 PM, Roshan Wijesena 
>>> wrote:
>>>
 Hi DAS teaam,

 I am getting below null pointer exception while trying to execute a
 scheduled task. What I  simply did was created a two node HA cluster and
 tried to run this example[1]. It has a scheduled task. However, this error
 can not be observed in a single node fresh pack.

 The error is,

 [2015-12-10 19:32:00,573]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: httpd_log_analytics for tenant id: -1234
 [2015-12-10 19:32:21,899]  INFO
 {org.wso2.carbon.event.processor.manager.core.internal.CarbonEventManagementService}
 -  Starting polling event receivers
 [2015-12-10 19:32:32,096] ERROR
 {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
 executing task: null
 java.lang.NullPointerException
 at
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
 at
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
 at
 org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
 at
 org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
 at
 

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2015-12-10 Thread Roshan Wijesena
I reproduced the error. If we set the carbon.spark.master.count value to
2, this error occurs. Is any solution available in this case?


On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena  wrote:

> After I enabled the debug, it looks like below
>
> [2015-12-10 22:03:00,001]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: httpd_log_analytics for tenant id: -1234
> [2015-12-10 22:03:00,013] DEBUG
> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
> [2015-12-10 22:03:00,013] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.lang.NullPointerException
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
> at
> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
> at
> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> does that query got modified?
>
>  CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>
>
>
>
>
>
> On Thu, Dec 10, 2015 at 6:53 PM, Roshan Wijesena  wrote:
>
>> Hi DAS teaam,
>>
>> I am getting below null pointer exception while trying to execute a
>> scheduled task. What I  simply did was created a two node HA cluster and
>> tried to run this example[1]. It has a scheduled task. However, this error
>> can not be observed in a single node fresh pack.
>>
>> The error is,
>>
>> [2015-12-10 19:32:00,573]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: httpd_log_analytics for tenant id: -1234
>> [2015-12-10 19:32:21,899]  INFO
>> {org.wso2.carbon.event.processor.manager.core.internal.CarbonEventManagementService}
>> -  Starting polling event receivers
>> [2015-12-10 19:32:32,096] ERROR
>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>> executing task: null
>> java.lang.NullPointerException
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
>> at
>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
>> at
>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>>
>> [1] https://docs.wso2.com/display/DAS300/Analyzing+HTTPD+Logs
>>
>> --
>> Roshan Wijesena.
>> Senior Software Engineer-WSO2 Inc.
>> Mobile: *+94719154640*
>> Email: ros...@wso2.com
>> *WSO2, Inc. :** wso2.com *
>> lean.enterprise.middleware.
>>
>
>
>
> --
> Roshan Wijesena.
> Senior Software Engineer-WSO2 Inc.
> Mobile: *+94719154640*
> Email: ros...@wso2.com
> *WSO2, Inc. :** wso2.com *
> lean.enterprise.middleware.
>



-- 
Roshan Wijesena.
Senior Software Engineer-WSO2 Inc.
Mobile: *+94719154640*
Email: ros...@wso2.com
*WSO2, Inc. :** wso2.com *
lean.enterprise.middleware.

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2015-12-10 Thread Niranda Perera
Hi Roshan,

This happens when you have a malformed HA cluster. When you set the master
count to 2, the Spark cluster does not get initiated until there are 2
members in the analytics cluster. So with the count set to 2 and a task
already scheduled, you may come across this issue until the 2nd node is up
and running. You should see that after some time the exception goes away,
and that is when the analytics cluster is in a workable state.

But I agree, an NPE is not acceptable here, and this has already been
fixed in 3.0.1 [1].

As for the query modification: yes, the query gets modified to handle
multi tenancy in the Spark runtime, roughly as sketched below.
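
To illustrate (a sketch based on the HTTPD logs sample and the debug
output earlier in this thread, not the exact script text; I am assuming
the script uses the CarbonAnalytics shorthand for the provider): a query
written for the super tenant as

    CREATE TEMPORARY TABLE HttpLogTable
    USING CarbonAnalytics
    OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS")

ends up being executed by the Spark runtime with an encoded tenant prefix
on the table name and the tenantId option injected:

    CREATE TEMPORARY TABLE X1234_HttpLogTable
    USING org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
    OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS", tenantId "-1234")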

hope this resolves your issues.

rgds

[1] https://wso2.org/jira/browse/DAS-329

On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena  wrote:

>  I reproduced the error. If we set carbon.spark.master.count value to 2
> this error will occur. Any solution available in this case?
>
>
> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena  wrote:
>
>> After I enabled the debug, it looks like below
>>
>> [2015-12-10 22:03:00,001]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: httpd_log_analytics for tenant id: -1234
>> [2015-12-10 22:03:00,013] DEBUG
>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>> [2015-12-10 22:03:00,013] ERROR
>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>> executing task: null
>> java.lang.NullPointerException
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
>> at
>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
>> at
>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> does that query got modified?
>>
>>  CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>>
>>
>>
>>
>>
>>
>> On Thu, Dec 10, 2015 at 6:53 PM, Roshan Wijesena  wrote:
>>
>>> Hi DAS teaam,
>>>
>>> I am getting below null pointer exception while trying to execute a
>>> scheduled task. What I  simply did was created a two node HA cluster and
>>> tried to run this example[1]. It has a scheduled task. However, this error
>>> can not be observed in a single node fresh pack.
>>>
>>> The error is,
>>>
>>> [2015-12-10 19:32:00,573]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: httpd_log_analytics for tenant id: -1234
>>> [2015-12-10 19:32:21,899]  INFO
>>> {org.wso2.carbon.event.processor.manager.core.internal.CarbonEventManagementService}
>>> -  Starting polling event receivers
>>> [2015-12-10 19:32:32,096] ERROR
>>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>>> executing task: null
>>> java.lang.NullPointerException
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
>>> at
>>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
>>> at
>>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>>