Hi all,

When I clustered two IS Analytics 5.5.0 nodes using the AWS membership scheme,
I got the following error:

Error while executing query:

CREATE TEMPORARY TABLE isSessionAnalyticsPerMinute USING CarbonAnalytics OPTIONS (tableName "org_wso2_is_analytics_stream_SessionStatPerMinute", schema "meta_tenantId INT -i, bucketId LONG, bucketStart LONG -i, bucketEnd LONG -i, year INT, month INT, day INT, hour INT, minute INT, activeSessionCount LONG, newSessionCount LONG, terminatedSessionCount LONG, _timestamp LONG -i", primaryKeys "meta_tenantId, bucketId, bucketStart, bucketEnd", incrementalParams "isSessionAnalyticsPerHour, HOUR", mergeSchema "false")

org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Exception in executing query CREATE TEMPORARY TABLE isSessionAnalyticsPerMinute USING CarbonAnalytics OPTIONS (tableName "org_wso2_is_analytics_stream_SessionStatPerMinute", schema "meta_tenantId INT -i, bucketId LONG, bucketStart LONG -i, bucketEnd LONG -i, year INT, month INT, day INT, hour INT, minute INT, activeSessionCount LONG, newSessionCount LONG, terminatedSessionCount LONG, _timestamp LONG -i", primaryKeys "meta_tenantId, bucketId, bucketStart, bucketEnd", incrementalParams "isSessionAnalyticsPerHour, HOUR", mergeSchema "false")
    at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
    at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721)
    at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
    at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
    at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
    at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException: Spark SQL Context is not available. Check if the cluster has instantiated properly.
    at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:755)
    ... 11 more

This looks similar to the known error mentioned in [1]. Although that error is
expected to appear a few times during startup, in my case it keeps recurring.

I ran into the same issue when trying to cluster two EI Analytics nodes as well.

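For context, I enabled clustering in repository/conf/axis2.xml along these
lines. This is only a sketch: the parameter names follow the WSO2 Hazelcast
clustering configuration as I understand it, and all values below are
placeholders rather than my actual settings.

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- AWS-based member discovery instead of multicast/WKA -->
    <parameter name="membershipScheme">aws</parameter>
    <parameter name="domain">wso2.carbon.domain</parameter>
    <!-- Placeholder credentials and filters used for EC2 instance discovery -->
    <parameter name="accessKey">AWS_ACCESS_KEY</parameter>
    <parameter name="secretKey">AWS_SECRET_KEY</parameter>
    <parameter name="region">us-east-1</parameter>
    <parameter name="securityGroup">analytics-cluster-sg</parameter>
    <parameter name="tagKey">cluster</parameter>
    <parameter name="tagValue">is-analytics</parameter>
</clustering>
```

Happy to share the exact (sanitized) clustering section if that helps with
troubleshooting.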
Any suggestions are appreciated.

[1] https://docs.wso2.com/display/IS550/Setting+Up+Deployment+Pattern+2

Ching Shi
Software Engineer
WSO2

Email: [email protected]
Mobile: +94770186272
Web: http://wso2.com
_______________________________________________
Dev mailing list
[email protected]
http://wso2.org/cgi-bin/mailman/listinfo/dev
