Hi Malith,

Yes, correct! You need to change the script.
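
For example, assuming the rest of the statement stays the same, the failing
query from your log should go through once the option is renamed (a minimal
sketch; the schema and primaryKeys values are elided here since they don't
change):

  CREATE TEMPORARY TABLE APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics
  OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST",
          schema "...",
          primaryKeys "...",
          incrementalParams "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",
          mergeSchema "false");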

Additionally, there are some changes in the CarbonJDBC connector as well,
so you might need to watch out for those.
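
For instance, a CarbonJDBC temporary table in 3.1.0 is defined along these
lines (a rough sketch only; the data source and table names below are
placeholders, so please verify the exact option set against the 3.1.0
connector docs):

  CREATE TEMPORARY TABLE APIM_REQUEST_SUMMARY USING CarbonJDBC
  OPTIONS(dataSource "WSO2AM_STATS_DB", tableName "API_REQUEST_SUMMARY");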

Please check with the APIM and ESB teams whether we are doing a feature
release with the DAS 3.1.0 changes.

cheers

On Wed, Sep 21, 2016 at 5:11 AM, Malith Munasinghe <mali...@wso2.com> wrote:

> Hi All,
>
> While preparing a DAS 3.1.0 setup to run APIM Analytics, I added the
> features as described in [1]
> <https://docs.wso2.com/display/AM200/Installing+WSO2+APIM+Analytics+Features>.
> After deploying the CApp for APIM Analytics, I ran into the error below.
> According to the error, *incrementalProcessing* is not a valid option,
> whereas according to [2]
> <https://docs.wso2.com/display/DAS310/Incremental+Processing> the option
> should be named *incrementalParams*. To get DAS 3.1.0 to process APIM
> Analytics, do we have to change the scripts to use this option as well?
>
>
> TID: [-1234] [] [2016-09-21 08:54:00,019] ERROR
> {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService} -
> Error while executing query:
> CREATE TEMPORARY TABLE APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics
> OPTIONS(tableName "ORG_WSO2_APIMGT_STATISTICS_PERMINUTEREQUEST",
>   schema "year INT -i, month INT -i, day INT -i, hour INT -i, minute INT -i,
>     consumerKey STRING, context STRING, api_version STRING, api STRING,
>     version STRING, requestTime LONG, userId STRING, hostName STRING,
>     apiPublisher STRING, total_request_count LONG, resourceTemplate STRING,
>     method STRING, applicationName STRING, tenantDomain STRING,
>     userAgent STRING, resourcePath STRING, request INT, applicationId STRING,
>     tier STRING, throttledOut BOOLEAN, clientIp STRING,
>     applicationOwner STRING, _timestamp LONG -i",
>   primaryKeys "year, month, day, hour, minute, consumerKey, context,
>     api_version, userId, hostName, apiPublisher, resourceTemplate, method,
>     userAgent, clientIp",
>   incrementalProcessing "APIMGT_PERMINUTE_REQUEST_DATA, HOUR",
>   mergeSchema "false")
> {org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService}
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
> Exception in executing query CREATE TEMPORARY TABLE
> APIMGT_PERMINUTE_REQUEST_DATA USING CarbonAnalytics OPTIONS(... same
> options as above ...)
> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:764)
> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:721)
> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
> at org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
> at org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:60)
> at org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Unknown options : incrementalprocessing
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.checkParameters(AnalyticsRelationProvider.java:123)
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.setParameters(AnalyticsRelationProvider.java:113)
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:75)
> at org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider.createRelation(AnalyticsRelationProvider.java:45)
> at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
> at org.apache.spark.sql.execution.datasources.CreateTempTableUsing.run(ddl.scala:92)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
> at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
> at org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:760)
> ... 11 more
> TID: [-1234] [] [2016-09-21 08:54:00,020] ERROR
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Error while executing
> the scheduled task for the script: APIM_INCREMENTAL_PROCESSING_SCRIPT
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
> [same exception and stack trace as above]
>
>
> [1] https://docs.wso2.com/display/AM200/Installing+WSO2+APIM+Analytics+Features
> [2] https://docs.wso2.com/display/DAS310/Incremental+Processing
>
> Regards,
> Malith
> --
> Malith Munasinghe | Software Engineer
> M: +94 (71) 9401122
> E: mali...@wso2.com
> W: http://wso2.com
>



-- 
*Niranda Perera*
Software Engineer, WSO2 Inc.
Mobile: +94-71-554-8430
Twitter: @n1r44 <https://twitter.com/N1R44>
https://pythagoreanscript.wordpress.com/
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
