Babulal created CARBONDATA-2743:
-----------------------------------
Summary: [MV] After MV creation, limit queries throw exceptions for tables which do not have an MV datamap
Key: CARBONDATA-2743
URL: https://issues.apache.org/jira/browse/CARBONDATA-2743
Project: CarbonData
Issue Type: Bug
Reporter: Babulal
0: jdbc:hive2://10.18.16.173:23040/default> create table mytest_50_s13 (name string, rownumber string, m1 float) stored by 'carbondata' TBLPROPERTIES('sort_scope'='global_sort');
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (12.209 seconds)
0: jdbc:hive2://10.18.16.173:23040/default> load data inpath 'hdfs://hacluster/tmp/data/cbo_1.csv' into table mytest_50_s13 options('FILEHEADER'='name,rownumber,m1');
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (79.901 seconds)
0: jdbc:hive2://10.18.16.173:23040/default> create datamap map10 using 'mv' as select sum(m1), rownumber from mytest_50_s13 group by rownumber;
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (17.05 seconds)
0: jdbc:hive2://10.18.16.173:23040/default> show datamap on table mytest_50_s13;
+--------------+------------+-------------------+---------------------+--+
| DataMapName | ClassName | Associated Table | DataMap Properties |
+--------------+------------+-------------------+---------------------+--+
| map10 | mv | babu.map10_table | |
+--------------+------------+-------------------+---------------------+--+
1 row selected (0.815 seconds)
Now create one more table without an MV datamap:
0: jdbc:hive2://10.18.16.173:23040/default> create table mytest_50_s14 (name string, rownumber string, m1 float) stored by 'carbondata' TBLPROPERTIES('sort_scope'='global_sort');
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (12.209 seconds)
0: jdbc:hive2://10.18.16.173:23040/default> load data inpath 'hdfs://hacluster/tmp/data/cbo_1.csv' into table mytest_50_s14 options('FILEHEADER'='name,rownumber,m1');
+---------+--+
| Result |
+---------+--+
+---------+--+
No rows selected (79.901 seconds)
0: jdbc:hive2://10.18.16.173:23040/default> select * from mytest_50_s14 limit 10;
Error: java.lang.UnsupportedOperationException: unsupported operation: Modular plan not supported (e.g. has subquery expression) for
GlobalLimit 10
+- LocalLimit 10
   +- Relation[name#1026,rownumber#1027,m1#1028] CarbonDatasourceHadoopRelation [ Database name :babu, Table name :mytest_50_s14, Schema :Some(StructType(StructField(name,StringType,true), StructField(rownumber,StringType,true), StructField(m1,DoubleType,true))) ] (state=,code=0)
2018-07-13 00:42:51,540 | INFO | [pool-25-thread-32] | OperationId=b5c2c8b2-1ef4-4894-a709-2a738bd81f76 Result=FAIL | org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:280)
2018-07-13 00:42:51,540 | ERROR | [pool-25-thread-32] | Error executing query, currentState RUNNING, | org.apache.spark.internal.Logging$class.logError(Logging.scala:91)
java.lang.UnsupportedOperationException: unsupported operation: Modular plan not supported (e.g. has subquery expression) for
GlobalLimit 10
+- LocalLimit 10
   +- Relation[name#1026,rownumber#1027,m1#1028] CarbonDatasourceHadoopRelation [ Database name :babu, Table name :mytest_50_s14, Schema :Some(StructType(StructField(name,StringType,true), StructField(rownumber,StringType,true), StructField(m1,DoubleType,true))) ]
    at org.apache.carbondata.mv.plans.package$.supports(package.scala:52)
    at org.apache.carbondata.mv.plans.modular.Modularizer.org$apache$carbondata$mv$plans$modular$Modularizer$$modularizeCore(Modularizer.scala:102)
    at org.apache.carbondata.mv.plans.modular.Modularizer.modularize(Modularizer.scala:65)
    at org.apache.carbondata.mv.rewrite.QueryRewrite.modularPlan$lzycompute(QueryRewrite.scala:50)
    at org.apache.carbondata.mv.rewrite.QueryRewrite.modularPlan(QueryRewrite.scala:49)
    at org.apache.carbondata.mv.rewrite.QueryRewrite.withSummaryData$lzycompute(QueryRewrite.scala:53)
    at org.apache.carbondata.mv.rewrite.QueryRewrite.withSummaryData(QueryRewrite.scala:52)
    at org.apache.carbondata.mv.rewrite.QueryRewrite.withMVTable$lzycompute(QueryRewrite.scala:55)
    at org.apache.carbondata.mv.rewrite.QueryRewrite.withMVTable(QueryRewrite.scala:55)
    at org.apache.carbondata.mv.datamap.MVAnalyzerRule.apply(MVAnalyzerRule.scala:68)
    at org.apache.carbondata.mv.datamap.MVAnalyzerRule.apply(MVAnalyzerRule.scala:38)
    at org.apache.spark.sql.hive.CarbonAnalyzer.execute(CarbonAnalyzer.scala:46)
    at org.apache.spark.sql.hive.CarbonAnalyzer.execute(CarbonAnalyzer.scala:27)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:75)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:73)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:56)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:244)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:176)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:173)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1778)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:186)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2018-07-13 00:42:51,541 | ERROR | [pool-25-thread-32] | Error running hive query: | org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:180)
org.apache.hive.service.cli.HiveSQLException: java.lang.UnsupportedOperationException: unsupported operation: Modular plan not supported (e.g. has subquery expression) for
GlobalLimit 10
+- LocalLimit 10
   +- Relation[name#1026,rownumber#1027,m1#1028] CarbonDatasourceHadoopRelation [ Database name :babu, Table name :mytest_50_s14, Schema :Some(StructType(StructField(name,StringType,true), StructField(rownumber,StringType,true), StructField(m1,DoubleType,true))) ]
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:289)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:176)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:173)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1778)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStat
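The traces above show the root cause: once any MV datamap exists in the session, MVAnalyzerRule passes every incoming query through the Modularizer, which rejects the plain GlobalLimit plan and lets the UnsupportedOperationException escape to the user, even though mytest_50_s14 has no datamap at all. A minimal sketch of the expected defensive behaviour (hypothetical object and method names, not the actual CarbonData code):

```scala
// FallbackRewrite is a hypothetical illustration only: when the MV
// modularizer cannot handle a plan (e.g. a bare limit query on a table
// with no MV datamap), the rewrite rule should return the original plan
// unchanged instead of propagating the exception.
object FallbackRewrite {
  def rewrite[P](plan: P)(modularize: P => P): P =
    try modularize(plan)                        // attempt the MV rewrite
    catch {
      // plan not modularizable: run the query without MV rewriting
      case _: UnsupportedOperationException => plan
    }
}
```

With that pattern, the failing select on mytest_50_s14 would simply execute unrewritten, while queries that do match the MV continue to be redirected.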
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)