[ https://issues.apache.org/jira/browse/HIVE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036670#comment-16036670 ]

liyunzhang_intel commented on HIVE-16780:
-----------------------------------------

[~csun]: the case "multiple sources, single key" passes if hive.tez.dynamic.semijoin.reduction is set to false.
bq. Maybe we should first disable this optimization for Spark in DynamicPartitionPruningOptimization
Agreed; updated the patch as HIVE-16780.1.patch. A sketch of the kind of guard involved follows.
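
For reference, a minimal sketch of the sort of guard such a patch could add (the helper below is illustrative, not the actual patch; only the two property names come from this issue):

{code}
import org.apache.hadoop.conf.Configuration;

/** Illustrative guard: only honor semijoin reduction when running on Tez. */
final class SemiJoinReductionGuard {
  private SemiJoinReductionGuard() {}

  static boolean semiJoinReductionEnabled(Configuration conf) {
    // hive.tez.dynamic.semijoin.reduction defaults to true, but the runtime
    // machinery that delivers the min/max/bloom-filter values to executors
    // exists only for Tez, so treat the optimization as disabled for every
    // other engine, including Spark.
    return conf.getBoolean("hive.tez.dynamic.semijoin.reduction", true)
        && "tez".equalsIgnoreCase(conf.get("hive.execution.engine", "mr"));
  }
}
{code}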

The explain plan when hive.tez.dynamic.semijoin.reduction is enabled:
{noformat}
STAGE DEPENDENCIES:
  Stage-2 is a root stage
  Stage-3 depends on stages: Stage-2
  Stage-1 depends on stages: Stage-3
  Stage-0 depends on stages: Stage-1

STAGE PLANS:
  Stage: Stage-2
    Spark
      DagName: root_20170605152828_4c4f4f82-d08f-41e9-9a07-4147b8529dd0:2
      Vertices:
        Map 4 
            Map Operator Tree:
                TableScan
                  alias: srcpart_date
                  filterExpr: ds is not null (type: boolean)
                  Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                  Filter Operator
                    predicate: ds is not null (type: boolean)
                    Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                    Spark HashTable Sink Operator
                      keys:
                        0 ds (type: string)
                        1 ds (type: string)
                    Select Operator
                      expressions: ds (type: string)
                      outputColumnNames: _col0
                      Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        keys: _col0 (type: string)
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                        Spark Partition Pruning Sink Operator
                          partition key expr: ds
                          Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                          target column name: ds
                          target work: Map 1
            Local Work:
              Map Reduce Local Work
        Map 5 
            Map Operator Tree:
                TableScan
                  alias: srcpart_hour
                  filterExpr: (hr is not null and (hr BETWEEN DynamicValue(RS_7_srcpart__col3_min) AND DynamicValue(RS_7_srcpart__col3_max) and in_bloom_filter(hr, DynamicValue(RS_7_srcpart__col3_bloom_filter)))) (type: boolean)
                  Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                  Filter Operator
                    predicate: (hr is not null and (hr BETWEEN DynamicValue(RS_7_srcpart__col3_min) AND DynamicValue(RS_7_srcpart__col3_max) and in_bloom_filter(hr, DynamicValue(RS_7_srcpart__col3_bloom_filter)))) (type: boolean)
                    Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                    Spark HashTable Sink Operator
                      keys:
                        0 _col3 (type: string)
                        1 hr (type: string)
                    Select Operator
                      expressions: hr (type: string)
                      outputColumnNames: _col0
                      Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        keys: _col0 (type: string)
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                        Spark Partition Pruning Sink Operator
                          partition key expr: hr
                          Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                          target column name: hr
                          target work: Map 1
            Local Work:
              Map Reduce Local Work

  Stage: Stage-3
    Spark
      DagName: root_20170605152828_4c4f4f82-d08f-41e9-9a07-4147b8529dd0:3
      Vertices:
        Map 4 
            Map Operator Tree:
                TableScan
                  alias: srcpart_date
                  filterExpr: ds is not null (type: boolean)
                  Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                  Filter Operator
                    predicate: ds is not null (type: boolean)
                    Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                    Spark HashTable Sink Operator
                      keys:
                        0 ds (type: string)
                        1 ds (type: string)
                    Select Operator
                      expressions: ds (type: string)
                      outputColumnNames: _col0
                      Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        keys: _col0 (type: string)
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                        Spark Partition Pruning Sink Operator
                          partition key expr: ds
                          Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                          target column name: ds
                          target work: Map 1
            Local Work:
              Map Reduce Local Work
        Map 5 
            Map Operator Tree:
                TableScan
                  alias: srcpart_hour
                  filterExpr: (hr is not null and (hr BETWEEN DynamicValue(RS_7_srcpart__col3_min) AND DynamicValue(RS_7_srcpart__col3_max) and in_bloom_filter(hr, DynamicValue(RS_7_srcpart__col3_bloom_filter)))) (type: boolean)
                  Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                  Filter Operator
                    predicate: (hr is not null and (hr BETWEEN DynamicValue(RS_7_srcpart__col3_min) AND DynamicValue(RS_7_srcpart__col3_max) and in_bloom_filter(hr, DynamicValue(RS_7_srcpart__col3_bloom_filter)))) (type: boolean)
                    Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                    Spark HashTable Sink Operator
                      keys:
                        0 _col3 (type: string)
                        1 hr (type: string)
                    Select Operator
                      expressions: hr (type: string)
                      outputColumnNames: _col0
                      Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        keys: _col0 (type: string)
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                        Spark Partition Pruning Sink Operator
                          partition key expr: hr
                          Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                          target column name: hr
                          target work: Map 1
            Local Work:
              Map Reduce Local Work

  Stage: Stage-1
    Spark
      Edges:
        Reducer 2 <- Map 6 (GROUP, 1)
        Reducer 3 <- Map 7 (GROUP, 1)
      DagName: root_20170605152828_4c4f4f82-d08f-41e9-9a07-4147b8529dd0:1
      Vertices:
        Map 6 
            Map Operator Tree:
                TableScan
                  alias: srcpart
                  Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
                  Map Join Operator
                    condition map:
                         Inner Join 0 to 1
                    keys:
                      0 ds (type: string)
                      1 ds (type: string)
                    outputColumnNames: _col3
                    input vertices:
                      1 Map 4
                    Statistics: Num rows: 2 Data size: 46 Basic stats: COMPLETE Column stats: NONE
                    Select Operator
                      expressions: _col3 (type: string)
                      outputColumnNames: _col0
                      Statistics: Num rows: 2 Data size: 46 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        aggregations: min(_col0), max(_col0), bloom_filter(_col0, expectedEntries=1000000)
                        mode: hash
                        outputColumnNames: _col0, _col1, _col2
                        Statistics: Num rows: 1 Data size: 552 Basic stats: COMPLETE Column stats: NONE
                        Reduce Output Operator
                          sort order: 
                          Statistics: Num rows: 1 Data size: 552 Basic stats: COMPLETE Column stats: NONE
                          value expressions: _col0 (type: string), _col1 (type: string), _col2 (type: binary)
            Local Work:
              Map Reduce Local Work
        Map 7 
            Map Operator Tree:
                TableScan
                  alias: srcpart
                  Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
                  Map Join Operator
                    condition map:
                         Inner Join 0 to 1
                    keys:
                      0 ds (type: string)
                      1 ds (type: string)
                    outputColumnNames: _col3
                    input vertices:
                      1 Map 4
                    Statistics: Num rows: 2 Data size: 46 Basic stats: COMPLETE Column stats: NONE
                    Map Join Operator
                      condition map:
                           Inner Join 0 to 1
                      keys:
                        0 _col3 (type: string)
                        1 hr (type: string)
                      input vertices:
                        1 Map 5
                      Statistics: Num rows: 2 Data size: 50 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        aggregations: count()
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                        Reduce Output Operator
                          sort order: 
                          Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                          value expressions: _col0 (type: bigint)
            Local Work:
              Map Reduce Local Work
        Reducer 2 
            Reduce Operator Tree:
              Group By Operator
                aggregations: min(VALUE._col0), max(VALUE._col1), bloom_filter(VALUE._col2, expectedEntries=1000000)
                mode: final
                outputColumnNames: _col0, _col1, _col2
                Statistics: Num rows: 1 Data size: 552 Basic stats: COMPLETE Column stats: NONE
                Reduce Output Operator
                  sort order: 
                  Statistics: Num rows: 1 Data size: 552 Basic stats: COMPLETE Column stats: NONE
                  value expressions: _col0 (type: string), _col1 (type: string), _col2 (type: binary)
        Reducer 3 
            Reduce Operator Tree:
              Group By Operator
                aggregations: count(VALUE._col0)
                mode: mergepartial
                outputColumnNames: _col0
                Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                File Output Operator
                  compressed: false
                  Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                  table:
                      input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                      serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        ListSink
{noformat}


The explain plan when hive.tez.dynamic.semijoin.reduction is disabled:
{noformat}
STAGE DEPENDENCIES:
  Stage-2 is a root stage
  Stage-1 depends on stages: Stage-2
  Stage-0 depends on stages: Stage-1

STAGE PLANS:
  Stage: Stage-2
    Spark
      DagName: root_20170605153113_6d0bbb45-8a4a-4b10-9b2c-4585abdf2a79:2
      Vertices:
        Map 3 
            Map Operator Tree:
                TableScan
                  alias: srcpart_date
                  filterExpr: ds is not null (type: boolean)
                  Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                  Filter Operator
                    predicate: ds is not null (type: boolean)
                    Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                    Spark HashTable Sink Operator
                      keys:
                        0 ds (type: string)
                        1 ds (type: string)
                    Select Operator
                      expressions: ds (type: string)
                      outputColumnNames: _col0
                      Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        keys: _col0 (type: string)
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                        Spark Partition Pruning Sink Operator
                          partition key expr: ds
                          Statistics: Num rows: 2 Data size: 42 Basic stats: COMPLETE Column stats: NONE
                          target column name: ds
                          target work: Map 1
            Local Work:
              Map Reduce Local Work
        Map 4 
            Map Operator Tree:
                TableScan
                  alias: srcpart_hour
                  filterExpr: hr is not null (type: boolean)
                  Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                  Filter Operator
                    predicate: hr is not null (type: boolean)
                    Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                    Spark HashTable Sink Operator
                      keys:
                        0 _col3 (type: string)
                        1 hr (type: string)
                    Select Operator
                      expressions: hr (type: string)
                      outputColumnNames: _col0
                      Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        keys: _col0 (type: string)
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                        Spark Partition Pruning Sink Operator
                          partition key expr: hr
                          Statistics: Num rows: 2 Data size: 10 Basic stats: COMPLETE Column stats: NONE
                          target column name: hr
                          target work: Map 1
            Local Work:
              Map Reduce Local Work

  Stage: Stage-1
    Spark
      Edges:
        Reducer 2 <- Map 1 (GROUP, 1)
      DagName: root_20170605153113_6d0bbb45-8a4a-4b10-9b2c-4585abdf2a79:1
      Vertices:
        Map 1 
            Map Operator Tree:
                TableScan
                  alias: srcpart
                  Statistics: Num rows: 1 Data size: 23248 Basic stats: PARTIAL Column stats: NONE
                  Map Join Operator
                    condition map:
                         Inner Join 0 to 1
                    keys:
                      0 ds (type: string)
                      1 ds (type: string)
                    outputColumnNames: _col3
                    input vertices:
                      1 Map 3
                    Statistics: Num rows: 2 Data size: 46 Basic stats: COMPLETE Column stats: NONE
                    Map Join Operator
                      condition map:
                           Inner Join 0 to 1
                      keys:
                        0 _col3 (type: string)
                        1 hr (type: string)
                      input vertices:
                        1 Map 4
                      Statistics: Num rows: 2 Data size: 50 Basic stats: COMPLETE Column stats: NONE
                      Group By Operator
                        aggregations: count()
                        mode: hash
                        outputColumnNames: _col0
                        Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                        Reduce Output Operator
                          sort order: 
                          Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                          value expressions: _col0 (type: bigint)
            Local Work:
              Map Reduce Local Work
        Reducer 2 
            Reduce Operator Tree:
              Group By Operator
                aggregations: count(VALUE._col0)
                mode: mergepartial
                outputColumnNames: _col0
                Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                File Output Operator
                  compressed: false
                  Statistics: Num rows: 1 Data size: 8 Basic stats: COMPLETE Column stats: NONE
                  table:
                      input format: org.apache.hadoop.mapred.SequenceFileInputFormat
                      output format: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
                      serde: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        ListSink

{noformat}

One interesting thing: when {{hive.tez.dynamic.semijoin.reduction}} is enabled, there is an extra reducer, Reducer 2 <- Map 6 (GROUP, 1). But what is the purpose of Reducer 2?
{noformat}
        Reducer 2 
            Reduce Operator Tree:
              Group By Operator
                aggregations: min(VALUE._col0), max(VALUE._col1), bloom_filter(VALUE._col2, expectedEntries=1000000)
                mode: final
                outputColumnNames: _col0, _col1, _col2
                Statistics: Num rows: 1 Data size: 552 Basic stats: COMPLETE Column stats: NONE
                Reduce Output Operator
                  sort order: 
                  Statistics: Num rows: 1 Data size: 552 Basic stats: COMPLETE Column stats: NONE
                  value expressions: _col0 (type: string), _col1 (type: string), _col2 (type: binary)
{noformat}
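
From the plans above, Reducer 2 appears to be the final-merge side of the semijoin reduction: Map 6 computes partial min(_col0), max(_col0) and bloom_filter(_col0) over the join keys, Reducer 2 merges them (mode: final), and its output is meant to be published as the DynamicValue(RS_7_srcpart__col3_min/_max/_bloom_filter) values that Map 5's filterExpr consumes. On Tez those values reach the consumer through a DynamicValueRegistry; on Spark nothing publishes them, which matches the "Failed to retrieve dynamic value" failure in the description. A toy model of that consumer-side lookup, just to show the failure mode (the real classes are DynamicValue/DynamicValueRegistry; this map is a stand-in):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy registry modeling how a DynamicValue consumer fails when nothing published the value. */
final class DynamicValueModel {
  private static final Map<String, Object> REGISTRY = new ConcurrentHashMap<>();

  static void publish(String id, Object value) {
    // On Tez, the final aggregation (Reducer 2 here) effectively does this
    // before the consumer map work starts.
    REGISTRY.put(id, value);
  }

  static Object getValue(String id) {
    Object v = REGISTRY.get(id);
    if (v == null) {
      // On Spark nothing publishes the value, so the consumer fails just like
      // "Failed to retrieve dynamic value for RS_7_srcpart__col3_min".
      throw new IllegalStateException("Failed to retrieve dynamic value for " + id);
    }
    return v;
  }
}
{code}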


> Case "multiple sources, single key" in spark_dynamic_pruning.q fails 
> ---------------------------------------------------------------------
>
>                 Key: HIVE-16780
>                 URL: https://issues.apache.org/jira/browse/HIVE-16780
>             Project: Hive
>          Issue Type: Bug
>            Reporter: liyunzhang_intel
>            Assignee: liyunzhang_intel
>         Attachments: HIVE-16780.patch
>
>
> script.q
> {code}
> set hive.optimize.ppd=true;
> set hive.ppd.remove.duplicatefilters=true;
> set hive.spark.dynamic.partition.pruning=true;
> set hive.optimize.metadataonly=false;
> set hive.optimize.index.filter=true;
> set hive.strict.checks.cartesian.product=false;
> set hive.spark.dynamic.partition.pruning=true;
> -- multiple sources, single key
> select count(*) from srcpart join srcpart_date on (srcpart.ds = 
> srcpart_date.ds) join srcpart_hour on (srcpart.hr = srcpart_hour.hr)
> {code}
> If "hive.optimize.index.filter" is disabled, the case passes; otherwise it
> always hangs in the first job. Exception:
> {code}
> 17/05/27 23:39:45 DEBUG Executor task launch worker-0 PerfLogger: </PERFLOG method=SparkInitializeOperators start=1495899585574 end=1495899585933 duration=359 from=org.apache.hadoop.hive.ql.exec.spark.SparkRecordHandler>
> 17/05/27 23:39:45 INFO Executor task launch worker-0 Utilities: PLAN PATH = hdfs://bdpe41:8020/tmp/hive/root/029a2d8a-c6e5-4ea9-adea-ef8fbea3cde2/hive_2017-05-27_23-39-06_464_5915518562441677640-1/-mr-10007/617d9dd6-9f9a-4786-8131-a7b98e8abc3e/map.xml
> 17/05/27 23:39:45 DEBUG Executor task launch worker-0 Utilities: Found plan in cache for name: map.xml
> 17/05/27 23:39:45 DEBUG Executor task launch worker-0 DFSClient: Connecting to datanode 10.239.47.162:50010
> 17/05/27 23:39:45 DEBUG Executor task launch worker-0 MapOperator: Processing alias(es) srcpart_hour for file hdfs://bdpe41:8020/user/hive/warehouse/srcpart_hour/000008_0
> 17/05/27 23:39:45 DEBUG Executor task launch worker-0 ObjectCache: Creating root_20170527233906_ac2934e1-2e58-4116-9f0d-35dee302d689_DynamicValueRegistry
> 17/05/27 23:39:45 ERROR Executor task launch worker-0 SparkMapRecordHandler: Error processing row: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"hr":"11","hour":"11"}
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"hr":"11","hour":"11"}
>      at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:562)
>      at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:136)
>      at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
>      at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>      at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
>      at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
>      at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>      at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>      at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
>      at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
>      at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974)
>      at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974)
>      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>      at org.apache.spark.scheduler.Task.run(Task.scala:85)
>      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>      at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: Failed to retrieve dynamic value for RS_7_srcpart__col3_min
>      at org.apache.hadoop.hive.ql.plan.DynamicValue.getValue(DynamicValue.java:126)
>      at org.apache.hadoop.hive.ql.plan.DynamicValue.getWritableValue(DynamicValue.java:101)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeDynamicValueEvaluator._evaluate(ExprNodeDynamicValueEvaluator.java:51)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>      at org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan.evaluate(GenericUDFOPEqualOrGreaterThan.java:108)
>      at org.apache.hadoop.hive.ql.udf.generic.GenericUDFBetween.evaluate(GenericUDFBetween.java:57)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>      at org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.evaluate(GenericUDFOPAnd.java:63)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>      at org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.evaluate(GenericUDFOPAnd.java:63)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorHead._evaluate(ExprNodeEvaluatorHead.java:44)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:68)
>      at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:112)
>      at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)
>      at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
>      at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:148)
>      at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:547)
>      ... 17 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException
>      at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:62)
>      at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:51)
>      at org.apache.hadoop.hive.ql.exec.ObjectCacheWrapper.retrieve(ObjectCacheWrapper.java:40)
>      at org.apache.hadoop.hive.ql.plan.DynamicValue.getValue(DynamicValue.java:119)
>      ... 41 more
> Caused by: java.lang.NullPointerException
>      at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:60)
>      ... 44 more
> 17/05/27 23:39:45 ERROR Executor task launch worker-0 Executor: Exception in task 1.0 in stage 0.0 (TID 1)
> java.lang.RuntimeException: Error processing row: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"hr":"11","hour":"11"}
>      at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:149)
>      at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48)
>      at org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27)
>      at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85)
>      at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
>      at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>      at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
>      at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
>      at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$12.apply(AsyncRDDActions.scala:127)
>      at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974)
>      at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:1974)
>      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>      at org.apache.spark.scheduler.Task.run(Task.scala:85)
>      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>      at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"hr":"11","hour":"11"}
>      at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:562)
>      at org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:136)
>      ... 16 more
> Caused by: java.lang.IllegalStateException: Failed to retrieve dynamic value for RS_7_srcpart__col3_min
>      at org.apache.hadoop.hive.ql.plan.DynamicValue.getValue(DynamicValue.java:126)
>      at org.apache.hadoop.hive.ql.plan.DynamicValue.getWritableValue(DynamicValue.java:101)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeDynamicValueEvaluator._evaluate(ExprNodeDynamicValueEvaluator.java:51)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>      at org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPEqualOrGreaterThan.evaluate(GenericUDFOPEqualOrGreaterThan.java:108)
>      at org.apache.hadoop.hive.ql.udf.generic.GenericUDFBetween.evaluate(GenericUDFBetween.java:57)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>      at org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.evaluate(GenericUDFOPAnd.java:63)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator._evaluate(ExprNodeGenericFuncEvaluator.java:187)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:80)
>      at org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator$DeferredExprObject.get(ExprNodeGenericFuncEvaluator.java:88)
>      at org.apache.hadoop.hive
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
