difin commented on code in PR #5792:
URL: https://github.com/apache/hive/pull/5792#discussion_r2087292173


##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/compaction/IcebergQueryCompactor.java:
##########
@@ -96,16 +108,39 @@ public boolean run(CompactorContext context) throws IOException, HiveException,
         throw new HiveException(ErrorMsg.COMPACTION_NO_PARTITION);
       }
     } else {
-      long partitionHash = IcebergTableUtil.getPartitionHash(icebergTable, partSpec);
+      Pair<Integer, StructProjection> partSpecPair =
+          IcebergTableUtil.getPartitionStructWithSpecId(icebergTable, partSpec);
+      int specId = partSpecPair.getKey();
+      StructProjection partition = partSpecPair.getValue();
+
+      HiveConf.setBoolVar(conf, ConfVars.HIVE_CONVERT_JOIN, false);

Review Comment:
   When map join conversion is left enabled, the compaction query fails during compilation with the following exception in `convertJoinOpMapJoinOp`:
   
   ```
   INSERT OVERWRITE TABLE default.ice_orc SELECT * FROM default.ice_orc
   WHERE FILE__PATH IN (SELECT FILE_PATH FROM default.ice_orc.FILES WHERE partition.event_src_trunc='AAA' AND SPEC_ID=0) AND PARTITION__SPEC__ID = 0
   
   INFO  : Compiling command(queryId=hive_20250507190138_8a53bfd0-9ed8-4951-9aa3-548e57b2ba93): INSERT OVERWRITE TABLE default.ice_orc SELECT d.* FROM default.ice_orc d, default.ice_orc.files f
   WHERE d.FILE__PATH = f.FILE_PATH and f.partition.event_src_trunc='AAA' AND f.SPEC_ID=0 AND d.PARTITION__SPEC__ID = 0
   INFO  : No Stats for default@ice_orc, Columns: event_id, event_src, event_time
   INFO  : No Stats for default@ice_orc, Columns: file_path, partition, spec_id
   INFO  : No Stats for default@ice_orc, Columns: partition
   ERROR : FAILED: NullPointerException null
   java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.plan.ExprNodeDescUtils.indexOf(ExprNodeDescUtils.java:77)
        at org.apache.hadoop.hive.ql.plan.ExprNodeDescUtils.indexOf(ExprNodeDescUtils.java:72)
        at org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.getMapJoinDesc(MapJoinProcessor.java:1311)
        at org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.convertJoinOpMapJoinOp(MapJoinProcessor.java:584)
        at org.apache.hadoop.hive.ql.optimizer.ConvertJoinMapJoin.convertJoinMapJoin(ConvertJoinMapJoin.java:1348)
        at org.apache.hadoop.hive.ql.optimizer.ConvertJoinMapJoin.process(ConvertJoinMapJoin.java:208)
        at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
        at org.apache.hadoop.hive.ql.lib.ForwardWalker.walk(ForwardWalker.java:74)
        at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
        at org.apache.hadoop.hive.ql.parse.TezCompiler.runStatsDependentOptimizations(TezCompiler.java:485)
        at org.apache.hadoop.hive.ql.parse.TezCompiler.optimizeOperatorPlan(TezCompiler.java:218)
        at org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:178)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:13763)
        at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:489)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
        at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:227)
        at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:108)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:203)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:651)
        at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:596)
        at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:590)
        at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
        at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:209)
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
   ```
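
   For illustration, the `HiveConf.setBoolVar(conf, ConfVars.HIVE_CONVERT_JOIN, false)` workaround in the diff corresponds to turning off automatic map join conversion for the compaction session. A rough session-level sketch of the same idea, assuming `ConfVars.HIVE_CONVERT_JOIN` is backed by the usual `hive.auto.convert.join` property:

   ```sql
   -- Sketch only: disable automatic map join conversion so the planner
   -- never enters convertJoinOpMapJoinOp for the rewritten IN-subquery join.
   SET hive.auto.convert.join=false;

   -- The compaction query from above then compiles as a plain shuffle join:
   INSERT OVERWRITE TABLE default.ice_orc SELECT * FROM default.ice_orc
   WHERE FILE__PATH IN (SELECT FILE_PATH FROM default.ice_orc.FILES WHERE partition.event_src_trunc='AAA' AND SPEC_ID=0) AND PARTITION__SPEC__ID = 0;
   ```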



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

