dengzhhu653 commented on a change in pull request #2473:
URL: https://github.com/apache/hive/pull/2473#discussion_r677261197



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
##########
@@ -2680,9 +2683,9 @@ public void run() {
                   total += estimator.estimate(jobConf, scanOp, -1).getTotalLength();
                 }
                 recordSummary(path, new ContentSummary(total, -1, -1));
-              } else {
-                // todo: should nullify summary for non-native tables,
-                // not to be selected as a mapjoin target
+              } else if (handler == null) {

Review comment:
       In the last two months, we found in our cluster that joining a Kudu table (MR 
engine) could make HiveServer2 OOM.  
[HMSHandler](https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java#L2385-L2407)
 seems to create a path for the non-native table, so `fs.getContentSummary(path)` 
returns a file length of 0 (when the data is not stored under the table dir, as with 
Kudu or HBase). The optimizer 
[collects](https://github.com/apache/hive/blob/7b3ecf617a6d46f48a3b6f77e0339fd4ad95a420/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/AbstractJoinTaskDispatcher.java#L118)
 the sizes of the join tables before converting the join to a mapjoin (by default). If 
the non-native table is eligible to be a stream table (e.g. in an inner join) and the 
reported length of its data is 0, the non-native table may be tagged as the small 
table, and a local task will be started to build the hashtable. If the non-native 
table has millions of records, the JVM may not be able to hold all of them in memory.
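
A minimal sketch of the failure mode described above (this is not Hive's actual code; the class and table names are hypothetical): the size-based mapjoin conversion picks the table with the smallest reported size as the hashtable side, so a non-native table whose content summary is misreported as 0 always wins the comparison, no matter how many records it actually holds.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the size comparison done before a
// join is converted to a mapjoin. Not the real Hive implementation.
public class MapJoinSizeSketch {

    // Pick the table with the smallest reported on-disk size as the
    // "small" side whose rows are loaded into an in-memory hashtable.
    static String pickSmallTable(Map<String, Long> reportedSizes) {
        return reportedSizes.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        Map<String, Long> sizes = new HashMap<>();
        // Native table: fs.getContentSummary(path) sees the real data files.
        sizes.put("orders", 10_000_000L);
        // Non-native table (e.g. Kudu): the table dir is empty, so the
        // content summary reports 0 even though it has millions of rows.
        sizes.put("kudu_events", 0L);

        // The huge non-native table is wrongly chosen as the small table,
        // and a local task would try to build a hashtable from it.
        System.out.println(pickSmallTable(sizes));
    }
}
```

Nullifying the summary for non-native tables without a size estimator (the `else if (handler == null)` branch in the diff) keeps them out of this comparison, so they can never be tagged as the small table.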




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


