chinnaraolalam commented on code in PR #4807:
URL: https://github.com/apache/hive/pull/4807#discussion_r1434875269
##########
ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java:
##########
@@ -4104,49 +4104,20 @@ public List<String> getPartitionNames(Table tbl, ExprNodeGenericFuncDesc expr, S
}
/**
- * get all the partitions that the table has
+ * get all the partitions that the table has along with auth info
*
* @param tbl
* object for which partition is needed
- * @return list of partition objects
+ * @return list of partition objects along with auth info
*/
public List<Partition> getPartitions(Table tbl) throws HiveException {
PerfLogger perfLogger = SessionState.getPerfLogger();
perfLogger.perfLogBegin(CLASS_NAME, PerfLogger.HIVE_GET_PARTITIONS);
-
try {
- if (tbl.isPartitioned()) {
- List<org.apache.hadoop.hive.metastore.api.Partition> tParts;
- try {
- GetPartitionsPsWithAuthRequest req = new GetPartitionsPsWithAuthRequest();
- req.setTblName(tbl.getTableName());
- req.setDbName(tbl.getDbName());
- req.setUserName(getUserName());
- req.setMaxParts((short) -1);
- req.setGroupNames(getGroupNames());
- if (AcidUtils.isTransactionalTable(tbl)) {
- ValidWriteIdList validWriteIdList = getValidWriteIdList(tbl.getDbName(), tbl.getTableName());
- req.setValidWriteIdList(validWriteIdList != null ? validWriteIdList.toString() : null);
- req.setId(tbl.getTTable().getId());
- }
- GetPartitionsPsWithAuthResponse res = getMSC().listPartitionsWithAuthInfoRequest(req);
- tParts = res.getPartitions();
-
- } catch (NoSuchObjectException nsoe) {
- return Lists.newArrayList();
- } catch (Exception e) {
- LOG.error("Failed getPartitions", e);
- throw new HiveException(e);
- }
- List<Partition> parts = new ArrayList<>(tParts.size());
- for (org.apache.hadoop.hive.metastore.api.Partition tpart : tParts) {
- parts.add(new Partition(tbl, tpart));
- }
-
- return parts;
- } else {
- return Collections.singletonList(new Partition(tbl));
- }
int batchSize = MetastoreConf.getIntVar(Hive.get().getConf(), MetastoreConf.ConfVars.BATCH_RETRIEVE_MAX);
Review Comment:
Now every call goes through the batched path with retry. Previously there was no batching or retry, so any caller whose partition data fit within 2GB could fetch it in a single call. Won't those calls now be slower, since they have to make more round trips?
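To make the round-trip concern concrete, here is a minimal sketch (not Hive code; the value 300 for `BATCH_RETRIEVE_MAX` is an assumed example, check your site's configured default) of how a batch size turns one metastore call into several:

```java
// Hypothetical illustration only: counts metastore round trips when a
// partition fetch is split into batches of at most `batchSize` entries,
// versus the single unbatched call made before this change.
public class BatchRoundTrips {

    // Number of calls needed to fetch `total` partitions, `batchSize` at a time.
    static int roundTrips(int total, int batchSize) {
        if (total == 0) {
            return 0;
        }
        // Ceiling division: the last batch may be partially full.
        return (total + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        // Previously: one call regardless of table size.
        // With an assumed batch size of 300, a table with 10,000 partitions
        // now takes 34 sequential calls.
        System.out.println(roundTrips(10_000, 300)); // prints 34
        System.out.println(roundTrips(250, 300));    // prints 1
    }
}
```

So tables small enough to fetch in one response pay at most one extra boundary effect, but very wide tables trade a single large response for many sequential round trips, which is the latency trade-off being asked about.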
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]