[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447509066



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
     // The initial value for write id should be 1 and hence we add 1 with number of write ids
     // allocated here
     String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", \"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-        + Long.toString(rqst.getSeeWriteId() + 1) + ")";
-    pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, Arrays.asList(rqst.getDbName(), rqst.getTblName()));
+        + Long.toString(rqst.getSeedWriteId() + 1) + ")";
+    pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, Arrays.asList(rqst.getDbName(), rqst.getTableName()));
     LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + ">",
-        quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+        quoteString(rqst.getDbName()), quoteString(rqst.getTableName()));
     pst.execute();
     LOG.debug("Going to commit");
     dbConn.commit();
   } catch (SQLException e) {
-    LOG.debug("Going to rollback");
     rollbackDBConn(dbConn);
-    checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-    throw new MetaException("Unable to update transaction database "
-        + StringUtils.stringifyException(e));
+    checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+    throw new MetaException("Unable to update transaction database " + StringUtils.stringifyException(e));
   } finally {
     close(null, pst, dbConn);
     unlockInternal();
   }
 } catch (RetryException e) {
-  seedWriteIdOnAcidConversion(rqst);
+  seedWriteId(rqst);
 }
+  }
+
+  @Override
+  public void seedTxnId(SeedTxnIdRequest rqst) throws MetaException {
+    try {
+      Connection dbConn = null;
+      Statement stmt = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+        stmt = dbConn.createStatement();
+        /*
+         * Locking the txnLock an exclusive way, we do not want to set the txnId backward accidentally
+         * if there are concurrent open transactions
+         */
+        acquireTxnLock(stmt, false);
+        long highWaterMark = getHighWaterMark(stmt);
+        if (highWaterMark >= rqst.getSeedTxnId()) {

Review comment:
   I don't quite understand this if condition. There is already a check in 
validateAndAddMaxTxnIdAndWriteId(): if some write ids are already registered 
in HMS and we try to do a repair, an exception is thrown. Could it happen, 
due to the lack of locking, that there is nothing in HMS when we calculate 
the write ids, but by the time we try to seed, some concurrent transaction 
has generated a new write id? Would that cause data loss or other issues?
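
   For context, a minimal sketch of the guard under discussion, reusing acquireTxnLock and getHighWaterMark as they appear in the diff (exclusive global txn lock, then read the largest txnId handed out so far); the exception message and the TxnDbUtil.seedTxnSequence call are assumptions based on the rest of this thread, not the exact PR code:

    // Exclusive lock: no concurrent transaction can be assigned a new txnId
    // while we compare the current high water mark against the seed.
    acquireTxnLock(stmt, false);
    long highWaterMark = getHighWaterMark(stmt);
    if (highWaterMark >= rqst.getSeedTxnId()) {
      // Some transaction already advanced the sequence past the requested
      // seed; seeding now would move the txnId sequence backward, so fail.
      throw new MetaException("Invalid txnId seed " + rqst.getSeedTxnId()
          + ", the current high water mark is " + highWaterMark);
    }
    TxnDbUtil.seedTxnSequence(dbConn, stmt, rqst.getSeedTxnId());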








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447484596



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
+  public void seedTxnId(SeedTxnIdRequest rqst) throws MetaException {
+    try {
+      Connection dbConn = null;
+      Statement stmt = null;
+      try {
+        lockInternal();

Review comment:
   same as my other lockInternal comment: could you please check whether this is needed?








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447484200



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
   } finally {
     close(null, pst, dbConn);
     unlockInternal();

Review comment:
   not sure why it is used here








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447483691



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
       return allocateTableWriteIds(rqst);
     }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+    String dbName = rqst.getDbName();
+    String tableName = rqst.getTableName();
+    try {
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+        List params = Arrays.asList(dbName, tableName);
+        String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+        pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);

Review comment:
   minor: you could simply pass the params as Arrays.asList(rqst.getDbName(), rqst.getTableName()) instead of introducing so many local vars
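
   A minimal sketch of the suggested inlining, using the same identifiers as the diff above:

    // No dbName/tableName/params locals; the argument list is built inline.
    pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query,
        Arrays.asList(rqst.getDbName(), rqst.getTableName()));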








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447482052



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);

Review comment:
   minor: I would use try-with-resources instead of doing explicit resource management
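
   A minimal sketch of what try-with-resources would look like here, reusing the names from the diff; the lockInternal()/unlockInternal() bracket and the retry handling are elided:

    String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
    try (Connection dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
         PreparedStatement pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query,
             Arrays.asList(rqst.getDbName(), rqst.getTableName()));
         ResultSet rs = pStmt.executeQuery()) {
      // rs, pStmt and dbConn are closed automatically in reverse order,
      // even if executeQuery or the body throws.
      long maxWriteId = rs.next() ? rs.getLong(1) - 1 : 0L;
      return new MaxAllocatedTableWriteIdResponse(maxWriteId);
    }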








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447480415



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
+      try {
+        lockInternal();

Review comment:
   lockInternal is required only for Derby, to simulate SELECT ... FOR UPDATE (S4U); why use it here? unlockInternal is not needed either
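
   For context, a rough sketch of the pattern the comment refers to: Derby has no usable SELECT ... FOR UPDATE, so TxnHandler serializes such critical sections behind a JVM-wide mutex. The field name and method bodies below are illustrative, not the exact Hive code:

    // Illustrative only: a fair JVM-wide lock standing in for the row-level
    // "SELECT ... FOR UPDATE" (S4U) semantics that Derby cannot provide.
    private static final java.util.concurrent.locks.ReentrantLock derbyLock =
        new java.util.concurrent.locks.ReentrantLock(true);

    private void lockInternal() {
      if (DatabaseProduct.DERBY == dbProduct) {
        derbyLock.lock();   // effectively a no-op for every other database
      }
    }

    private void unlockInternal() {
      if (DatabaseProduct.DERBY == dbProduct) {
        derbyLock.unlock();
      }
    }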











[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447478910



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
       return allocateTableWriteIds(rqst);
     }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+    String dbName = rqst.getDbName();
+    String tableName = rqst.getTableName();
+    try {
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+        List params = Arrays.asList(dbName, tableName);
+        String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+        pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);
+        LOG.debug("Going to execute query <" + query.replaceAll("\\?", "{}") + ">", quoteString(dbName),
+            quoteString(tableName));
+        rs = pStmt.executeQuery();
+        // If there is no record, we never allocated anything
+        long maxWriteId = 0l;
+        if (rs.next()) {
+          // The row contains the nextId not the previously allocated
+          maxWriteId = rs.getLong(1) - 1;
+        }
+        return new MaxAllocatedTableWriteIdResponse(maxWriteId);
+      } catch (SQLException e) {
+        LOG.error(
+            "Exception during reading the max allocated writeId for dbName={}, tableName={}. Will retry if possible.",
+            dbName, tableName, e);
+        rollbackDBConn(dbConn);

Review comment:
   what is there to roll back? You only have a SELECT here








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447477522



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
+        List params = Arrays.asList(dbName, tableName);
+        String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";

Review comment:
   should we have a query constant?
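
   A minimal sketch of the suggested extraction; the constant name is illustrative, not from the PR:

    private static final String SELECT_NEXT_WRITE_ID =
        "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";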








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447476089



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnDbUtil.java
##
@@ -313,6 +313,41 @@ private static void resetTxnSequence(Connection conn, Statement stmt) throws SQL
     }
   }
 
+  /**
+   * Restarts the txnId sequence with the given seed value.
+   * It is the responsibility of the caller to not set the sequence backward.
+   * @param conn database connection
+   * @param stmt sql statement
+   * @param seedTxnId the seed value for the sequence
+   * @throws SQLException ex
+   */
+  public static void seedTxnSequence(Connection conn, Statement stmt, long seedTxnId) throws SQLException {
+    String dbProduct = conn.getMetaData().getDatabaseProductName();
+    DatabaseProduct databaseProduct = determineDatabaseProduct(dbProduct);
+    switch (databaseProduct) {
+
+    case DERBY:

Review comment:
   minor: I would probably create an EnumMap for the seed statements and pick the proper one based on the db type.
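
   A rough sketch of the suggested refactor, assuming DatabaseProduct is the existing enum; the map name and the SQL templates are illustrative placeholders, not the statements from the PR:

    // Illustrative EnumMap lookup keyed by database product.
    private static final EnumMap<DatabaseProduct, String> SEED_SQL =
        new EnumMap<>(DatabaseProduct.class);
    static {
      SEED_SQL.put(DatabaseProduct.DERBY, "ALTER TABLE \"TXNS\" ALTER \"TXN_ID\" RESTART WITH %s");
      SEED_SQL.put(DatabaseProduct.POSTGRES, "ALTER SEQUENCE \"TXNS_TXN_ID_seq\" RESTART WITH %s");
      // ... one entry per supported DatabaseProduct
    }

    public static void seedTxnSequence(Connection conn, Statement stmt, long seedTxnId) throws SQLException {
      DatabaseProduct product = determineDatabaseProduct(conn.getMetaData().getDatabaseProductName());
      String sql = SEED_SQL.get(product);
      if (sql == null) {
        throw new SQLException("Unknown database product: " + product);
      }
      stmt.execute(String.format(sql, seedTxnId));
    }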








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447472989



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -383,6 +475,7 @@ public Void execute(int size) throws MetastoreException {
   partsToAdd.add(partition);
   lastBatch.add(part);
   addMsgs.add(String.format(addMsgFormat, part.getPartitionName()));
+  LOG.debug(String.format(addMsgFormat, part.getPartitionName()));

Review comment:
   why not log the content of addMsgs instead?
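
   A minimal sketch of the alternative, assuming addMsgs already holds the formatted messages for the batch:

    // Illustrative: one debug line for the whole batch instead of one per partition.
    LOG.debug("Partitions added in this batch: {}", addMsgs);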








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447468545



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
             throw new MetastoreException(e);
           }
         }
+        if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+          if (result.getMaxWriteId() > 0) {
+            if (txnId < 0) {
+              // We need the txnId to check against even if we didn't do the locking
+              txnId = getMsc().openTxn(getUserName());
+            }
+
+            validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), result.getMaxTxnId(),
+                table.getDbName(), table.getTableName(), txnId);
+          }
+        }
       }
       success = true;
     } catch (Exception e) {
       LOG.warn("Failed to run metacheck: ", e);
       success = false;
-      ret = 1;
     } finally {
-      if (msckInfo.getResFile() != null) {
-        BufferedWriter resultOut = null;
-        try {
-          Path resFile = new Path(msckInfo.getResFile());
-          FileSystem fs = resFile.getFileSystem(getConf());
-          resultOut = new BufferedWriter(new OutputStreamWriter(fs.create(resFile)));
-
-          boolean firstWritten = false;
-          firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-            "Tables not in metastore:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-            "Tables missing on filesystem:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-            "Partitions not in metastore:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-            "Partitions missing from filesystem:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-            "Expired partitions (retention period: " + partitionExpirySeconds + "s) :", resultOut, firstWritten);
-          // sorting to stabilize qfile output (msck_repair_drop.q)
-          Collections.sort(repairOutput);
-          for (String rout : repairOutput) {
-            if (firstWritten) {
-              resultOut.write(terminator);
-            } else {
-              firstWritten = true;
-            }
-            resultOut.write(rout);
-          }
-        } catch (IOException e) {
-          LOG.warn("Failed to save metacheck output: ", e);
-          ret = 1;
-        } finally {
-          if (resultOut != null) {
-            try {
-              resultOut.close();
-            } catch (IOException e) {
-              LOG.warn("Failed to close output file: ", e);
-              ret = 1;
-            }
-          }
+      if (result!=null) {
+        logResult(result);
+        if (msckInfo.getResFile() != null) {
+          success = writeResultToFile(msckInfo, result, repairOutput, partitionExpirySeconds) && success;
         }
       }
 
-      LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-      LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-      LOG.info("Partitions not in metastore: {}", result.getPartitionsNotInMs());
-      LOG.info("Partitions missing from filesystem: {}", result.getPartitionsNotOnFs());
-      LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-      if (acquireLock && txnId > 0) {
-          if (success) {
-            try {
-              LOG.info("txnId: {} succeeded. Committing..", txnId);
-              getMsc().commitTxn(txnId);
-            } catch (Exception e) {
-              LOG.warn("Error while committing txnId: {} for table: {}", txnId, qualifiedTableName, e);
-              ret = 1;
-            }
-          } else {
-            try {
-              LOG.info("txnId: {} failed. Aborting..", txnId);
-              getMsc().abortTxns(Lists.newArrayList(txnId));
-            } catch (Exception e) {
-              LOG.warn("Error while aborting txnId: {} for table: {}", txnId, qualifiedTableName, e);
-              ret = 1;
-            }
-          }
+      if (txnId > 0) {
+        success = closeTxn(qualifiedTableName, success, txnId) && success;
       }
       if (getMsc() != null) {
         getMsc().close();
         msc = null;
       }
     }
+    return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, boolean success, long txnId) {
+    boolean ret = true;
+    if (success) {
+      try {
+        LOG.info("txnId: {} succeeded. Committing..", txnId);
+        getMsc().commitTxn(txnId);
+      } catch (Exception e) {
+        LOG.warn("Error while committing txnId: {} for table: {}", txnId, qualifiedTableName, e);
+        ret = false;
+      }
+    } else {
+      try {
+        LOG.info("txnId: {} failed. Aborting..", txnId);
+        getMsc().abortTxns(Lists.newArrayList(txnId));
+      } catch (Exception e) {
+        LOG.warn("Error while aborting txnId: {} for table: {}", txnId, qualifiedTableName, e);
+        ret = false;
+      }
+    }
+    return ret;
+  }

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447462171



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447033768



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set partPaths,
     LOG.debug("Number of partitions not in metastore : " + result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, CheckResult.PartitionResult res) throws IOException {
+    FileSystem fs = partPath.getFileSystem(conf);
+    FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, HIDDEN_FILES_PATH_FILTER);
+
+    // Read the writeIds from every base and delta directory and find the max
+    long maxWriteId = 0L;
+    long maxVisibilityId = 0L;
+    for(FileStatus fileStatus : deltaOrBaseFiles) {
+      if (!fileStatus.isDirectory()) {
+        continue;
+      }
+      long writeId = 0L;
+      long visibilityId = 0L;
+      String folder = fileStatus.getPath().getName();
+      if (folder.startsWith(BASE_PREFIX)) {
+        visibilityId = getVisibilityTxnId(folder);
+        if (visibilityId > 0) {
+          folder = removeVisibilityTxnId(folder);
+        }
+        writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+      } else if (folder.startsWith(DELTA_PREFIX) || folder.startsWith(DELETE_DELTA_PREFIX)) {
+        // See AcidUtils.parseDelta
+        visibilityId = getVisibilityTxnId(folder);
+        if (visibilityId > 0) {
+          folder = removeVisibilityTxnId(folder);
+        }
+        boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+        String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : DELTA_PREFIX).length());
+        int split = rest.indexOf('_');
+        // split2 may be -1 if no statementId
+        int split2 = rest.indexOf('_', split + 1);
+        // We always want the second part (it is either the same or greater if it is a compacted delta)
+        writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : Long
+            .parseLong(rest.substring(split + 1, split2));
+      }
+      if (writeId > maxWriteId) {
+        maxWriteId = writeId;
+      }
+      if (visibilityId > maxVisibilityId) {
+        maxVisibilityId = visibilityId;
+      }
+    }
+    LOG.debug("Max writeId {}, max txnId {} found in partition {}", maxWriteId, maxVisibilityId,
+        partPath.toUri().toString());
+    res.setMaxWriteId(maxWriteId);
+    res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+    int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   why not use a regex with pattern matching? removeVisibilityTxnId probably wouldn't even be needed then
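
   A minimal sketch of the regex alternative, assuming VISIBILITY_PREFIX is the "_v" suffix marker in ACID directory names (e.g. base_0000005_v0000009); the pattern and method shape are illustrative:

    // Illustrative: group 1 is the folder name with the visibility suffix
    // stripped (making removeVisibilityTxnId redundant), group 2 the txnId.
    private static final java.util.regex.Pattern VISIBILITY_PATTERN =
        java.util.regex.Pattern.compile("^(.*)_v(\\d+)$");

    private long getVisibilityTxnId(String folder) {
      java.util.regex.Matcher m = VISIBILITY_PATTERN.matcher(folder);
      return m.matches() ? Long.parseLong(m.group(2)) : 0L;
    }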








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447053621



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447052663



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447050970



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447046016



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447044667



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
+  private boolean closeTxn(String qualifiedTableName, boolean success, long txnId) {
+    boolean ret = true;
+    if (success) {
+      try {
+        LOG.info("txnId: {} succeeded. Committing..", txnId);
+        getMsc().commitTxn(txnId);
+      } catch (Exception e) {
+        LOG.warn("Error while committing txnId: {} for table: {}", txnId, qualifiedTableName, e);

Review comment:
   should we use error level here?





[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447044866



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;
   }
   if (getMsc() != null) {
 getMsc().close();
 msc = null;
   }
 }
+return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, boolean success, long 
txnId) {
+boolean ret = true;
+if (success) {
+  try {
+LOG.info("txnId: {} succeeded. Committing..", txnId);
+getMsc().commitTxn(txnId);
+  } catch (Exception e) {
+LOG.warn("Error while committing txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
+ret = false;
+  }
+} else {
+  try {
+LOG.info("txnId: {} failed. 

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447042125



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;

Review comment:
   same here: success &= closeTxn(qualifiedTableName, success, txnId)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447041559



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;

Review comment:
   you can do success &= writeResultToFile(msckInfo, result, repairOutput, partitionExpirySeconds)
   for readability
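
A self-contained sketch of the suggested compound assignment; writeResult() is a hypothetical stand-in for writeResultToFile(...):

    public class SuccessFlagDemo {
      // stand-in for writeResultToFile(...); fails here to show the accumulation
      static boolean writeResult() {
        return false;
      }

      public static void main(String[] args) {
        boolean success = true;
        // same result as: success = writeResult() && success;
        success &= writeResult();
        System.out.println(success); // false
      }
    }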





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447037968



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {

Review comment:
   not formatted: missing spaces around the operator, should be result != null





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447037520



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {

Review comment:
   you can remove one nesting level by merging the two if conditions
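
A minimal sketch of the flattening being suggested; the booleans here are stand-ins for the calls in the quoted diff, not the patch's actual code:

    public class FlattenNestingDemo {
      public static void main(String[] args) {
        boolean transactionalTable = true;
        boolean partitioned = false; // stand-in for MetaStoreServerUtils.isPartitioned(table)
        long maxWriteId = 101L;      // stand-in for result.getMaxWriteId()
        // one combined condition instead of two nested ifs
        if (transactionalTable && !partitioned && maxWriteId > 0) {
          System.out.println("open txn if needed, then validate max txn/write ids");
        }
      }
    }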





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447033768



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');
+//split2 may be -1 if no statementId
+int split2 = rest.indexOf('_', split + 1);
+// We always want the second part (it is either the same or greater if 
it is a compacted delta)
+writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : 
Long
+.parseLong(rest.substring(split + 1, split2));
+  }
+  if (writeId > maxWriteId) {
+maxWriteId = writeId;
+  }
+  if (visibilityId > maxVisibilityId) {
+maxVisibilityId = visibilityId;
+  }
+}
+LOG.debug("Max writeId {}, max txnId {} found in partition {}", 
maxWriteId, maxVisibilityId,
+partPath.toUri().toString());
+res.setMaxWriteId(maxWriteId);
+res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   why not use regex? removeVisibilityTxnId probably wouldn't even be needed
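
One way the regex approach could look; the "_v<digits>" suffix pattern is an assumption based on the VISIBILITY_PREFIX handling in the quoted code, not the patch's actual implementation:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class VisibilityTxnIdDemo {
      // assumed convention: an optional "_v<txnId>" suffix on base/delta dir names
      private static final Pattern VISIBILITY_SUFFIX = Pattern.compile("^(.*)_v(\\d+)$");

      public static void main(String[] args) {
        Matcher m = VISIBILITY_SUFFIX.matcher("base_0000010_v0000123");
        if (m.matches()) {
          long visibilityTxnId = Long.parseLong(m.group(2)); // 123
          String stripped = m.group(1);                      // "base_0000010"
          System.out.println(visibilityTxnId + " " + stripped);
        }
      }
    }

Group 1 already yields the name without the suffix, which is why a separate removeVisibilityTxnId step would not be needed.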





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447033768



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');
+//split2 may be -1 if no statementId
+int split2 = rest.indexOf('_', split + 1);
+// We always want the second part (it is either the same or greater if 
it is a compacted delta)
+writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : 
Long
+.parseLong(rest.substring(split + 1, split2));
+  }
+  if (writeId > maxWriteId) {
+maxWriteId = writeId;
+  }
+  if (visibilityId > maxVisibilityId) {
+maxVisibilityId = visibilityId;
+  }
+}
+LOG.debug("Max writeId {}, max txnId {} found in partition {}", 
maxWriteId, maxVisibilityId,
+partPath.toUri().toString());
+res.setMaxWriteId(maxWriteId);
+res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   why not use regex?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447028756



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');

Review comment:
   why not use rest.split("_")? (String.split takes a String, not a char)

##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');

Review comment:
   why not use rest.split("_")?
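
A small self-contained sketch of the split-based parsing; the sample name is illustrative, and the field layout follows the comment in the quoted diff:

    public class DeltaNameParseDemo {
      public static void main(String[] args) {
        // after stripping the "delta_" prefix, names look like <min>_<max>[_<stmtId>]
        String rest = "0000010_0000015_0007";
        String[] parts = rest.split("_");
        // the max writeId is always the second field
        long maxWriteId = Long.parseLong(parts[1]);
        System.out.println(maxWriteId); // 15
      }
    }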





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447003871



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -111,24 +120,24 @@ public IMetaStoreClient getMsc() {
* @param partitions
*  List of partition name value pairs, if null or empty check all
*  partitions
-   * @param table
-   * @param result
-   *  Fill this with the results of the check
+   * @param table Table we want to run the check for.
+   * @return Results of the check
* @throws MetastoreException
*   Failed to get required information from the metastore.
* @throws IOException
*   Most likely filesystem related
*/
-  public void checkMetastore(String catName, String dbName, String tableName,
-  List<Map<String, String>> partitions, Table table, CheckResult result)
+  public CheckResult checkMetastore(String catName, String dbName, String tableName,
+  List<Map<String, String>> partitions, Table table)
   throws MetastoreException, IOException {
-
+CheckResult result = new CheckResult();
 if (dbName == null || "".equalsIgnoreCase(dbName)) {
   dbName = Warehouse.DEFAULT_DATABASE_NAME;
 }
 
 try {
   if (tableName == null || "".equals(tableName)) {
+// TODO: I do not think this is used by anything other than tests

Review comment:
   should we answer this question in the current patch?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447001420



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
##
@@ -8322,6 +8322,22 @@ public AllocateTableWriteIdsResponse 
allocate_table_write_ids(
   return response;
 }
 
+@Override
+public MaxAllocatedTableWriteIdResponse 
get_max_allocated_table_write_id(MaxAllocatedTableWriteIdRequest rqst)

Review comment:
   weird mix of Camel case and underscore?

##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
##
@@ -8322,6 +8322,22 @@ public AllocateTableWriteIdsResponse 
allocate_table_write_ids(
   return response;
 }
 
+@Override
+public MaxAllocatedTableWriteIdResponse 
get_max_allocated_table_write_id(MaxAllocatedTableWriteIdRequest rqst)

Review comment:
   weird mix of Camel case and underscore





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-29 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447001420



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
##
@@ -8322,6 +8322,22 @@ public AllocateTableWriteIdsResponse 
allocate_table_write_ids(
   return response;
 }
 
+@Override
+public MaxAllocatedTableWriteIdResponse 
get_max_allocated_table_write_id(MaxAllocatedTableWriteIdRequest rqst)

Review comment:
   why not Camel case?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446134506



##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -74,21 +76,21 @@
   @Before
   public void setUp() throws Exception {
 hive = Hive.get();
-hive.getConf().setIntVar(HiveConf.ConfVars.METASTORE_FS_HANDLER_THREADS_COUNT, 15);
-hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, "throw");
+hive.getConf().set(MetastoreConf.ConfVars.FS_HANDLER_THREADS_COUNT.getVarname(), "15");

Review comment:
   why not setIntVar?
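
For illustration, a typed-setter version of the quoted test line; this assumes MetastoreConf's setLongVar covers this variable (the exact setter is an assumption, not taken from the patch):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
    import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;

    public class TypedConfDemo {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // typed setter instead of the stringly-typed set(getVarname(), "15")
        MetastoreConf.setLongVar(conf, ConfVars.FS_HANDLER_THREADS_COUNT, 15);
        System.out.println(MetastoreConf.getIntVar(conf, ConfVars.FS_HANDLER_THREADS_COUNT));
      }
    }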





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446143608



##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -252,37 +241,165 @@ public void testInvalidPartitionKeyName()
   @Test
   public void testSkipInvalidPartitionKeyName()
 throws HiveException, AlreadyExistsException, IOException, 
MetastoreException {
-hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, "skip");
+hive.getConf().set(MetastoreConf.ConfVars.MSCK_PATH_VALIDATION.getVarname(), "skip");
 checker = new HiveMetaStoreChecker(msc, hive.getConf());
-Table table = createTestTable();
+Table table = createTestTable(false);
 List<Partition> partitions = hive.getPartitions(table);
 assertEquals(2, partitions.size());
 // add a fake partition dir on fs
 fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
-Path fakePart =
-new Path(table.getDataLocation().toString(), 
"fakedate=2009-01-01/fakecity=sanjose");
-fs.mkdirs(fakePart);
-fs.deleteOnExit(fakePart);
+addFolderToPath(fs, 
table.getDataLocation().toString(),"fakedate=2009-01-01/fakecity=sanjose");
 createPartitionsDirectoriesOnFS(table, 2);
-CheckResult result = new CheckResult();
-checker.checkMetastore(catName, dbName, tableName, null, null, result);
+CheckResult result = checker.checkMetastore(catName, dbName, tableName, 
null, null);
 assertEquals(Collections.<String> emptySet(), result.getTablesNotInMs());
 assertEquals(Collections.<String> emptySet(), result.getTablesNotOnFs());
 assertEquals(Collections.<CheckResult.PartitionResult> emptySet(), result.getPartitionsNotOnFs());
 // only 2 valid partitions should be added
 assertEquals(2, result.getPartitionsNotInMs().size());
   }
 
-  private Table createTestTable() throws HiveException, AlreadyExistsException 
{
+  /*
+   * Tests the case when we have normal delta_dirs in the partition folder
+   * does not throw HiveException
+   */
+  @Test
+  public void testAddPartitionNormalDeltas() throws Exception {
+Table table = createTestTable(true);
+List<Partition> partitions = hive.getPartitions(table);
+assertEquals(2, partitions.size());
+// add a partition dir on fs
+fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
+Path newPart = addFolderToPath(fs, table.getDataLocation().toString(),
+partDateName + "=2017-01-01/" + partCityName + "=paloalto");
+
+// Add a few deltas
+addFolderToPath(fs, newPart.toString(), "delta_001_001_");
+addFolderToPath(fs, newPart.toString(), "delta_010_010_");
+addFolderToPath(fs, newPart.toString(), "delta_101_101_");
+CheckResult result = checker.checkMetastore(catName, dbName, tableName, 
null, null);
+assertEquals(Collections.<CheckResult.PartitionResult> emptySet(), result.getPartitionsNotOnFs());
+assertEquals(1, result.getPartitionsNotInMs().size());
+// Found the highest writeId
+assertEquals(101, 
result.getPartitionsNotInMs().iterator().next().getMaxWriteId());
+assertEquals(0, 
result.getPartitionsNotInMs().iterator().next().getMaxTxnId());
+  }
+  /*
+   * Tests the case when we have normal delta_dirs in the partition folder
+   * does not throw HiveException
+   */
+  @Test
+  public void testAddPartitionCompactedDeltas() throws Exception {
+Table table = createTestTable(true);
+List<Partition> partitions = hive.getPartitions(table);
+assertEquals(2, partitions.size());
+// add a partition dir on fs
+fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
+Path newPart = addFolderToPath(fs, table.getDataLocation().toString(),
+partDateName + "=2017-01-01/" + partCityName + "=paloalto");
+
+// Add a few deltas
+addFolderToPath(fs, newPart.toString(), "delta_001_001_");
+addFolderToPath(fs, newPart.toString(), "delta_010_015_v067");
+addFolderToPath(fs, newPart.toString(), "delta_101_120_v087");
+CheckResult result = checker.checkMetastore(catName, dbName, tableName, 
null, null);
+assertEquals(Collections.<CheckResult.PartitionResult> emptySet(), result.getPartitionsNotOnFs());
+assertEquals(1, result.getPartitionsNotInMs().size());
+// Found the highest writeId
+assertEquals(120, 
result.getPartitionsNotInMs().iterator().next().getMaxWriteId());
+assertEquals(87, 
result.getPartitionsNotInMs().iterator().next().getMaxTxnId());
+  }
+  @Test
+  public void testAddPartitionCompactedBase() throws Exception {
+Table table = createTestTable(true);
+List<Partition> partitions = hive.getPartitions(table);
+assertEquals(2, partitions.size());
+// add a partition dir on fs
+fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
+Path newPart = addFolderToPath(fs, table.getDataLocation().toString(),
+partDateName + "=2017-01-01/" + partCityName + "=paloalto");
+
+// Add a few 

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446134506



##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -74,21 +76,21 @@
   @Before
   public void setUp() throws Exception {
 hive = Hive.get();
-hive.getConf().setIntVar(HiveConf.ConfVars.METASTORE_FS_HANDLER_THREADS_COUNT, 15);
-hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, "throw");
+hive.getConf().set(MetastoreConf.ConfVars.FS_HANDLER_THREADS_COUNT.getVarname(), "15");

Review comment:
   setIntVar





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446133519



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TxnCommandsBaseForTests.java
##
@@ -162,9 +163,23 @@ protected String getWarehouseDir() {
* takes raw data and turns it into a string as if from Driver.getResults()
* sorts rows in dictionary order
*/
-  List<String> stringifyValues(int[][] rowsIn) {
-return TestTxnCommands2.stringifyValues(rowsIn);
+  public static List<String> stringifyValues(int[][] rowsIn) {
+assert rowsIn.length > 0;
+int[][] rows = rowsIn.clone();
+Arrays.sort(rows, new TestTxnCommands2.RowComp());

Review comment:
   move RowComp from TestTxnCommands2 to TxnCommandsBaseForTests
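
For reference, a minimal sketch of what the moved comparator would do; the lexicographic behavior of RowComp is assumed from how stringifyValues sorts rows, not copied from the class:

    import java.util.Arrays;
    import java.util.Comparator;

    public class RowCompDemo {
      // assumed RowComp behavior: lexicographic comparison of int[] rows
      static final Comparator<int[]> ROW_COMP = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
          int c = Integer.compare(a[i], b[i]);
          if (c != 0) {
            return c;
          }
        }
        return Integer.compare(a.length, b.length);
      };

      public static void main(String[] args) {
        int[][] rows = {{2, 1}, {1, 3}, {1, 2}};
        Arrays.sort(rows, ROW_COMP);
        System.out.println(Arrays.deepToString(rows)); // [[1, 2], [1, 3], [2, 1]]
      }
    }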





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446131988



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
##
@@ -2209,20 +2209,7 @@ public void testAcidOrcWritePreservesFieldNames() throws 
Exception {
* sorts rows in dictionary order
*/
   static List<String> stringifyValues(int[][] rowsIn) {
-assert rowsIn.length > 0;
-int[][] rows = rowsIn.clone();
-Arrays.sort(rows, new RowComp());
-List<String> rs = new ArrayList<String>();
-for(int[] row : rows) {
-  assert row.length > 0;
-  StringBuilder sb = new StringBuilder();
-  for(int value : row) {
-sb.append(value).append("\t");
-  }
-  sb.setLength(sb.length() - 1);
-  rs.add(sb.toString());
-}
-return rs;
+return TxnCommandsBaseForTests.stringifyValues(rowsIn);

Review comment:
   could you extend TestTxnCommands2 from TxnCommandsBaseForTests and remove the static modifier?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446131988



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
##
@@ -2209,20 +2209,7 @@ public void testAcidOrcWritePreservesFieldNames() throws 
Exception {
* sorts rows in dictionary order
*/
   static List<String> stringifyValues(int[][] rowsIn) {
-assert rowsIn.length > 0;
-int[][] rows = rowsIn.clone();
-Arrays.sort(rows, new RowComp());
-List<String> rs = new ArrayList<String>();
-for(int[] row : rows) {
-  assert row.length > 0;
-  StringBuilder sb = new StringBuilder();
-  for(int value : row) {
-sb.append(value).append("\t");
-  }
-  sb.setLength(sb.length() - 1);
-  rs.add(sb.toString());
-}
-return rs;
+return TxnCommandsBaseForTests.stringifyValues(rowsIn);

Review comment:
   remove it and use a static call





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446126904



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot 
getTableSnapshot(Configuration conf,
 long writeId = -1;
 ValidWriteIdList validWriteIdList = null;
 
-HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-String fullTableName = getFullTableName(dbName, tblName);
-if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-  validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-  if (isStatsUpdater) {
-writeId = SessionState.get().getTxnMgr() != null ?
-SessionState.get().getTxnMgr().getAllocatedTableWriteId(
-  dbName, tblName) : -1;
-if (writeId < 1) {
-  // TODO: this is not ideal... stats updater that doesn't have write 
ID is currently
-  //   "create table"; writeId would be 0/-1 here. No need to call 
this w/true.
-  LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-  dbName, tblName, writeId);
+if (SessionState.get() != null) {
+  HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+  String fullTableName = getFullTableName(dbName, tblName);
+  if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+if (isStatsUpdater) {
+  writeId = sessionTxnMgr != null ? 
sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;
+  if (writeId < 1) {

Review comment:
   is it ever a valid condition?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446125639



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot 
getTableSnapshot(Configuration conf,
 long writeId = -1;
 ValidWriteIdList validWriteIdList = null;
 
-HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-String fullTableName = getFullTableName(dbName, tblName);
-if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-  validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-  if (isStatsUpdater) {
-writeId = SessionState.get().getTxnMgr() != null ?
-SessionState.get().getTxnMgr().getAllocatedTableWriteId(
-  dbName, tblName) : -1;
-if (writeId < 1) {
-  // TODO: this is not ideal... stats updater that doesn't have write 
ID is currently
-  //   "create table"; writeId would be 0/-1 here. No need to call 
this w/true.
-  LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-  dbName, tblName, writeId);
+if (SessionState.get() != null) {

Review comment:
   I think it can't be null; it returns a ThreadLocal variable





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446123907



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot 
getTableSnapshot(Configuration conf,
 long writeId = -1;
 ValidWriteIdList validWriteIdList = null;
 
-HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-String fullTableName = getFullTableName(dbName, tblName);
-if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-  validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-  if (isStatsUpdater) {
-writeId = SessionState.get().getTxnMgr() != null ?
-SessionState.get().getTxnMgr().getAllocatedTableWriteId(
-  dbName, tblName) : -1;
-if (writeId < 1) {
-  // TODO: this is not ideal... stats updater that doesn't have write 
ID is currently
-  //   "create table"; writeId would be 0/-1 here. No need to call 
this w/true.
-  LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-  dbName, tblName, writeId);
+if (SessionState.get() != null) {
+  HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+  String fullTableName = getFullTableName(dbName, tblName);
+  if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+if (isStatsUpdater) {
+  writeId = sessionTxnMgr != null ? 
sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;

Review comment:
   redundant check (sessionTxnMgr != null); see the if condition above





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-26 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446123907



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot 
getTableSnapshot(Configuration conf,
 long writeId = -1;
 ValidWriteIdList validWriteIdList = null;
 
-HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-String fullTableName = getFullTableName(dbName, tblName);
-if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-  validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-  if (isStatsUpdater) {
-writeId = SessionState.get().getTxnMgr() != null ?
-SessionState.get().getTxnMgr().getAllocatedTableWriteId(
-  dbName, tblName) : -1;
-if (writeId < 1) {
-  // TODO: this is not ideal... stats updater that doesn't have write 
ID is currently
-  //   "create table"; writeId would be 0/-1 here. No need to call 
this w/true.
-  LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-  dbName, tblName, writeId);
+if (SessionState.get() != null) {
+  HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+  String fullTableName = getFullTableName(dbName, tblName);
+  if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+if (isStatsUpdater) {
+  writeId = sessionTxnMgr != null ? 
sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;

Review comment:
   redundant check; see the if condition above





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org