[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=458686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-458686
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 14/Jul/20 14:51
Start Date: 14/Jul/20 14:51
Worklog Time Spent: 10m 
  Work Description: deniskuzZ merged pull request #1087:
URL: https://github.com/apache/hive/pull/1087


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 458686)
Time Spent: 11h 50m  (was: 11h 40m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 50m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create the
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  
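Not part of the patch, but the seeding idea above can be sketched: ACID directories are named `base_<writeId>` and `delta_<minWriteId>_<maxWriteId>` (optionally with a statement id and, for compacted directories, a `_v<visibilityTxnId>` suffix), so the highest allocated writeId can be recovered by scanning the names. The class and method below are hypothetical illustrations, not Hive code.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch: derive the writeId to seed from ACID directory names. */
public class WriteIdSeedSketch {
  // base_<wid>[_v<visibilityTxnId>]
  private static final Pattern BASE = Pattern.compile("base_(\\d+)(_v(\\d+))?");
  // [delete_]delta_<min>_<max>[_<stmtId>][_v<visibilityTxnId>]
  private static final Pattern DELTA = Pattern.compile("(delete_)?delta_(\\d+)_(\\d+)(_\\d+)?(_v(\\d+))?");

  /** Returns the highest writeId seen across the given directory names (0 if none). */
  public static long maxWriteId(String... dirNames) {
    long max = 0;
    for (String d : dirNames) {
      Matcher b = BASE.matcher(d);
      if (b.matches()) {
        max = Math.max(max, Long.parseLong(b.group(1)));
        continue;
      }
      Matcher m = DELTA.matcher(d);
      if (m.matches()) {
        max = Math.max(max, Long.parseLong(m.group(3)));
      }
    }
    return max;
  }
}
```

Seeding `NEXT_WRITE_ID` with this value (plus one, since the table stores the *next* id) is what lets queries read the copied data back.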



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455385&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455385
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 11:43
Start Date: 07/Jul/20 11:43
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450801959



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
             throw new MetastoreException(e);
           }
         }
+        if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+          if (result.getMaxWriteId() > 0) {
+            if (txnId < 0) {
+              // We need the txnId to check against even if we didn't do the locking
+              txnId = getMsc().openTxn(getUserName());
+            }
+
+            validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), result.getMaxTxnId(),
+                table.getDbName(), table.getTableName(), txnId);
+          }
+        }
       }
       success = true;
     } catch (Exception e) {
       LOG.warn("Failed to run metacheck: ", e);
       success = false;
-      ret = 1;
     } finally {
-      if (msckInfo.getResFile() != null) {
-        BufferedWriter resultOut = null;
-        try {
-          Path resFile = new Path(msckInfo.getResFile());
-          FileSystem fs = resFile.getFileSystem(getConf());
-          resultOut = new BufferedWriter(new OutputStreamWriter(fs.create(resFile)));
-
-          boolean firstWritten = false;
-          firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-            "Tables not in metastore:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-            "Tables missing on filesystem:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-            "Partitions not in metastore:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-            "Partitions missing from filesystem:", resultOut, firstWritten);
-          firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-            "Expired partitions (retention period: " + partitionExpirySeconds + "s) :", resultOut, firstWritten);
-          // sorting to stabilize qfile output (msck_repair_drop.q)
-          Collections.sort(repairOutput);
-          for (String rout : repairOutput) {
-            if (firstWritten) {
-              resultOut.write(terminator);
-            } else {
-              firstWritten = true;
-            }
-            resultOut.write(rout);
-          }
-        } catch (IOException e) {
-          LOG.warn("Failed to save metacheck output: ", e);
-          ret = 1;
-        } finally {
-          if (resultOut != null) {
-            try {
-              resultOut.close();
-            } catch (IOException e) {
-              LOG.warn("Failed to close output file: ", e);
-              ret = 1;
-            }
-          }
-        }
+      if (result != null) {
+        logResult(result);
+        if (msckInfo.getResFile() != null) {
+          success = writeResultToFile(msckInfo, result, repairOutput, partitionExpirySeconds) && success;
         }
       }

-      LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-      LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-      LOG.info("Partitions not in metastore: {}", result.getPartitionsNotInMs());
-      LOG.info("Partitions missing from filesystem: {}", result.getPartitionsNotOnFs());
-      LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-      if (acquireLock && txnId > 0) {
-        if (success) {
-          try {
-            LOG.info("txnId: {} succeeded. Committing..", txnId);
-            getMsc().commitTxn(txnId);
-          } catch (Exception e) {
-            LOG.warn("Error while committing txnId: {} for table: {}", txnId, qualifiedTableName, e);
-            ret = 1;
-          }
-        } else {
-          try {
-            LOG.info("txnId: {} failed. Aborting..", txnId);
-            getMsc().abortTxns(Lists.newArrayList(txnId));
-          } catch (Exception e) {
-            LOG.warn("Error while aborting txnId: {} for table: {}", txnId, qualifiedTableName, e);
-            ret = 1;
-          }
-        }
+      if (txnId > 0) {
+        success = closeTxn(qualifiedTableName, success, txnId) && success;
       }
       if (getMsc() != null) {
         getMsc().close();
         msc = null;
       }
     }
+    return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, 

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455371&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455371
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 11:26
Start Date: 07/Jul/20 11:26
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450793830



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
         // The initial value for write id should be 1 and hence we add 1 with
         // number of write ids allocated here
         String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", \"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-            + Long.toString(rqst.getSeeWriteId() + 1) + ")";
-        pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, Arrays.asList(rqst.getDbName(), rqst.getTblName()));
+            + Long.toString(rqst.getSeedWriteId() + 1) + ")";
+        pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, Arrays.asList(rqst.getDbName(), rqst.getTableName()));
         LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + ">",
-            quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+            quoteString(rqst.getDbName()), quoteString(rqst.getTableName()));
         pst.execute();
         LOG.debug("Going to commit");
         dbConn.commit();
       } catch (SQLException e) {
-        LOG.debug("Going to rollback");
         rollbackDBConn(dbConn);
-        checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-        throw new MetaException("Unable to update transaction database "
-            + StringUtils.stringifyException(e));
+        checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+        throw new MetaException("Unable to update transaction database " + StringUtils.stringifyException(e));
       } finally {
         close(null, pst, dbConn);
         unlockInternal();
       }
     } catch (RetryException e) {
-      seedWriteIdOnAcidConversion(rqst);
+      seedWriteId(rqst);
     }
+  }
+
+  @Override
+  public void seedTxnId(SeedTxnIdRequest rqst) throws MetaException {
+    try {
+      Connection dbConn = null;
+      Statement stmt = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+        stmt = dbConn.createStatement();
+        /*
+         * Lock the txnLock exclusively; we do not want to set the txnId backward accidentally
+         * if there are concurrent open transactions
+         */
+        acquireTxnLock(stmt, false);
+        long highWaterMark = getHighWaterMark(stmt);
+        if (highWaterMark >= rqst.getSeedTxnId()) {
Review comment:
   This is not about the writeIds, it is about the txnId. If you have data from a database where there was a high number of transactions and compaction ran on the table, you will have a high txnId in the visibilityTxnId embedded in the name of the compacted folder.
   If you then move this data to a cluster with fewer transactions (e.g. a test cluster) and run MSCK REPAIR, you have to skip the txnId sequence forward so the next query will read the compacted folder. The race condition here is that the txnId sequence somehow gets ahead of you between the check and seeding the value; in that case we throw this exception so as not to set the sequence backward. Even then, running MSCK REPAIR again will succeed, since by that point the txnId will be high enough.
   The writeId race condition won't cause a problem, I think, since if some other transaction allocated the first writeId, seedWriteId will fail on the unique constraint on the table.
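The "never seed backward, let the caller retry" guard described above can be sketched with an in-memory stand-in for the metastore's txn sequence (the class and its fields are hypothetical, for illustration only):

```java
/** Sketch of the "never seed the txn sequence backward" guard. */
public class TxnSeedGuard {
  private long nextTxnId = 1; // stand-in for the metastore's txn id sequence

  /** Simulates concurrent transactions advancing the sequence. */
  public synchronized long openTxn() {
    return nextTxnId++;
  }

  /**
   * Restart the sequence at seedTxnId, but refuse if the high-water mark already
   * reached it -- the race described above. The caller (e.g. MSCK REPAIR) can
   * simply retry later; by then the sequence is high enough and nothing is lost.
   */
  public synchronized void seedTxnId(long seedTxnId) {
    long highWaterMark = nextTxnId - 1;
    if (highWaterMark >= seedTxnId) {
      throw new IllegalStateException(
          "Seed txnId " + seedTxnId + " would move the sequence backward (HWM=" + highWaterMark + ")");
    }
    nextTxnId = seedTxnId + 1;
  }
}
```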





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455371)
Time Spent: 11.5h  (was: 11h 20m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455362&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455362
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 11:17
Start Date: 07/Jul/20 11:17
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450789075



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
         // The initial value for write id should be 1 and hence we add 1 with
         // number of write ids allocated here
         String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", \"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-            + Long.toString(rqst.getSeeWriteId() + 1) + ")";
-        pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, Arrays.asList(rqst.getDbName(), rqst.getTblName()));
+            + Long.toString(rqst.getSeedWriteId() + 1) + ")";
+        pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, Arrays.asList(rqst.getDbName(), rqst.getTableName()));
         LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + ">",
-            quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+            quoteString(rqst.getDbName()), quoteString(rqst.getTableName()));
         pst.execute();
         LOG.debug("Going to commit");
         dbConn.commit();
       } catch (SQLException e) {
-        LOG.debug("Going to rollback");
         rollbackDBConn(dbConn);
-        checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-        throw new MetaException("Unable to update transaction database "
-            + StringUtils.stringifyException(e));
+        checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+        throw new MetaException("Unable to update transaction database " + StringUtils.stringifyException(e));
       } finally {
         close(null, pst, dbConn);
         unlockInternal();

Review comment:
   Fixed; the unique key on NEXT_WRITE_ID is enough.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455362)
Time Spent: 11h 20m  (was: 11h 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455355&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455355
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 11:14
Start Date: 07/Jul/20 11:14
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450787844



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
       return allocateTableWriteIds(rqst);
     }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+    String dbName = rqst.getDbName();
+    String tableName = rqst.getTableName();
+    try {
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+        List<String> params = Arrays.asList(dbName, tableName);
+        String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+        pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);

Review comment:
   fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455355)
Time Spent: 11h 10m  (was: 11h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455354&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455354
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 11:12
Start Date: 07/Jul/20 11:12
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450786917



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
       return allocateTableWriteIds(rqst);
     }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+    String dbName = rqst.getDbName();
+    String tableName = rqst.getTableName();
+    try {
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);

Review comment:
   Most of TxnHandler uses this pattern instead of three nested try-with-resources blocks for the Connection, Statement, and ResultSet.
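For illustration, the pattern being defended looks like this: declare the resources up front, then close all of them null-safely in a single finally. The stand-in `Resource` type below is hypothetical (not the real JDBC classes), just enough to show and test the shape:

```java
/** Sketch of the TxnHandler-style cleanup: declare up front, close in one finally. */
public class ClosePatternSketch {
  static class Resource implements AutoCloseable {
    boolean closed;
    @Override public void close() { closed = true; }
  }

  /** Null-safe close helper in the spirit of TxnHandler's close(rs, stmt, conn). */
  static void closeAll(Resource... resources) {
    for (Resource r : resources) {
      if (r != null) {
        try { r.close(); } catch (RuntimeException e) { /* log and keep closing the rest */ }
      }
    }
  }

  public static Resource[] run() {
    Resource rs = null, stmt = null, conn = null;
    try {
      conn = new Resource();
      stmt = new Resource();
      rs = new Resource();
      // ... query work would happen here ...
      return new Resource[] {rs, stmt, conn};
    } finally {
      closeAll(rs, stmt, conn); // one cleanup point instead of three nested blocks
    }
  }
}
```

The trade-off: try-with-resources is harder to get wrong, but the single-finally style keeps one exit point when the resources are acquired conditionally at different depths.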





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455354)
Time Spent: 11h  (was: 10h 50m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455351&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455351
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 11:10
Start Date: 07/Jul/20 11:10
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450785738



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
       return allocateTableWriteIds(rqst);
     }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+    String dbName = rqst.getDbName();
+    String tableName = rqst.getTableName();
+    try {
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+        List<String> params = Arrays.asList(dbName, tableName);
+        String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";

Review comment:
   fixed

##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
       return allocateTableWriteIds(rqst);
     }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+    String dbName = rqst.getDbName();
+    String tableName = rqst.getTableName();
+    try {
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();
+        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+        List<String> params = Arrays.asList(dbName, tableName);
+        String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+        pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);
+        LOG.debug("Going to execute query <" + query.replaceAll("\\?", "{}") + ">", quoteString(dbName),
+            quoteString(tableName));
+        rs = pStmt.executeQuery();
+        // If there is no record, we never allocated anything
+        long maxWriteId = 0l;
+        if (rs.next()) {
+          // The row contains the nextId not the previously allocated
+          maxWriteId = rs.getLong(1) - 1;
+        }
+        return new MaxAllocatedTableWriteIdResponse(maxWriteId);
+      } catch (SQLException e) {
+        LOG.error("Exception during reading the max allocated writeId for dbName={}, tableName={}. Will retry if possible.",
+            dbName, tableName, e);
+        rollbackDBConn(dbConn);

Review comment:
   fixed
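The bookkeeping the hunk above relies on can be shown with an in-memory stand-in for the NEXT_WRITE_ID table (the class below is a hypothetical sketch, not Hive code): the table stores the *next* id to hand out, so the max allocated id is `NWI_NEXT - 1`, and a missing row means nothing was ever allocated, hence the `0` default.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of NEXT_WRITE_ID semantics: the stored value is the next id, not the last. */
public class NextWriteIdSketch {
  private final Map<String, Long> nextWriteId = new HashMap<>(); // "db.table" -> NWI_NEXT

  /** Allocates a range of writeIds and returns the first id of the range. */
  public long allocate(String table, int count) {
    long next = nextWriteId.getOrDefault(table, 1L);
    nextWriteId.put(table, next + count);
    return next;
  }

  /** Mirrors the hunk's logic: no row -> 0, otherwise NWI_NEXT - 1. */
  public long maxAllocated(String table) {
    Long next = nextWriteId.get(table);
    return next == null ? 0 : next - 1;
  }
}
```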





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455351)
Time Spent: 10h 40m  (was: 10.5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>



--
This 

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455352&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455352
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 11:10
Start Date: 07/Jul/20 11:10
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450785863



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
       return allocateTableWriteIds(rqst);
     }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+    String dbName = rqst.getDbName();
+    String tableName = rqst.getTableName();
+    try {
+      Connection dbConn = null;
+      PreparedStatement pStmt = null;
+      ResultSet rs = null;
+      try {
+        lockInternal();

Review comment:
   fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455352)
Time Spent: 10h 50m  (was: 10h 40m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455344&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455344
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 10:53
Start Date: 07/Jul/20 10:53
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r45058



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnDbUtil.java
##
@@ -313,6 +313,41 @@ private static void resetTxnSequence(Connection conn, Statement stmt) throws SQL
     }
   }
 
+  /**
+   * Restarts the txnId sequence with the given seed value.
+   * It is the responsibility of the caller to not set the sequence backward.
+   * @param conn database connection
+   * @param stmt sql statement
+   * @param seedTxnId the seed value for the sequence
+   * @throws SQLException ex
+   */
+  public static void seedTxnSequence(Connection conn, Statement stmt, long seedTxnId) throws SQLException {
+    String dbProduct = conn.getMetaData().getDatabaseProductName();
+    DatabaseProduct databaseProduct = determineDatabaseProduct(dbProduct);
+    switch (databaseProduct) {
+
+    case DERBY:

Review comment:
   fixed
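The per-product switch that the hunk above opens has to emit different DDL for each backing database, because restarting an identity column or sequence is dialect-specific. The sketch below is hypothetical: the table/column/sequence names are assumptions for illustration (not Hive's actual metastore schema); only the general statement shapes for each dialect are the point.

```java
/** Hypothetical sketch of the per-dialect SQL a seedTxnSequence-style method emits. */
public class SeedSequenceSql {
  enum DatabaseProduct { DERBY, MYSQL, POSTGRES }

  static String restartStatement(DatabaseProduct db, long seedTxnId) {
    switch (db) {
      case DERBY:    // Derby restarts an identity column in place
        return "ALTER TABLE \"TXNS\" ALTER COLUMN \"TXN_ID\" RESTART WITH " + seedTxnId;
      case MYSQL:    // MySQL seeds the table's auto-increment counter
        return "ALTER TABLE TXNS AUTO_INCREMENT = " + seedTxnId;
      case POSTGRES: // Postgres restarts the backing sequence
        return "ALTER SEQUENCE \"TXNS_TXN_ID_seq\" RESTART WITH " + seedTxnId;
      default:
        throw new IllegalArgumentException("Unsupported product: " + db);
    }
  }
}
```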





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455344)
Time Spent: 10.5h  (was: 10h 20m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455341&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455341
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 10:29
Start Date: 07/Jul/20 10:29
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450765905



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -383,6 +475,7 @@ public Void execute(int size) throws MetastoreException {
       partsToAdd.add(partition);
       lastBatch.add(part);
       addMsgs.add(String.format(addMsgFormat, part.getPartitionName()));
+      LOG.debug(String.format(addMsgFormat, part.getPartitionName()));

Review comment:
   fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 455341)
Time Spent: 10h 20m  (was: 10h 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455339&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455339
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 10:18
Start Date: 07/Jul/20 10:18
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450760163



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;
   }
   if (getMsc() != null) {
 getMsc().close();
 msc = null;
   }
 }
+return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, 
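The hunk above extracts the repeated commit/abort handling into a `closeTxn` helper that reports failure through its return value instead of mutating a `ret` flag. A minimal sketch of that shape (Python stand-ins; `MetastoreClient` and its methods are invented for illustration, not Hive's actual metastore client API):

```python
# Sketch of the commit-on-success / abort-on-failure pattern factored out in
# the hunk above. MetastoreClient is a stand-in written for this example.
class MetastoreClient:
    def __init__(self):
        self.committed, self.aborted = [], []

    def open_txn(self, user):
        return 42  # pretend the metastore allocated txnId 42

    def commit_txn(self, txn_id):
        self.committed.append(txn_id)

    def abort_txns(self, txn_ids):
        self.aborted.extend(txn_ids)

def close_txn(msc, txn_id, success):
    """Commit the repair txn if everything succeeded, abort it otherwise;
    report errors through the return value instead of raising."""
    try:
        if success:
            msc.commit_txn(txn_id)
        else:
            msc.abort_txns([txn_id])
        return True
    except Exception:
        return False

msc = MetastoreClient()
ok = close_txn(msc, msc.open_txn("hive"), success=True)
print(ok, msc.committed)  # prints: True [42]
```

Keeping the commit/abort decision in one helper means the `finally` block in `repair()` only has to combine the helper's result with the running `success` flag.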

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455336&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455336
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 10:12
Start Date: 07/Jul/20 10:12
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450756822



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455334&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455334
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 10:07
Start Date: 07/Jul/20 10:07
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450754328



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455333&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455333
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 10:07
Start Date: 07/Jul/20 10:07
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450754040



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=455332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-455332
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 07/Jul/20 10:06
Start Date: 07/Jul/20 10:06
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450753396



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454961&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454961
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 17:06
Start Date: 06/Jul/20 17:06
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450362382



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454959&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454959
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 17:05
Start Date: 06/Jul/20 17:05
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450361792



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;

Review comment:
   fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 454959)
Time Spent: 9h  (was: 8h 50m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory structure.

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454960&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454960
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 17:05
Start Date: 06/Jul/20 17:05
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450361883



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;

Review comment:
   fixed






[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454958=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454958
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 17:04
Start Date: 06/Jul/20 17:04
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450361266



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {

Review comment:
   fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 454958)
Time Spent: 8h 50m  (was: 8h 40m)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454956=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454956
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 17:03
Start Date: 06/Jul/20 17:03
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450360969



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {

Review comment:
   fixed







Issue Time Tracking
---

Worklog Id: (was: 454956)
Time Spent: 8h 40m  (was: 8.5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data do not conflict. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
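The seeding rule described in the issue can be sketched as a small decision helper. This is a hypothetical illustration, not Hive's actual API: the class and method names (`WriteIdSeeding`, `seedWriteId`) are invented, and the real check consults the metastore's write-id tables (see `validateAndAddMaxTxnIdAndWriteId` in the PR). The idea is simply that seeding is only safe when the HMS has never allocated a writeId for the table; in that case the table can be seeded with the maximum writeId found in the directory structure.

```java
import java.util.OptionalLong;

// Hypothetical sketch of the MSCK seeding decision described above.
// Assumption: nextWriteIdInHms == 1 models an HMS that has never allocated
// a writeId for this table; the real implementation queries the metastore.
final class WriteIdSeeding {

    /**
     * Returns the writeId to seed the table with, or empty when the HMS
     * has already allocated writeIds (seeding could then conflict with them).
     */
    static OptionalLong seedWriteId(long nextWriteIdInHms, long maxWriteIdOnDisk) {
        if (nextWriteIdInHms > 1) {
            // HMS already handed out writeIds: the directory contents may
            // conflict with them, so MSCK must not blindly seed the table.
            return OptionalLong.empty();
        }
        // No allocated writes in the HMS: safe to seed from the directory structure.
        return OptionalLong.of(maxWriteIdOnDisk);
    }

    public static void main(String[] args) {
        // Table recreated over copied ACID data: seed with the on-disk max.
        System.out.println(seedWriteId(1, 101));  // OptionalLong[101]
        // HMS already has writes allocated: refuse to seed.
        System.out.println(seedWriteId(42, 101)); // OptionalLong.empty
    }
}
```

Under these assumptions, both real-life use cases above (copied data files, lost HMS db) hit the seeding branch, because the freshly created table has no allocated writeIds yet.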



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454954=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454954
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 17:01
Start Date: 06/Jul/20 17:01
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450360016



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');
+//split2 may be -1 if no statementId
+int split2 = rest.indexOf('_', split + 1);
+// We always want the second part (it is either the same or greater if 
it is a compacted delta)
+writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : 
Long
+.parseLong(rest.substring(split + 1, split2));
+  }
+  if (writeId > maxWriteId) {
+maxWriteId = writeId;
+  }
+  if (visibilityId > maxVisibilityId) {
+maxVisibilityId = visibilityId;
+  }
+}
+LOG.debug("Max writeId {}, max txnId {} found in partition {}", 
maxWriteId, maxVisibilityId,
+partPath.toUri().toString());
+res.setMaxWriteId(maxWriteId);
+res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   fixed
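The directory-name parsing in this hunk can be illustrated with a standalone sketch. The class name (`AcidDirNames`) and the sample directory names are invented for illustration; the real logic lives in `HiveMetaStoreChecker.setMaxTxnAndWriteIdFromPartition` and `AcidUtils.parseDelta` and handles more edge cases:

```java
// Hypothetical standalone sketch of parsing ACID directory names such as
// base_0000005, base_0000007_v0000123, delta_0000001_0000001_0000 and
// delta_0000010_0000015_v0000067 into a max writeId and visibility txnId.
final class AcidDirNames {
    private static final String BASE_PREFIX = "base_";
    private static final String DELTA_PREFIX = "delta_";
    private static final String DELETE_DELTA_PREFIX = "delete_delta_";
    private static final String VISIBILITY_PREFIX = "_v";

    /** Visibility txnId suffix (e.g. "_v0000067"), or 0 when absent. */
    static long visibilityTxnId(String dirName) {
        int idx = dirName.indexOf(VISIBILITY_PREFIX);
        return idx < 0 ? 0L : Long.parseLong(dirName.substring(idx + VISIBILITY_PREFIX.length()));
    }

    private static String stripVisibilityTxnId(String dirName) {
        int idx = dirName.indexOf(VISIBILITY_PREFIX);
        return idx < 0 ? dirName : dirName.substring(0, idx);
    }

    /** Max writeId encoded in a base/delta/delete_delta directory name, 0 otherwise. */
    static long maxWriteId(String dirName) {
        String name = stripVisibilityTxnId(dirName);
        if (name.startsWith(BASE_PREFIX)) {
            return Long.parseLong(name.substring(BASE_PREFIX.length()));
        }
        if (!name.startsWith(DELTA_PREFIX) && !name.startsWith(DELETE_DELTA_PREFIX)) {
            return 0L; // not an ACID base/delta directory
        }
        String prefix = name.startsWith(DELETE_DELTA_PREFIX) ? DELETE_DELTA_PREFIX : DELTA_PREFIX;
        String rest = name.substring(prefix.length());
        int split = rest.indexOf('_');             // end of the minimum writeId
        int split2 = rest.indexOf('_', split + 1); // -1 when there is no statementId
        // Always take the second writeId: for a compacted delta it is the greater one.
        return split2 == -1 ? Long.parseLong(rest.substring(split + 1))
                            : Long.parseLong(rest.substring(split + 1, split2));
    }

    public static void main(String[] args) {
        System.out.println(maxWriteId("delta_0000010_0000015_v0000067"));      // 15
        System.out.println(visibilityTxnId("delta_0000010_0000015_v0000067")); // 67
    }
}
```

MSCK would take the maximum of these two values over every base/delta directory in a partition, which is exactly what the quoted loop accumulates in `maxWriteId` and `maxVisibilityId`.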







Issue Time Tracking
---

Worklog Id: (was: 454954)
Time Spent: 8.5h  (was: 8h 20m)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454936=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454936
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 16:25
Start Date: 06/Jul/20 16:25
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450338348



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');

Review comment:
   fixed







Issue Time Tracking
---

Worklog Id: (was: 454936)
Time Spent: 8h 20m  (was: 8h 10m)






[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454934=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454934
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 16:19
Start Date: 06/Jul/20 16:19
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450334396



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -111,24 +120,24 @@ public IMetaStoreClient getMsc() {
* @param partitions
*  List of partition name value pairs, if null or empty check all
*  partitions
-   * @param table
-   * @param result
-   *  Fill this with the results of the check
+   * @param table Table we want to run the check for.
+   * @return Results of the check
* @throws MetastoreException
*   Failed to get required information from the metastore.
* @throws IOException
*   Most likely filesystem related
*/
-  public void checkMetastore(String catName, String dbName, String tableName,
-  List<Map<String, String>> partitions, Table table, CheckResult result)
+  public CheckResult checkMetastore(String catName, String dbName, String tableName,
+  List<Map<String, String>> partitions, Table table)
   throws MetastoreException, IOException {
-
+CheckResult result = new CheckResult();
 if (dbName == null || "".equalsIgnoreCase(dbName)) {
   dbName = Warehouse.DEFAULT_DATABASE_NAME;
 }
 
 try {
   if (tableName == null || "".equals(tableName)) {
+// TODO: I do not think this is used by anything other than tests

Review comment:
   I do not know. If I understand correctly, there is currently no way to call
MSCK REPAIR without a table specified, but it seems like someone made some
effort to create that feature and tests for it. I don't know if we ever want
that in production (calling MSCK REPAIR for every table seems like a quick way
to overwhelm the system), so I left this comment here for anybody who tries to
make sense of this code.







Issue Time Tracking
---

Worklog Id: (was: 454934)
Time Spent: 8h 10m  (was: 8h)






[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454923=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454923
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 16:07
Start Date: 06/Jul/20 16:07
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450326885



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
##
@@ -8322,6 +8322,22 @@ public AllocateTableWriteIdsResponse 
allocate_table_write_ids(
   return response;
 }
 
+@Override
+public MaxAllocatedTableWriteIdResponse 
get_max_allocated_table_write_id(MaxAllocatedTableWriteIdRequest rqst)

Review comment:
   All the functions in HMS look like this; I don't want to break the pattern.
On second glance, I had to change the seedWriteId and seedTxnId to look like
this...







Issue Time Tracking
---

Worklog Id: (was: 454923)
Time Spent: 8h  (was: 7h 50m)






[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454918=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454918
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 15:54
Start Date: 06/Jul/20 15:54
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450318970



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TxnCommandsBaseForTests.java
##
@@ -162,9 +163,23 @@ protected String getWarehouseDir() {
* takes raw data and turns it into a string as if from Driver.getResults()
* sorts rows in dictionary order
*/
-  List<String> stringifyValues(int[][] rowsIn) {
-return TestTxnCommands2.stringifyValues(rowsIn);
+  public static List<String> stringifyValues(int[][] rowsIn) {
+assert rowsIn.length > 0;
+int[][] rows = rowsIn.clone();
+Arrays.sort(rows, new TestTxnCommands2.RowComp());

Review comment:
   fixed

##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -252,37 +241,165 @@ public void testInvalidPartitionKeyName()
   @Test
   public void testSkipInvalidPartitionKeyName()
 throws HiveException, AlreadyExistsException, IOException, 
MetastoreException {
-hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, "skip");
+hive.getConf().set(MetastoreConf.ConfVars.MSCK_PATH_VALIDATION.getVarname(), "skip");
 checker = new HiveMetaStoreChecker(msc, hive.getConf());
-Table table = createTestTable();
+Table table = createTestTable(false);
 List<Partition> partitions = hive.getPartitions(table);
 assertEquals(2, partitions.size());
 // add a fake partition dir on fs
 fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
-Path fakePart =
-new Path(table.getDataLocation().toString(), "fakedate=2009-01-01/fakecity=sanjose");
-fs.mkdirs(fakePart);
-fs.deleteOnExit(fakePart);
+addFolderToPath(fs, table.getDataLocation().toString(), "fakedate=2009-01-01/fakecity=sanjose");
 createPartitionsDirectoriesOnFS(table, 2);
-CheckResult result = new CheckResult();
-checker.checkMetastore(catName, dbName, tableName, null, null, result);
+CheckResult result = checker.checkMetastore(catName, dbName, tableName, null, null);
 assertEquals(Collections.<String> emptySet(), result.getTablesNotInMs());
 assertEquals(Collections.<String> emptySet(), result.getTablesNotOnFs());
 assertEquals(Collections.<CheckResult.PartitionResult> emptySet(), result.getPartitionsNotOnFs());
 // only 2 valid partitions should be added
 assertEquals(2, result.getPartitionsNotInMs().size());
   }
 
-  private Table createTestTable() throws HiveException, AlreadyExistsException 
{
+  /*
+   * Tests the case when we have normal delta_dirs in the partition folder
+   * does not throw HiveException
+   */
+  @Test
+  public void testAddPartitionNormalDeltas() throws Exception {
+Table table = createTestTable(true);
+List<Partition> partitions = hive.getPartitions(table);
+assertEquals(2, partitions.size());
+// add a partition dir on fs
+fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
+Path newPart = addFolderToPath(fs, table.getDataLocation().toString(),
+partDateName + "=2017-01-01/" + partCityName + "=paloalto");
+
+// Add a few deltas
+addFolderToPath(fs, newPart.toString(), "delta_001_001_");
+addFolderToPath(fs, newPart.toString(), "delta_010_010_");
+addFolderToPath(fs, newPart.toString(), "delta_101_101_");
+CheckResult result = checker.checkMetastore(catName, dbName, tableName, null, null);
+assertEquals(Collections.<CheckResult.PartitionResult> emptySet(), result.getPartitionsNotOnFs());
+assertEquals(1, result.getPartitionsNotInMs().size());
+// Found the highest writeId
+assertEquals(101, 
result.getPartitionsNotInMs().iterator().next().getMaxWriteId());
+assertEquals(0, 
result.getPartitionsNotInMs().iterator().next().getMaxTxnId());
+  }
+  /*
+   * Tests the case when we have normal delta_dirs in the partition folder
+   * does not throw HiveException
+   */
+  @Test
+  public void testAddPartitionCompactedDeltas() throws Exception {
+Table table = createTestTable(true);
+List<Partition> partitions = hive.getPartitions(table);
+assertEquals(2, partitions.size());
+// add a partition dir on fs
+fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
+Path newPart = addFolderToPath(fs, table.getDataLocation().toString(),
+partDateName + "=2017-01-01/" + partCityName + "=paloalto");
+
+// Add a few deltas
+addFolderToPath(fs, newPart.toString(), "delta_001_001_");
+addFolderToPath(fs, newPart.toString(), "delta_010_015_v067");

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454916=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454916
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 15:54
Start Date: 06/Jul/20 15:54
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450318861



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot 
getTableSnapshot(Configuration conf,
 long writeId = -1;
 ValidWriteIdList validWriteIdList = null;
 
-HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-String fullTableName = getFullTableName(dbName, tblName);
-if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-  validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-  if (isStatsUpdater) {
-writeId = SessionState.get().getTxnMgr() != null ?
-SessionState.get().getTxnMgr().getAllocatedTableWriteId(
-  dbName, tblName) : -1;
-if (writeId < 1) {
-  // TODO: this is not ideal... stats updater that doesn't have write 
ID is currently
-  //   "create table"; writeId would be 0/-1 here. No need to call 
this w/true.
-  LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-  dbName, tblName, writeId);
+if (SessionState.get() != null) {
+  HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+  String fullTableName = getFullTableName(dbName, tblName);
+  if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+if (isStatsUpdater) {
+  writeId = sessionTxnMgr != null ? 
sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;

Review comment:
   fixed







Issue Time Tracking
---

Worklog Id: (was: 454916)
Time Spent: 7h 40m  (was: 7.5h)






[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454914&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454914
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 15:52
Start Date: 06/Jul/20 15:52
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450317451



##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -74,21 +76,21 @@
   @Before
   public void setUp() throws Exception {
 hive = Hive.get();
-
hive.getConf().setIntVar(HiveConf.ConfVars.METASTORE_FS_HANDLER_THREADS_COUNT, 
15);
-hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, 
"throw");
+
hive.getConf().set(MetastoreConf.ConfVars.FS_HANDLER_THREADS_COUNT.getVarname(),
 "15");

Review comment:
   It does not work with MetastoreConf.ConfVars.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 454914)
Time Spent: 7.5h  (was: 7h 20m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454912&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454912
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 15:48
Start Date: 06/Jul/20 15:48
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450315397



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
##
@@ -2209,20 +2209,7 @@ public void testAcidOrcWritePreservesFieldNames() throws 
Exception {
* sorts rows in dictionary order
*/
   static List stringifyValues(int[][] rowsIn) {
-assert rowsIn.length > 0;
-int[][] rows = rowsIn.clone();
-Arrays.sort(rows, new RowComp());
-List rs = new ArrayList();
-for(int[] row : rows) {
-  assert row.length > 0;
-  StringBuilder sb = new StringBuilder();
-  for(int value : row) {
-sb.append(value).append("\t");
-  }
-  sb.setLength(sb.length() - 1);
-  rs.add(sb.toString());
-}
-return rs;
+return TxnCommandsBaseForTests.stringifyValues(rowsIn);

Review comment:
   I will do this in a separate Jira; I started it, but it requires more changes.
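
For reference, the helper being delegated to above can be sketched as a standalone, runnable version. Names are illustrative: the original lives in the test classes and uses a `RowComp` comparator, which `Arrays::compare` replaces here for brevity:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the stringifyValues test helper: sorts rows in dictionary
// order, then tab-joins each row's int values into one string per row.
public class StringifyValuesSketch {
  static List<String> stringifyValues(int[][] rowsIn) {
    assert rowsIn.length > 0;
    int[][] rows = rowsIn.clone();
    Arrays.sort(rows, Arrays::compare); // dictionary order, like RowComp
    List<String> rs = new ArrayList<>();
    for (int[] row : rows) {
      assert row.length > 0;
      StringBuilder sb = new StringBuilder();
      for (int value : row) {
        sb.append(value).append('\t');
      }
      sb.setLength(sb.length() - 1); // drop the trailing tab
      rs.add(sb.toString());
    }
    return rs;
  }

  public static void main(String[] args) {
    // rows are sorted before stringifying, so {1,3} comes before {2,1}
    System.out.println(stringifyValues(new int[][]{{2, 1}, {1, 3}}));
  }
}
```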





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 454912)
Time Spent: 7h 20m  (was: 7h 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454906&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454906
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 15:28
Start Date: 06/Jul/20 15:28
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450302033



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot 
getTableSnapshot(Configuration conf,
 long writeId = -1;
 ValidWriteIdList validWriteIdList = null;
 
-HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-String fullTableName = getFullTableName(dbName, tblName);
-if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-  validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-  if (isStatsUpdater) {
-writeId = SessionState.get().getTxnMgr() != null ?
-SessionState.get().getTxnMgr().getAllocatedTableWriteId(
-  dbName, tblName) : -1;
-if (writeId < 1) {
-  // TODO: this is not ideal... stats updater that doesn't have write 
ID is currently
-  //   "create table"; writeId would be 0/-1 here. No need to call 
this w/true.
-  LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-  dbName, tblName, writeId);
+if (SessionState.get() != null) {
+  HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+  String fullTableName = getFullTableName(dbName, tblName);
+  if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+if (isStatsUpdater) {
+  writeId = sessionTxnMgr != null ? 
sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;
+  if (writeId < 1) {

Review comment:
   The comment said so: "stats updater that doesn't have write ID is currently 'create table'; writeId would be 0/-1 here."





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 454906)
Time Spent: 7h 10m  (was: 7h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-07-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=454905&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-454905
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 06/Jul/20 15:27
Start Date: 06/Jul/20 15:27
Worklog Time Spent: 10m 
  Work Description: pvargacl commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r450301348



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot 
getTableSnapshot(Configuration conf,
 long writeId = -1;
 ValidWriteIdList validWriteIdList = null;
 
-HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-String fullTableName = getFullTableName(dbName, tblName);
-if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-  validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-  if (isStatsUpdater) {
-writeId = SessionState.get().getTxnMgr() != null ?
-SessionState.get().getTxnMgr().getAllocatedTableWriteId(
-  dbName, tblName) : -1;
-if (writeId < 1) {
-  // TODO: this is not ideal... stats updater that doesn't have write 
ID is currently
-  //   "create table"; writeId would be 0/-1 here. No need to call 
this w/true.
-  LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-  dbName, tblName, writeId);
+if (SessionState.get() != null) {

Review comment:
   It can be null if the SessionState was not set properly; it was failing one of the tests, I can't remember which.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 454905)
Time Spent: 7h  (was: 6h 50m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452867
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 08:36
Start Date: 30/Jun/20 08:36
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447509066



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void 
seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
 // The initial value for write id should be 1 and hence we add 1 with 
number of write ids
 // allocated here
 String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", 
\"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-+ Long.toString(rqst.getSeeWriteId() + 1) + ")";
-pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTblName()));
++ Long.toString(rqst.getSeedWriteId() + 1) + ")";
+pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTableName()));
 LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + 
">",
-quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+quoteString(rqst.getDbName()), 
quoteString(rqst.getTableName()));
 pst.execute();
 LOG.debug("Going to commit");
 dbConn.commit();
   } catch (SQLException e) {
-LOG.debug("Going to rollback");
 rollbackDBConn(dbConn);
-checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-throw new MetaException("Unable to update transaction database "
-+ StringUtils.stringifyException(e));
+checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+throw new MetaException("Unable to update transaction database " + 
StringUtils.stringifyException(e));
   } finally {
 close(null, pst, dbConn);
 unlockInternal();
   }
 } catch (RetryException e) {
-  seedWriteIdOnAcidConversion(rqst);
+  seedWriteId(rqst);
 }
+  }
+
+  @Override
+  public void seedTxnId(SeedTxnIdRequest rqst) throws MetaException {
+try {
+  Connection dbConn = null;
+  Statement stmt = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+stmt = dbConn.createStatement();
+/*
+ * Locking the txnLock an exclusive way, we do not want to set the 
txnId backward accidentally
+ * if there are concurrent open transactions
+ */
+acquireTxnLock(stmt, false);
+long highWaterMark = getHighWaterMark(stmt);
+if (highWaterMark >= rqst.getSeedTxnId()) {

Review comment:
   I don't quite understand this if condition. You already have a check in 
validateAndAddMaxTxnIdAndWriteId() that throws an exception if some write ids 
are registered in the HMS when we try to do a repair. Could it happen, due to 
the lack of locking, that there is nothing in the HMS when we calculate the 
write ids, but some transaction generates a new write id by the time we try to 
seed? Would that cause data loss or other issues?
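
The guard under discussion can be modeled in isolation. This is a hedged sketch, not the HMS implementation: `highWaterMark` stands in for the value read from the TXNS table under the exclusive txn lock, and the point is that seeding refuses to move the transaction high-water mark backwards:

```java
// Minimal model of the seedTxnId guard: with the txn lock held, read the
// current high-water mark and reject any seed value that would move it
// backwards. Names here are illustrative, not the TxnHandler API.
public class SeedTxnIdGuardSketch {
  static long highWaterMark = 10; // stand-in for getHighWaterMark(stmt)

  static boolean trySeed(long seedTxnId) {
    if (highWaterMark >= seedTxnId) {
      // concurrent transactions already allocated past the seed value;
      // seeding now would reuse txn ids, so refuse
      return false;
    }
    highWaterMark = seedTxnId;
    return true;
  }

  public static void main(String[] args) {
    System.out.println(trySeed(5));    // false: 10 >= 5, would go backwards
    System.out.println(trySeed(42));   // true: mark advances to 42
    System.out.println(highWaterMark); // 42
  }
}
```

In the real code the read and the check happen after acquireTxnLock, which is what closes the race the comment asks about between calculating write ids and seeding.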





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452867)
Time Spent: 6h 50m  (was: 6h 40m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read 

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452861&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452861
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:55
Start Date: 30/Jun/20 07:55
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447484596



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void 
seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
 // The initial value for write id should be 1 and hence we add 1 with 
number of write ids
 // allocated here
 String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", 
\"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-+ Long.toString(rqst.getSeeWriteId() + 1) + ")";
-pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTblName()));
++ Long.toString(rqst.getSeedWriteId() + 1) + ")";
+pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTableName()));
 LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + 
">",
-quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+quoteString(rqst.getDbName()), 
quoteString(rqst.getTableName()));
 pst.execute();
 LOG.debug("Going to commit");
 dbConn.commit();
   } catch (SQLException e) {
-LOG.debug("Going to rollback");
 rollbackDBConn(dbConn);
-checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-throw new MetaException("Unable to update transaction database "
-+ StringUtils.stringifyException(e));
+checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+throw new MetaException("Unable to update transaction database " + 
StringUtils.stringifyException(e));
   } finally {
 close(null, pst, dbConn);
 unlockInternal();
   }
 } catch (RetryException e) {
-  seedWriteIdOnAcidConversion(rqst);
+  seedWriteId(rqst);
 }
+  }
+
+  @Override
+  public void seedTxnId(SeedTxnIdRequest rqst) throws MetaException {
+try {
+  Connection dbConn = null;
+  Statement stmt = null;
+  try {
+lockInternal();

Review comment:
   Same as above: could you please check whether this is needed?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452861)
Time Spent: 6h 40m  (was: 6.5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452860&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452860
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:54
Start Date: 30/Jun/20 07:54
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447484200



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void 
seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
 // The initial value for write id should be 1 and hence we add 1 with 
number of write ids
 // allocated here
 String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", 
\"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-+ Long.toString(rqst.getSeeWriteId() + 1) + ")";
-pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTblName()));
++ Long.toString(rqst.getSeedWriteId() + 1) + ")";
+pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTableName()));
 LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + 
">",
-quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+quoteString(rqst.getDbName()), 
quoteString(rqst.getTableName()));
 pst.execute();
 LOG.debug("Going to commit");
 dbConn.commit();
   } catch (SQLException e) {
-LOG.debug("Going to rollback");
 rollbackDBConn(dbConn);
-checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-throw new MetaException("Unable to update transaction database "
-+ StringUtils.stringifyException(e));
+checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+throw new MetaException("Unable to update transaction database " + 
StringUtils.stringifyException(e));
   } finally {
 close(null, pst, dbConn);
 unlockInternal();

Review comment:
   Not sure why it is used here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452860)
Time Spent: 6.5h  (was: 6h 20m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452859&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452859
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:53
Start Date: 30/Jun/20 07:53
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447483691



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse 
allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse 
getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws 
MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+List params = Arrays.asList(dbName, tableName);
+String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE 
\"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);

Review comment:
   minor: you can simply pass the params as Arrays.asList(rqst.getDbName(), 
rqst.getTableName()) instead of declaring so many local vars.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452859)
Time Spent: 6h 20m  (was: 6h 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452858
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:51
Start Date: 30/Jun/20 07:51
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447482052



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse 
allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse 
getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws 
MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);

Review comment:
   minor: I would use try-with-resources instead of doing explicit resource 
management.
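
The pattern the comment suggests can be sketched without a database. `Resource` below is a hypothetical stand-in for Connection/PreparedStatement; the point is that try-with-resources closes every acquired resource automatically, in reverse acquisition order, even when the body throws:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the try-with-resources pattern suggested for
// getMaxAllocatedTableWrited. Resource is a stand-in for
// Connection/PreparedStatement so the example runs standalone.
public class TryWithResourcesSketch {
  static final List<String> closed = new ArrayList<>();

  static class Resource implements AutoCloseable {
    final String name;
    Resource(String name) { this.name = name; }
    @Override public void close() { closed.add(name); }
  }

  static List<String> run() {
    // Analogous in spirit to:
    //   try (Connection dbConn = getDbConn(...);
    //        PreparedStatement pStmt = dbConn.prepareStatement(...)) { ... }
    try (Resource dbConn = new Resource("dbConn");
         Resource pStmt = new Resource("pStmt")) {
      // work with pStmt here; no explicit close(...) in a finally block
    }
    return closed;
  }

  public static void main(String[] args) {
    // resources close in reverse order of acquisition
    System.out.println(run()); // [pStmt, dbConn]
  }
}
```

This removes the need for the null-initialized locals and the manual close(null, pStmt, dbConn) call in the finally block.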





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452858)
Time Spent: 6h 10m  (was: 6h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables too well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases when the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452857&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452857
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:49
Start Date: 30/Jun/20 07:49
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447480415



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse 
allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse 
getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws 
MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();

Review comment:
   lockInternal is required for Derby to simulate S4U; why use it here? 
unlockInternal is not needed either.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452857)
Time Spent: 6h  (was: 5h 50m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create 
> the table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452856&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452856
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:48
Start Date: 30/Jun/20 07:48
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447480415



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse 
allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();

Review comment:
   lockInternal is required for Derby to simulate S4U





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452856)
Time Spent: 5h 50m  (was: 5h 40m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create 
> the table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452854
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:46
Start Date: 30/Jun/20 07:46
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447478910



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse 
allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+List<String> params = Arrays.asList(dbName, tableName);
+String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);
+LOG.debug("Going to execute query <" + query.replaceAll("\\?", "{}") + ">", quoteString(dbName), quoteString(tableName));
+rs = pStmt.executeQuery();
+// If there is no record, we never allocated anything
+long maxWriteId = 0l;
+if (rs.next()) {
+  // The row contains the nextId not the previously allocated
+  maxWriteId = rs.getLong(1) - 1;
+}
+return new MaxAllocatedTableWriteIdResponse(maxWriteId);
+  } catch (SQLException e) {
+LOG.error("Exception during reading the max allocated writeId for dbName={}, tableName={}. Will retry if possible.", dbName, tableName, e);
+rollbackDBConn(dbConn);

Review comment:
   what is there to roll back? You only have a SELECT here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452854)
Time Spent: 5h 40m  (was: 5.5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create 
> the table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452852&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452852
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:43
Start Date: 30/Jun/20 07:43
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447477522



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse 
allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+List<String> params = Arrays.asList(dbName, tableName);
+String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";

Review comment:
   should we have a query constant?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452852)
Time Spent: 5.5h  (was: 5h 20m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create 
> the table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452850&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452850
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:41
Start Date: 30/Jun/20 07:41
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447476089



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnDbUtil.java
##
@@ -313,6 +313,41 @@ private static void resetTxnSequence(Connection conn, 
Statement stmt) throws SQL
 }
   }
 
+  /**
+   * Restarts the txnId sequence with the given seed value.
+   * It is the responsibility of the caller to not set the sequence backward.
+   * @param conn database connection
+   * @param stmt sql statement
+   * @param seedTxnId the seed value for the sequence
+   * @throws SQLException ex
+   */
+  public static void seedTxnSequence(Connection conn, Statement stmt, long 
seedTxnId) throws SQLException {
+String dbProduct = conn.getMetaData().getDatabaseProductName();
+DatabaseProduct databaseProduct = determineDatabaseProduct(dbProduct);
+switch (databaseProduct) {
+
+case DERBY:

Review comment:
   minor: I would probably create an EnumMap for SEED_FN and use the proper one 
based on the db type.
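The reviewer's EnumMap suggestion could look like the following standalone sketch. The nested `DatabaseProduct` enum is a stand-in for Hive's own enum, and the SQL templates are illustrative assumptions, not the statements Hive's `seedTxnSequence` actually issues:

```java
import java.util.EnumMap;

public class SeedTxnSequence {
  // Local stand-in for Hive's DatabaseProduct enum (illustrative subset).
  enum DatabaseProduct { DERBY, MYSQL, POSTGRES }

  // One sequence-restart statement template per product, looked up by db type
  // instead of a switch. Templates below are assumptions for this sketch.
  private static final EnumMap<DatabaseProduct, String> SEED_FN =
      new EnumMap<>(DatabaseProduct.class);
  static {
    SEED_FN.put(DatabaseProduct.DERBY,
        "ALTER TABLE \"TXNS\" ALTER COLUMN \"TXN_ID\" RESTART WITH %d");
    SEED_FN.put(DatabaseProduct.MYSQL,
        "ALTER TABLE TXNS AUTO_INCREMENT = %d");
    SEED_FN.put(DatabaseProduct.POSTGRES,
        "ALTER SEQUENCE txns_txn_id_seq RESTART WITH %d");
  }

  // Formats the restart statement for the given product and seed value.
  static String seedStatement(DatabaseProduct db, long seedTxnId) {
    String template = SEED_FN.get(db);
    if (template == null) {
      throw new IllegalArgumentException("No seed statement for " + db);
    }
    return String.format(template, seedTxnId);
  }

  public static void main(String[] args) {
    System.out.println(seedStatement(DatabaseProduct.DERBY, 5L));
  }
}
```

Compared with a `switch`, the map makes the per-product statements data rather than control flow, so adding a product is a one-line change.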





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452850)
Time Spent: 5h 20m  (was: 5h 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create 
> the table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452848&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452848
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:36
Start Date: 30/Jun/20 07:36
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447472989



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -383,6 +475,7 @@ public Void execute(int size) throws MetastoreException {
   partsToAdd.add(partition);
   lastBatch.add(part);
   addMsgs.add(String.format(addMsgFormat, 
part.getPartitionName()));
+  LOG.debug(String.format(addMsgFormat, part.getPartitionName()));

Review comment:
   why not log the content of addMsgs?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452848)
Time Spent: 5h 10m  (was: 5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create 
> the table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452846&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452846
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:28
Start Date: 30/Jun/20 07:28
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447468545



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;
   }
   if (getMsc() != null) {
 getMsc().close();
 msc = null;
   }
 }
+return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, 

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452841
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:17
Start Date: 30/Jun/20 07:17
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447462171



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;
   }
   if (getMsc() != null) {
 getMsc().close();
 msc = null;
   }
 }
+return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, 

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452838&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452838
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 30/Jun/20 07:13
Start Date: 30/Jun/20 07:13
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447033768



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');
+//split2 may be -1 if no statementId
+int split2 = rest.indexOf('_', split + 1);
+// We always want the second part (it is either the same or greater if 
it is a compacted delta)
+writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : 
Long
+.parseLong(rest.substring(split + 1, split2));
+  }
+  if (writeId > maxWriteId) {
+maxWriteId = writeId;
+  }
+  if (visibilityId > maxVisibilityId) {
+maxVisibilityId = visibilityId;
+  }
+}
+LOG.debug("Max writeId {}, max txnId {} found in partition {}", 
maxWriteId, maxVisibilityId,
+partPath.toUri().toString());
+res.setMaxWriteId(maxWriteId);
+res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   why not use regex with pattern matching? removeVisibilityTxnId probably 
wouldn't even be needed
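The regex approach the reviewer suggests could be sketched as below. `AcidDirNameParser` is a hypothetical standalone helper built from the directory names visible in the diff (`base_N`, `delta_X_Y[_stmt]`, `delete_delta_X_Y[_stmt]`, each with an optional `_vN` visibility suffix); it is not the code that was committed. A single pattern yields the max writeId and the visibility txnId, so a separate `removeVisibilityTxnId` step is unnecessary:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AcidDirNameParser {
  // group 1: prefix, group 2: first id, group 3: optional max writeId,
  // group 4: optional statementId, group 5: optional visibility txnId
  private static final Pattern ACID_DIR = Pattern.compile(
      "^(delete_delta|delta|base)_(\\d+)(?:_(\\d+))?(?:_(\\d+))?(?:_v(\\d+))?$");

  /** Returns {maxWriteId, visibilityTxnId}, or null if not an ACID dir name. */
  public static long[] parse(String dirName) {
    Matcher m = ACID_DIR.matcher(dirName);
    if (!m.matches()) {
      return null;
    }
    // base_N carries the writeId as N; delta_X_Y carries it as Y (the second
    // id, which is the same or greater for a compacted delta).
    long writeId = m.group(3) != null ? Long.parseLong(m.group(3))
                                      : Long.parseLong(m.group(2));
    long visibilityTxnId = m.group(5) != null ? Long.parseLong(m.group(5)) : 0L;
    return new long[] {writeId, visibilityTxnId};
  }

  public static void main(String[] args) {
    long[] r = parse("delta_0000003_0000004_0005_v0000010");
    System.out.println(r[0] + " " + r[1]); // prints: 4 10
  }
}
```

Non-matching names (e.g. staging or hidden directories) return null and can simply be skipped by the caller, mirroring the `continue` for non-directories in the patch.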





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452838)
Time Spent: 4h 40m  (was: 4.5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables very well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writeIds for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases 

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452446&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452446
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:21
Start Date: 29/Jun/20 15:21
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447053621



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;
   }
   if (getMsc() != null) {
 getMsc().close();
 msc = null;
   }
 }
+return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, 
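The quoted diff ends just as the extracted helper begins, so the body of `closeTxn` is not shown. Based on the inline commit/abort logic it replaces (commit the transaction when the repair succeeded, abort it otherwise, and report failure if the metastore call throws), a plausible sketch follows. `MetastoreClient` is a hypothetical stand-in for the real `IMetaStoreClient`, and everything past the first parameter of the signature is an assumption:

```java
import java.util.Collections;
import java.util.List;

public class CloseTxnSketch {

  /** Hypothetical stand-in for the metastore client calls used by Msck. */
  interface MetastoreClient {
    void commitTxn(long txnId) throws Exception;
    void abortTxns(List<Long> txnIds) throws Exception;
  }

  private final MetastoreClient msc;

  CloseTxnSketch(MetastoreClient msc) {
    this.msc = msc;
  }

  /**
   * Commit the txn if the repair succeeded, abort it otherwise.
   * Returns false when the commit/abort itself fails, mirroring the
   * replaced inline blocks that set ret = 1 on exception.
   */
  boolean closeTxn(String qualifiedTableName, boolean success, long txnId) {
    try {
      if (success) {
        msc.commitTxn(txnId);
      } else {
        msc.abortTxns(Collections.singletonList(txnId));
      }
    } catch (Exception e) {
      // The real code logs a warning naming txnId and qualifiedTableName here.
      return false;
    }
    return true;
  }
}
```

The caller then folds the result into its success flag, as in `success = closeTxn(qualifiedTableName, success, txnId) && success;` from the diff above.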

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452444&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452444
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:20
Start Date: 29/Jun/20 15:20
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447052663



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452442&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452442
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:18
Start Date: 29/Jun/20 15:18
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447050970



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452432&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452432
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:11
Start Date: 29/Jun/20 15:11
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447046016



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452430&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452430
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:09
Start Date: 29/Jun/20 15:09
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447044866



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452428&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452428
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:09
Start Date: 29/Jun/20 15:09
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447044667



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452422&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452422
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:05
Start Date: 29/Jun/20 15:05
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447041559



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;

Review comment:
   you can do success &= writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds)
   for readability
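The suggested `success &= writeResultToFile(...)` is behaviorally equivalent here: `&=` on booleans uses the non-short-circuiting `&`, so the right-hand call still executes even when `success` is already `false` — just like the original `success = writeResultToFile(...) && success`, which also always invokes the method because the call sits on the left of `&&`. A minimal illustration (class and method names hypothetical):

```java
public class AccumulateSuccess {
  public static int calls = 0;

  // Stand-in for a step like writeResultToFile(...) that reports failure.
  public static boolean step() {
    calls++;
    return false;
  }

  public static void main(String[] args) {
    boolean success = false;
    success &= step();           // non-short-circuit: step() runs even though success is false
    success = success && step(); // accumulator on the LEFT of && short-circuits: step() is skipped
    System.out.println(calls);   // prints 1
  }
}
```

So the refactor keeps the original semantics (every step runs, failures accumulate) while reading more compactly; writing `success = success && f()` instead would silently skip later steps after the first failure.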







Issue Time Tracking
---

Worklog Id: (was: 452422)
Time Spent: 3h 20m  (was: 3h 10m)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452425&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452425
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:05
Start Date: 29/Jun/20 15:05
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447042125



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;

Review comment:
   same  success &= closeTxn(qualifiedTableName, success, txnId)






[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452416&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452416
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 15:00
Start Date: 29/Jun/20 15:00
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447037968



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
+  if (result!=null) {

Review comment:
   not formatted





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452416)
Time Spent: 3h 10m  (was: 3h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452415&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452415
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 14:59
Start Date: 29/Jun/20 14:59
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447037520



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {

Review comment:
   you can remove 1 nesting level





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452415)
Time Spent: 3h  (was: 2h 50m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452412&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452412
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 14:55
Start Date: 29/Jun/20 14:55
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447033768



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');
+//split2 may be -1 if no statementId
+int split2 = rest.indexOf('_', split + 1);
+// We always want the second part (it is either the same or greater if 
it is a compacted delta)
+writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : 
Long
+.parseLong(rest.substring(split + 1, split2));
+  }
+  if (writeId > maxWriteId) {
+maxWriteId = writeId;
+  }
+  if (visibilityId > maxVisibilityId) {
+maxVisibilityId = visibilityId;
+  }
+}
+LOG.debug("Max writeId {}, max txnId {} found in partition {}", 
maxWriteId, maxVisibilityId,
+partPath.toUri().toString());
+res.setMaxWriteId(maxWriteId);
+res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   why not use regex? removeVisibilityTxnId probably wouldn't even be needed
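For illustration, a minimal standalone sketch of the regex approach the reviewer suggests: a single pattern per directory kind extracts both the writeId and the optional visibility txnId in one pass, so no separate strip step is needed. The class and method names are hypothetical; the `base_<writeId>[_v<txnId>]` and `[delete_]delta_<min>_<max>[_<stmt>][_v<txnId>]` naming mirrors the prefixes quoted in the diff.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DeltaNameParser {
    // base_<writeId> with an optional _v<visibilityTxnId> suffix.
    private static final Pattern BASE =
        Pattern.compile("base_(\\d+)(?:_v(\\d+))?");
    // [delete_]delta_<minWid>_<maxWid> with optional statementId and _v suffix.
    private static final Pattern DELTA =
        Pattern.compile("(?:delete_)?delta_(\\d+)_(\\d+)(?:_\\d+)?(?:_v(\\d+))?");

    /** Returns {writeId, visibilityTxnId}, or null if the name matches neither form. */
    static long[] parse(String folder) {
        Matcher m = BASE.matcher(folder);
        if (m.matches()) {
            long vis = m.group(2) == null ? 0L : Long.parseLong(m.group(2));
            return new long[] { Long.parseLong(m.group(1)), vis };
        }
        m = DELTA.matcher(folder);
        if (m.matches()) {
            long vis = m.group(3) == null ? 0L : Long.parseLong(m.group(3));
            // The second bound is the one we want (same or greater for compacted deltas).
            return new long[] { Long.parseLong(m.group(2)), vis };
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parse("delta_0000101_0000101"))); // [101, 0]
        System.out.println(java.util.Arrays.toString(parse("base_0000010_v0000123"))); // [10, 123]
    }
}
```

With `matches()` anchoring the whole name, non-acid directories simply return null, replacing both the prefix checks and the `removeVisibilityTxnId` helper.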





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452412)
Time Spent: 2h 50m  (was: 2h 40m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452411&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452411
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 14:54
Start Date: 29/Jun/20 14:54
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447033768



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');
+//split2 may be -1 if no statementId
+int split2 = rest.indexOf('_', split + 1);
+// We always want the second part (it is either the same or greater if 
it is a compacted delta)
+writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : 
Long
+.parseLong(rest.substring(split + 1, split2));
+  }
+  if (writeId > maxWriteId) {
+maxWriteId = writeId;
+  }
+  if (visibilityId > maxVisibilityId) {
+maxVisibilityId = visibilityId;
+  }
+}
+LOG.debug("Max writeId {}, max txnId {} found in partition {}", 
maxWriteId, maxVisibilityId,
+partPath.toUri().toString());
+res.setMaxWriteId(maxWriteId);
+res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   why not use regex?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452411)
Time Spent: 2h 40m  (was: 2.5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real-life use cases could be:
>  * Copy data files from one cluster to another with a different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452396&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452396
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 14:47
Start Date: 29/Jun/20 14:47
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447028756



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');

Review comment:
   why not use rest.split('_')

##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set<Path> partPaths,
 LOG.debug("Number of partitions not in metastore : " + 
result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, 
CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, 
HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || 
folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : 
DELTA_PREFIX).length());
+int split = rest.indexOf('_');

Review comment:
   why not use rest.split('_')?
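For illustration, a standalone sketch of the `split`-based reading of the same logic. `rest` is the folder name with the `delta_`/`delete_delta_` prefix already removed, as in the quoted diff; the class and method names are hypothetical.

```java
public class SplitDemo {
    // rest is e.g. "0000001_0000003" or "0000001_0000003_0005" (with statementId).
    // parts[1] is always the max writeId bound, which is the one the patch keeps
    // (it is the same or greater for compacted deltas).
    static long maxWriteId(String rest) {
        String[] parts = rest.split("_");
        return Long.parseLong(parts[1]);
    }

    public static void main(String[] args) {
        System.out.println(maxWriteId("0000001_0000003"));      // 3, no statementId
        System.out.println(maxWriteId("0000001_0000003_0005")); // 3, with statementId
    }
}
```

`split` allocates an array where the `indexOf('_')` arithmetic does not, but it reads more clearly and handles the optional statementId segment without the `split2 == -1` special case.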





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452396)
Time Spent: 2.5h  (was: 2h 20m)

> 

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452364&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452364
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 14:14
Start Date: 29/Jun/20 14:14
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447003871



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -111,24 +120,24 @@ public IMetaStoreClient getMsc() {
* @param partitions
*  List of partition name value pairs, if null or empty check all
*  partitions
-   * @param table
-   * @param result
-   *  Fill this with the results of the check
+   * @param table Table we want to run the check for.
+   * @return Results of the check
* @throws MetastoreException
*   Failed to get required information from the metastore.
* @throws IOException
*   Most likely filesystem related
*/
-  public void checkMetastore(String catName, String dbName, String tableName,
-  List<Map<String, String>> partitions, Table table, CheckResult result)
+  public CheckResult checkMetastore(String catName, String dbName, String 
tableName,
+  List<Map<String, String>> partitions, Table table)
   throws MetastoreException, IOException {
-
+CheckResult result = new CheckResult();
 if (dbName == null || "".equalsIgnoreCase(dbName)) {
   dbName = Warehouse.DEFAULT_DATABASE_NAME;
 }
 
 try {
   if (tableName == null || "".equals(tableName)) {
+// TODO: I do not think this is used by anything other than tests

Review comment:
   should we answer this question in a current patch?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452364)
Time Spent: 2h 20m  (was: 2h 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452357&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452357
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 14:11
Start Date: 29/Jun/20 14:11
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447001420



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
##
@@ -8322,6 +8322,22 @@ public AllocateTableWriteIdsResponse 
allocate_table_write_ids(
   return response;
 }
 
+@Override
+public MaxAllocatedTableWriteIdResponse 
get_max_allocated_table_write_id(MaxAllocatedTableWriteIdRequest rqst)

Review comment:
   weird mix of Camel case and underscore?

##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
##
@@ -8322,6 +8322,22 @@ public AllocateTableWriteIdsResponse 
allocate_table_write_ids(
   return response;
 }
 
+@Override
+public MaxAllocatedTableWriteIdResponse 
get_max_allocated_table_write_id(MaxAllocatedTableWriteIdRequest rqst)

Review comment:
   weird mix of Camel case and underscore





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452357)
Time Spent: 2h 10m  (was: 2h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=452354&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-452354
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 29/Jun/20 14:11
Start Date: 29/Jun/20 14:11
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447001420



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
##
@@ -8322,6 +8322,22 @@ public AllocateTableWriteIdsResponse 
allocate_table_write_ids(
   return response;
 }
 
+@Override
+public MaxAllocatedTableWriteIdResponse 
get_max_allocated_table_write_id(MaxAllocatedTableWriteIdRequest rqst)

Review comment:
   why not Camel case?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 452354)
Time Spent: 2h  (was: 1h 50m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451528&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451528
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 12:10
Start Date: 26/Jun/20 12:10
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446134506



##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -74,21 +76,21 @@
   @Before
   public void setUp() throws Exception {
 hive = Hive.get();
-
hive.getConf().setIntVar(HiveConf.ConfVars.METASTORE_FS_HANDLER_THREADS_COUNT, 
15);
-hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, 
"throw");
+
hive.getConf().set(MetastoreConf.ConfVars.FS_HANDLER_THREADS_COUNT.getVarname(),
 "15");

Review comment:
   why not setIntVar?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451528)
Time Spent: 1h 50m  (was: 1h 40m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can 
> find and add new partitions the same way as for non-transactional tables, but 
> since the writeId differences are not handled, the data cannot be read back 
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying 
> data are not conflicting. If the HMS does not contain allocated writes for 
> the table, we can seed the table with the writeIds read from the directory 
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451527&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451527
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 12:07
Start Date: 26/Jun/20 12:07
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446143608



##
File path: 
ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -252,37 +241,165 @@ public void testInvalidPartitionKeyName()
   @Test
   public void testSkipInvalidPartitionKeyName()
 throws HiveException, AlreadyExistsException, IOException, 
MetastoreException {
-hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, 
"skip");
+
hive.getConf().set(MetastoreConf.ConfVars.MSCK_PATH_VALIDATION.getVarname(), 
"skip");
 checker = new HiveMetaStoreChecker(msc, hive.getConf());
-Table table = createTestTable();
+Table table = createTestTable(false);
 List<Partition> partitions = hive.getPartitions(table);
 assertEquals(2, partitions.size());
 // add a fake partition dir on fs
 fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
-Path fakePart =
-new Path(table.getDataLocation().toString(), 
"fakedate=2009-01-01/fakecity=sanjose");
-fs.mkdirs(fakePart);
-fs.deleteOnExit(fakePart);
+addFolderToPath(fs, 
table.getDataLocation().toString(),"fakedate=2009-01-01/fakecity=sanjose");
 createPartitionsDirectoriesOnFS(table, 2);
-CheckResult result = new CheckResult();
-checker.checkMetastore(catName, dbName, tableName, null, null, result);
+CheckResult result = checker.checkMetastore(catName, dbName, tableName, 
null, null);
 assertEquals(Collections. emptySet(), result.getTablesNotInMs());
 assertEquals(Collections. emptySet(), result.getTablesNotOnFs());
 assertEquals(Collections. emptySet(), 
result.getPartitionsNotOnFs());
 // only 2 valid partitions should be added
 assertEquals(2, result.getPartitionsNotInMs().size());
   }
 
-  private Table createTestTable() throws HiveException, AlreadyExistsException {
+  /*
+   * Tests that the checker handles normal delta dirs in the partition folder
+   * without throwing HiveException.
+   */
+  @Test
+  public void testAddPartitionNormalDeltas() throws Exception {
+    Table table = createTestTable(true);
+    List<Partition> partitions = hive.getPartitions(table);
+    assertEquals(2, partitions.size());
+    // add a partition dir on fs
+    fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
+    Path newPart = addFolderToPath(fs, table.getDataLocation().toString(),
+        partDateName + "=2017-01-01/" + partCityName + "=paloalto");
+
+    // Add a few deltas
+    addFolderToPath(fs, newPart.toString(), "delta_001_001_");
+    addFolderToPath(fs, newPart.toString(), "delta_010_010_");
+    addFolderToPath(fs, newPart.toString(), "delta_101_101_");
+    CheckResult result = checker.checkMetastore(catName, dbName, tableName, null, null);
+    assertEquals(Collections.<CheckResult.PartitionResult>emptySet(), result.getPartitionsNotOnFs());
+    assertEquals(1, result.getPartitionsNotInMs().size());
+    // Found the highest writeId
+    assertEquals(101, result.getPartitionsNotInMs().iterator().next().getMaxWriteId());
+    assertEquals(0, result.getPartitionsNotInMs().iterator().next().getMaxTxnId());
+  }
+  /*
+   * Tests that the checker handles compacted delta dirs in the partition folder
+   * without throwing HiveException.
+   */
+  @Test
+  public void testAddPartitionCompactedDeltas() throws Exception {
+    Table table = createTestTable(true);
+    List<Partition> partitions = hive.getPartitions(table);
+    assertEquals(2, partitions.size());
+    // add a partition dir on fs
+    fs = partitions.get(0).getDataLocation().getFileSystem(hive.getConf());
+    Path newPart = addFolderToPath(fs, table.getDataLocation().toString(),
+        partDateName + "=2017-01-01/" + partCityName + "=paloalto");
+
+    // Add a few deltas
+    addFolderToPath(fs, newPart.toString(), "delta_001_001_");
+    addFolderToPath(fs, newPart.toString(), "delta_010_015_v067");
+    addFolderToPath(fs, newPart.toString(), "delta_101_120_v087");
+    CheckResult result = checker.checkMetastore(catName, dbName, tableName, null, null);
+    assertEquals(Collections.<CheckResult.PartitionResult>emptySet(), result.getPartitionsNotOnFs());
+    assertEquals(1, result.getPartitionsNotInMs().size());
+    // Found the highest writeId
+    assertEquals(120, result.getPartitionsNotInMs().iterator().next().getMaxWriteId());
+    assertEquals(87, result.getPartitionsNotInMs().iterator().next().getMaxTxnId());
+  }
+  @Test
+  public void testAddPartitionCompactedBase() throws 
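The expectations in these tests follow from the ACID delta directory naming convention, where a name like delta_010_015_v067 encodes a writeId range (10..15) plus a visibility txnId suffix from compaction. A minimal standalone sketch of recovering the highest writeId and txnId from such names (an illustration only, not Hive's actual AcidUtils parser; the helper name maxIds is hypothetical):

```java
import java.util.List;

public class DeltaNameSketch {
    // Returns {maxWriteId, maxTxnId} across ACID delta directory names such as
    // delta_010_015_v067 (writeIds 10..15, made visible by compactor txn 67).
    public static long[] maxIds(List<String> deltaDirs) {
        long maxWriteId = 0;
        long maxTxnId = 0;
        for (String name : deltaDirs) {
            String[] parts = name.split("_");
            if (parts.length < 3 || !parts[0].startsWith("delta")) {
                continue; // not a plain delta directory
            }
            // parts[2] is the upper bound of the writeId range
            maxWriteId = Math.max(maxWriteId, Long.parseLong(parts[2]));
            // compacted deltas carry a visibility suffix like v067
            if (parts.length > 3 && parts[3].startsWith("v")) {
                maxTxnId = Math.max(maxTxnId, Long.parseLong(parts[3].substring(1)));
            }
        }
        return new long[] {maxWriteId, maxTxnId};
    }

    public static void main(String[] args) {
        long[] ids = maxIds(List.of("delta_001_001", "delta_010_015_v067", "delta_101_120_v087"));
        System.out.println(ids[0] + " " + ids[1]); // prints: 120 87
    }
}
```

With only uncompacted deltas (no vNNN suffix) the txnId stays 0, matching the normal-delta test expectation above.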

[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451514&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451514
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:46
Start Date: 26/Jun/20 11:46
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446134506



##
File path: ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java
##
@@ -74,21 +76,21 @@
   @Before
   public void setUp() throws Exception {
 hive = Hive.get();
-    hive.getConf().setIntVar(HiveConf.ConfVars.METASTORE_FS_HANDLER_THREADS_COUNT, 15);
-    hive.getConf().set(HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION.varname, "throw");
+    hive.getConf().set(MetastoreConf.ConfVars.FS_HANDLER_THREADS_COUNT.getVarname(), "15");

Review comment:
   setIntVar





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451514)
Time Spent: 1.5h  (was: 1h 20m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451512&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451512
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:44
Start Date: 26/Jun/20 11:44
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446133519



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TxnCommandsBaseForTests.java
##
@@ -162,9 +163,23 @@ protected String getWarehouseDir() {
* takes raw data and turns it into a string as if from Driver.getResults()
* sorts rows in dictionary order
*/
-  List<String> stringifyValues(int[][] rowsIn) {
-    return TestTxnCommands2.stringifyValues(rowsIn);
+  public static List<String> stringifyValues(int[][] rowsIn) {
+    assert rowsIn.length > 0;
+    int[][] rows = rowsIn.clone();
+    Arrays.sort(rows, new TestTxnCommands2.RowComp());

Review comment:
   move RowComp from TestTxnCommands2 to TxnCommandsBaseForTests
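To illustrate what the helper under discussion does, here is a self-contained sketch of the sort-then-tab-join behavior. The comparator is simplified to numeric lexicographic order via Arrays::compare, which is an assumption about what RowComp implements:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class StringifySketch {
    // Sorts the rows lexicographically, then renders each row as
    // tab-separated values, mimicking Driver.getResults() output.
    public static List<String> stringifyValues(int[][] rowsIn) {
        int[][] rows = rowsIn.clone();
        Arrays.sort(rows, Arrays::compare); // assumed equivalent to RowComp
        List<String> rs = new ArrayList<>();
        for (int[] row : rows) {
            StringBuilder sb = new StringBuilder();
            for (int value : row) {
                sb.append(value).append('\t');
            }
            sb.setLength(sb.length() - 1); // drop the trailing tab
            rs.add(sb.toString());
        }
        return rs;
    }

    public static void main(String[] args) {
        // {2,1} sorts after {1,3}, so the output order is "1\t3", "2\t1"
        System.out.println(stringifyValues(new int[][] {{2, 1}, {1, 3}}));
    }
}
```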





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451512)
Time Spent: 1h 20m  (was: 1h 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451511&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451511
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:42
Start Date: 26/Jun/20 11:42
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446131988



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
##
@@ -2209,20 +2209,7 @@ public void testAcidOrcWritePreservesFieldNames() throws Exception {
 * sorts rows in dictionary order
 */
   static List<String> stringifyValues(int[][] rowsIn) {
-    assert rowsIn.length > 0;
-    int[][] rows = rowsIn.clone();
-    Arrays.sort(rows, new RowComp());
-    List<String> rs = new ArrayList<String>();
-    for (int[] row : rows) {
-      assert row.length > 0;
-      StringBuilder sb = new StringBuilder();
-      for (int value : row) {
-        sb.append(value).append("\t");
-      }
-      sb.setLength(sb.length() - 1);
-      rs.add(sb.toString());
-    }
-    return rs;
+    return TxnCommandsBaseForTests.stringifyValues(rowsIn);

Review comment:
   could you extend TestTxnCommands2 from TxnCommandsBaseForTests and 
remove static?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451511)
Time Spent: 1h 10m  (was: 1h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451510&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451510
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:40
Start Date: 26/Jun/20 11:40
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446131988



##
File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java
##
@@ -2209,20 +2209,7 @@ public void testAcidOrcWritePreservesFieldNames() throws Exception {
 * sorts rows in dictionary order
 */
   static List<String> stringifyValues(int[][] rowsIn) {
-    assert rowsIn.length > 0;
-    int[][] rows = rowsIn.clone();
-    Arrays.sort(rows, new RowComp());
-    List<String> rs = new ArrayList<String>();
-    for (int[] row : rows) {
-      assert row.length > 0;
-      StringBuilder sb = new StringBuilder();
-      for (int value : row) {
-        sb.append(value).append("\t");
-      }
-      sb.setLength(sb.length() - 1);
-      rs.add(sb.toString());
-    }
-    return rs;
+    return TxnCommandsBaseForTests.stringifyValues(rowsIn);

Review comment:
   remove it and use static call





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451510)
Time Spent: 1h  (was: 50m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451508
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:27
Start Date: 26/Jun/20 11:27
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446126904



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot getTableSnapshot(Configuration conf,
     long writeId = -1;
     ValidWriteIdList validWriteIdList = null;
 
-    HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-    String fullTableName = getFullTableName(dbName, tblName);
-    if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-      validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-      if (isStatsUpdater) {
-        writeId = SessionState.get().getTxnMgr() != null ?
-            SessionState.get().getTxnMgr().getAllocatedTableWriteId(dbName, tblName) : -1;
-        if (writeId < 1) {
-          // TODO: this is not ideal... stats updater that doesn't have write ID is currently
-          //       "create table"; writeId would be 0/-1 here. No need to call this w/true.
-          LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-              dbName, tblName, writeId);
+    if (SessionState.get() != null) {
+      HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+      String fullTableName = getFullTableName(dbName, tblName);
+      if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+        validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+        if (isStatsUpdater) {
+          writeId = sessionTxnMgr != null ? sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;
+          if (writeId < 1) {

Review comment:
   is it ever a valid condition?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451508)
Time Spent: 50m  (was: 40m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451507&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451507
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:24
Start Date: 26/Jun/20 11:24
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446125639



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot getTableSnapshot(Configuration conf,
     long writeId = -1;
     ValidWriteIdList validWriteIdList = null;
 
-    HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-    String fullTableName = getFullTableName(dbName, tblName);
-    if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-      validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-      if (isStatsUpdater) {
-        writeId = SessionState.get().getTxnMgr() != null ?
-            SessionState.get().getTxnMgr().getAllocatedTableWriteId(dbName, tblName) : -1;
-        if (writeId < 1) {
-          // TODO: this is not ideal... stats updater that doesn't have write ID is currently
-          //       "create table"; writeId would be 0/-1 here. No need to call this w/true.
-          LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-              dbName, tblName, writeId);
+    if (SessionState.get() != null) {

Review comment:
   i think it can't be null, it returns ThreadLocal variable
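For context on this review question: a ThreadLocal with no initialValue does return null from get() on any thread that never called set(), so a null check on such a getter can matter for threads outside a Hive session. A minimal sketch (the field name SESSION is a stand-in, not Hive's actual SessionState internals):

```java
public class ThreadLocalNullSketch {
    // SessionState is typically held in a ThreadLocal; without an
    // initialValue, get() yields null on threads that never set it.
    static final ThreadLocal<String> SESSION = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        SESSION.set("hive-session");
        System.out.println(SESSION.get()); // prints: hive-session

        // A different thread sees null: ThreadLocal state is per-thread.
        Thread t = new Thread(() -> System.out.println(SESSION.get()));
        t.start();
        t.join();
    }
}
```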





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451507)
Time Spent: 40m  (was: 0.5h)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451503&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451503
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:20
Start Date: 26/Jun/20 11:20
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446123907



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot getTableSnapshot(Configuration conf,
     long writeId = -1;
     ValidWriteIdList validWriteIdList = null;
 
-    HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-    String fullTableName = getFullTableName(dbName, tblName);
-    if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-      validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-      if (isStatsUpdater) {
-        writeId = SessionState.get().getTxnMgr() != null ?
-            SessionState.get().getTxnMgr().getAllocatedTableWriteId(dbName, tblName) : -1;
-        if (writeId < 1) {
-          // TODO: this is not ideal... stats updater that doesn't have write ID is currently
-          //       "create table"; writeId would be 0/-1 here. No need to call this w/true.
-          LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-              dbName, tblName, writeId);
+    if (SessionState.get() != null) {
+      HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+      String fullTableName = getFullTableName(dbName, tblName);
+      if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+        validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+        if (isStatsUpdater) {
+          writeId = sessionTxnMgr != null ? sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;

Review comment:
   redundant check, see if condition above 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451503)
Time Spent: 20m  (was: 10m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=451504&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-451504
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 26/Jun/20 11:20
Start Date: 26/Jun/20 11:20
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r446123907



##
File path: ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java
##
@@ -2392,33 +2392,29 @@ public static TableSnapshot getTableSnapshot(Configuration conf,
     long writeId = -1;
     ValidWriteIdList validWriteIdList = null;
 
-    HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
-    String fullTableName = getFullTableName(dbName, tblName);
-    if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
-      validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
-      if (isStatsUpdater) {
-        writeId = SessionState.get().getTxnMgr() != null ?
-            SessionState.get().getTxnMgr().getAllocatedTableWriteId(dbName, tblName) : -1;
-        if (writeId < 1) {
-          // TODO: this is not ideal... stats updater that doesn't have write ID is currently
-          //       "create table"; writeId would be 0/-1 here. No need to call this w/true.
-          LOG.debug("Stats updater for {}.{} doesn't have a write ID ({})",
-              dbName, tblName, writeId);
+    if (SessionState.get() != null) {
+      HiveTxnManager sessionTxnMgr = SessionState.get().getTxnMgr();
+      String fullTableName = getFullTableName(dbName, tblName);
+      if (sessionTxnMgr != null && sessionTxnMgr.getCurrentTxnId() > 0) {
+        validWriteIdList = getTableValidWriteIdList(conf, fullTableName);
+        if (isStatsUpdater) {
+          writeId = sessionTxnMgr != null ? sessionTxnMgr.getAllocatedTableWriteId(dbName, tblName) : -1;

Review comment:
   redundant check (sessionTxnMgr != null), see if condition above 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 451504)
Time Spent: 0.5h  (was: 20m)

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23671) MSCK repair should handle transactional tables in certain usecases

2020-06-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23671?focusedWorklogId=443663&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-443663
 ]

ASF GitHub Bot logged work on HIVE-23671:
-

Author: ASF GitHub Bot
Created on: 10/Jun/20 11:07
Start Date: 10/Jun/20 11:07
Worklog Time Spent: 10m 
  Work Description: pvargacl opened a new pull request #1087:
URL: https://github.com/apache/hive/pull/1087


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HIVE-X: Fix a typo in YYY)
   For more details, please see 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 443663)
Remaining Estimate: 0h
Time Spent: 10m

> MSCK repair should handle transactional tables in certain usecases
> --
>
> Key: HIVE-23671
> URL: https://issues.apache.org/jira/browse/HIVE-23671
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The MSCK REPAIR tool does not handle transactional tables well. It can
> find and add new partitions the same way as for non-transactional tables, but
> since the writeId differences are not handled, the data cannot be read back
> from the new partitions.
> We could handle some use cases where the writeIds in the HMS and the underlying
> data are not conflicting. If the HMS does not contain allocated writeIds for
> the table, we can seed the table with the writeIds read from the directory
> structure.
> Real life use cases could be:
>  * Copy data files from one cluster to another with different HMS, create the 
> table and call MSCK REPAIR
>  * If the HMS db is lost, recreate the table and call MSCK REPAIR
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)