[GitHub] [hive] pkumarsinha commented on a change in pull request #1193: HIVE-23784 : Fix Replication Metrics Sink to DB

2020-06-30 Thread GitBox


pkumarsinha commented on a change in pull request #1193:
URL: https://github.com/apache/hive/pull/1193#discussion_r448113867



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/repl/metric/MetricSink.java
##
@@ -64,8 +64,10 @@ public static MetricSink getInstance() {
   public synchronized void init(HiveConf conf) {
 if (!isInitialised) {
   this.conf = conf;
-  this.executorService.schedule(new MetricSinkWriter(conf), 
getFrequencyInSecs(), TimeUnit.SECONDS);
+  this.executorService.scheduleAtFixedRate(new MetricSinkWriter(conf), 0,
+getFrequencyInSecs(), TimeUnit.SECONDS);

Review comment:
   Could the frequency be in minutes only? The conversion to seconds in getFrequencyInSecs() wouldn't be required then.
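The fix under review replaces a one-shot `schedule` call with `scheduleAtFixedRate`, which is what makes the sink flush repeatedly instead of exactly once. A minimal, self-contained sketch of the behavioral difference (the counters and delays are illustrative, not taken from MetricSink):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SchedulerDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger oneShot = new AtomicInteger();
        AtomicInteger repeated = new AtomicInteger();

        // schedule(...) fires exactly once after the delay: the bug in the
        // original code, since the metrics writer then never runs again.
        executor.schedule(oneShot::incrementAndGet, 100, TimeUnit.MILLISECONDS);

        // scheduleAtFixedRate(...) fires every period: the behavior the
        // patch introduces (initial delay 0, then once per frequency interval).
        executor.scheduleAtFixedRate(repeated::incrementAndGet, 0, 100, TimeUnit.MILLISECONDS);

        Thread.sleep(450);
        executor.shutdown();
        System.out.println(oneShot.get());                  // the one-shot task ran once
        System.out.println(repeated.get() > oneShot.get()); // the periodic task ran more often
    }
}
```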





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org
For additional commands, e-mail: gitbox-h...@hive.apache.org



[GitHub] [hive] github-actions[bot] closed pull request #977: HIVE-23040 : Checkpointing for repl dump incremental phase

2020-06-30 Thread GitBox


github-actions[bot] closed pull request #977:
URL: https://github.com/apache/hive/pull/977


   






[GitHub] [hive] github-actions[bot] closed pull request #987: HIVE-23235 Fix checkpointing for ORC Tables

2020-06-30 Thread GitBox


github-actions[bot] closed pull request #987:
URL: https://github.com/apache/hive/pull/987


   






[GitHub] [hive] kishendas commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


kishendas commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r448023056



##
File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStore.java
##
@@ -1566,160 +1643,6 @@ private Table createUnpartitionedTableObject(Database 
db) {
 return tbl;
   }
 
-  private TableAndColStats createUnpartitionedTableObjectWithColStats(Database 
db) {

Review comment:
   Ok, thanks for the cleanup. 








[GitHub] [hive] kishendas commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


kishendas commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r448023085



##
File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStore.java
##
@@ -136,17 +129,19 @@
 MetaStoreTestUtils.setConfForStandloneMode(conf);
 ObjectStore objectStore = new ObjectStore();
 objectStore.setConf(conf);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db1Utbl1.getDbName(), 
db1Utbl1.getTableName());
-Deadline.startTimer("");
-objectStore.dropPartitions(DEFAULT_CATALOG_NAME, db1Ptbl1.getDbName(), 
db1Ptbl1.getTableName(), db1Ptbl1PtnNames);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db1Ptbl1.getDbName(), 
db1Ptbl1.getTableName());
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db2Utbl1.getDbName(), 
db2Utbl1.getTableName());
-Deadline.startTimer("");
-objectStore.dropPartitions(DEFAULT_CATALOG_NAME, db2Ptbl1.getDbName(), 
db2Ptbl1.getTableName(), db2Ptbl1PtnNames);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db2Ptbl1.getDbName(), 
db2Ptbl1.getTableName());
-objectStore.dropDatabase(DEFAULT_CATALOG_NAME, db1.getName());
-objectStore.dropDatabase(DEFAULT_CATALOG_NAME, db2.getName());
+for (String clg : objectStore.getCatalogs()) {

Review comment:
   Sounds good.








[GitHub] [hive] zabetak commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


zabetak commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447999634



##
File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStore.java
##
@@ -1566,160 +1643,6 @@ private Table createUnpartitionedTableObject(Database 
db) {
 return tbl;
   }
 
-  private TableAndColStats createUnpartitionedTableObjectWithColStats(Database 
db) {

Review comment:
   This is not a test case; it is just a private, unused method.








[GitHub] [hive] zabetak commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


zabetak commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447999118



##
File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStore.java
##
@@ -136,17 +129,19 @@
 MetaStoreTestUtils.setConfForStandloneMode(conf);
 ObjectStore objectStore = new ObjectStore();
 objectStore.setConf(conf);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db1Utbl1.getDbName(), 
db1Utbl1.getTableName());
-Deadline.startTimer("");
-objectStore.dropPartitions(DEFAULT_CATALOG_NAME, db1Ptbl1.getDbName(), 
db1Ptbl1.getTableName(), db1Ptbl1PtnNames);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db1Ptbl1.getDbName(), 
db1Ptbl1.getTableName());
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db2Utbl1.getDbName(), 
db2Utbl1.getTableName());
-Deadline.startTimer("");
-objectStore.dropPartitions(DEFAULT_CATALOG_NAME, db2Ptbl1.getDbName(), 
db2Ptbl1.getTableName(), db2Ptbl1PtnNames);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db2Ptbl1.getDbName(), 
db2Ptbl1.getTableName());
-objectStore.dropDatabase(DEFAULT_CATALOG_NAME, db1.getName());
-objectStore.dropDatabase(DEFAULT_CATALOG_NAME, db2.getName());
+for (String clg : objectStore.getCatalogs()) {

Review comment:
   The `setUp` method adds things to the Metastore that are not necessarily needed by every test, and this was the case even before my changes. In the new test, I added some more tables/columns because I think it made things easier to understand, at least for me. If I were to follow the current pattern I would have to add the tables, cols, partitions, etc., in the `setUp` method and then drop them in the `teardown` method, but then they would be visible to all other tests as well.
   
   I like the philosophy where unit tests are minimal (least possible state), so I would prefer that every test start with a clean Metastore and end with a clean Metastore. I didn't change the `setUp` in order not to introduce too many unrelated changes, but I made one step forward in the `teardown`.
   
   If every test in this class used the same state, then I would definitely agree that the `setUp` and `teardown` methods should stay the way they are right now.
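The catalog-driven cleanup the diff introduces can be illustrated with a toy in-memory store. The real `ObjectStore` API (getCatalogs, dropDatabase, etc.) is richer than this, so the following is only a sketch of the "enumerate whatever exists and drop it" philosophy, not the actual test code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TeardownSketch {
    public static void main(String[] args) {
        // Toy stand-in for the metastore: catalog name -> databases it holds.
        Map<String, List<String>> store = new HashMap<>();
        store.put("hive", new ArrayList<>(Arrays.asList("db1", "db2")));
        store.put("spark", new ArrayList<>(Arrays.asList("db3")));

        // Catalog-driven teardown: enumerate whatever exists and drop it,
        // so no test has to remember exactly which objects it created.
        for (String clg : new ArrayList<>(store.keySet())) {
            store.get(clg).clear(); // drop every database in this catalog
        }

        boolean clean = store.values().stream().allMatch(List::isEmpty);
        System.out.println(clean); // the store ends empty for the next test
    }
}
```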








[GitHub] [hive] kishendas commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


kishendas commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447987117



##
File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStore.java
##
@@ -1566,160 +1643,6 @@ private Table createUnpartitionedTableObject(Database 
db) {
 return tbl;
   }
 
-  private TableAndColStats createUnpartitionedTableObjectWithColStats(Database 
db) {

Review comment:
   Why is this test case being removed?








[GitHub] [hive] kishendas commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


kishendas commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447985439



##
File path: 
standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/cache/TestCachedStore.java
##
@@ -136,17 +129,19 @@
 MetaStoreTestUtils.setConfForStandloneMode(conf);
 ObjectStore objectStore = new ObjectStore();
 objectStore.setConf(conf);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db1Utbl1.getDbName(), 
db1Utbl1.getTableName());
-Deadline.startTimer("");
-objectStore.dropPartitions(DEFAULT_CATALOG_NAME, db1Ptbl1.getDbName(), 
db1Ptbl1.getTableName(), db1Ptbl1PtnNames);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db1Ptbl1.getDbName(), 
db1Ptbl1.getTableName());
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db2Utbl1.getDbName(), 
db2Utbl1.getTableName());
-Deadline.startTimer("");
-objectStore.dropPartitions(DEFAULT_CATALOG_NAME, db2Ptbl1.getDbName(), 
db2Ptbl1.getTableName(), db2Ptbl1PtnNames);
-objectStore.dropTable(DEFAULT_CATALOG_NAME, db2Ptbl1.getDbName(), 
db2Ptbl1.getTableName());
-objectStore.dropDatabase(DEFAULT_CATALOG_NAME, db1.getName());
-objectStore.dropDatabase(DEFAULT_CATALOG_NAME, db2.getName());
+for (String clg : objectStore.getCatalogs()) {

Review comment:
   Why is this change required? I think in a test case it's preferable to add the drop statements explicitly, so that we know what's going on.








[GitHub] [hive] zabetak commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


zabetak commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447985240



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -2264,6 +2255,8 @@ private MergedColumnStatsForPartitions 
mergeColStatsForPartitions(String catName
   if (colStatsMap.size() < 1) {
 LOG.debug("No stats data found for: dbName={} tblName= {} partNames= 
{} colNames= ", dbName, tblName, partNames,
 colNames);
+// TODO: If we don't find any stats then most likely we should return 
null. Returning an empty object will not
+// trigger the lookup in the raw store and we will end up with missing 
stats.

Review comment:
   Yeap, check https://issues.apache.org/jira/browse/HIVE-23781. I was 
planning to push a new commit now to update the comment.








[GitHub] [hive] kishendas commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


kishendas commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447984106



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -900,13 +898,6 @@ private void updateTablePartitionColStats(RawStore 
rawStore, String catName, Str
   rawStore.getPartitionColumnStatistics(catName, dbName, tblName, 
partNames, colNames, CacheUtils.HIVE_ENGINE);
   Deadline.stopTimer();
   sharedCache.refreshPartitionColStatsInCache(catName, dbName, 
tblName, partitionColStats);
-  Deadline.startTimer("getPartitionsByNames");
-  List<Partition> parts = rawStore.getPartitionsByNames(catName, dbName, tblName, partNames);
-  Deadline.stopTimer();
-  // Also save partitions for consistency as they have the stats state.
-  for (Partition part : parts) {
-sharedCache.alterPartitionInCache(catName, dbName, tblName, 
part.getValues(), part);

Review comment:
   Ok, in review mode only the last two lines appear. I can see that they are removed in the "Files changed" tab.








[GitHub] [hive] kishendas commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


kishendas commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447984350



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -2264,6 +2255,8 @@ private MergedColumnStatsForPartitions 
mergeColStatsForPartitions(String catName
   if (colStatsMap.size() < 1) {
 LOG.debug("No stats data found for: dbName={} tblName= {} partNames= 
{} colNames= ", dbName, tblName, partNames,
 colNames);
+// TODO: If we don't find any stats then most likely we should return 
null. Returning an empty object will not
+// trigger the lookup in the raw store and we will end up with missing 
stats.

Review comment:
   Did you create a JIRA for this ?








[GitHub] [hive] zabetak commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


zabetak commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447983144



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -900,13 +898,6 @@ private void updateTablePartitionColStats(RawStore 
rawStore, String catName, Str
   rawStore.getPartitionColumnStatistics(catName, dbName, tblName, 
partNames, colNames, CacheUtils.HIVE_ENGINE);
   Deadline.stopTimer();
   sharedCache.refreshPartitionColStatsInCache(catName, dbName, 
tblName, partitionColStats);
-  Deadline.startTimer("getPartitionsByNames");
-  List<Partition> parts = rawStore.getPartitionsByNames(catName, dbName, tblName, partNames);
-  Deadline.stopTimer();
-  // Also save partitions for consistency as they have the stats state.
-  for (Partition part : parts) {
-sharedCache.alterPartitionInCache(catName, dbName, tblName, 
part.getValues(), part);

Review comment:
   They are already removed, aren't they? Maybe I didn't understand what you meant.








[GitHub] [hive] dengzhhu653 commented on pull request #1191: HIVE-23779: BasicStatsTask Info is not getting printed in beeline console

2020-06-30 Thread GitBox


dengzhhu653 commented on pull request #1191:
URL: https://github.com/apache/hive/pull/1191#issuecomment-651699390


   +1






[GitHub] [hive] zabetak commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


zabetak commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447550132



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -851,8 +851,6 @@ private void updateTableColStats(RawStore rawStore, String 
catName, String dbNam
 
sharedCache.refreshTableColStatsInCache(StringUtils.normalizeIdentifier(catName),
 StringUtils.normalizeIdentifier(dbName), 
StringUtils.normalizeIdentifier(tblName),
 tableColStats.getStatsObj());
-// Update the table to get consistent stats state.
-sharedCache.alterTableInCache(catName, dbName, tblName, table);

Review comment:
   If my understanding is correct the cache is already updated by the time 
we arrive here. Check my comment regarding `alterPartitionsInCache` and if 
something is not clear I can elaborate more.








[GitHub] [hive] zabetak commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


zabetak commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447547869



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -900,13 +898,6 @@ private void updateTablePartitionColStats(RawStore 
rawStore, String catName, Str
   rawStore.getPartitionColumnStatistics(catName, dbName, tblName, 
partNames, colNames, CacheUtils.HIVE_ENGINE);
   Deadline.stopTimer();
   sharedCache.refreshPartitionColStatsInCache(catName, dbName, 
tblName, partitionColStats);
-  Deadline.startTimer("getPartitionsByNames");
-  List<Partition> parts = rawStore.getPartitionsByNames(catName, dbName, tblName, partNames);
-  Deadline.stopTimer();
-  // Also save partitions for consistency as they have the stats state.
-  for (Partition part : parts) {
-sharedCache.alterPartitionInCache(catName, dbName, tblName, 
part.getValues(), part);

Review comment:
   The explanation is hidden in the commit message :)
   
   Using alterPartitionInCache is problematic:
   1. It empties all relevant stats about partitions, even those updated just now.
   2. It is not necessary since partitions were updated/refreshed just before 
calling the method updateTablePartitionColStats via 
[updateTablePartitions](https://github.com/apache/hive/blob/31ee14644bf6105360d6266baa8c6c8060d38ea3/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java#L870).
   3. It is not meant to be used for syncing the cache but rather to handle updates in partitions.








[GitHub] [hive] John1Tang commented on pull request #967: HIVE-23074: add "IF NOT EXISTS" for create & "IF EXISTS" for drop to …

2020-06-30 Thread GitBox


John1Tang commented on pull request #967:
URL: https://github.com/apache/hive/pull/967#issuecomment-651657855


   My pull request only changed the Postgres schematool; this test failure is in the Hive JDBC storage handler...






[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447509066



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void 
seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
 // The initial value for write id should be 1 and hence we add 1 with 
number of write ids
 // allocated here
 String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", 
\"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-+ Long.toString(rqst.getSeeWriteId() + 1) + ")";
-pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTblName()));
++ Long.toString(rqst.getSeedWriteId() + 1) + ")";
+pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTableName()));
 LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + 
">",
-quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+quoteString(rqst.getDbName()), 
quoteString(rqst.getTableName()));
 pst.execute();
 LOG.debug("Going to commit");
 dbConn.commit();
   } catch (SQLException e) {
-LOG.debug("Going to rollback");
 rollbackDBConn(dbConn);
-checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-throw new MetaException("Unable to update transaction database "
-+ StringUtils.stringifyException(e));
+checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+throw new MetaException("Unable to update transaction database " + 
StringUtils.stringifyException(e));
   } finally {
 close(null, pst, dbConn);
 unlockInternal();
   }
 } catch (RetryException e) {
-  seedWriteIdOnAcidConversion(rqst);
+  seedWriteId(rqst);
 }
+  }
+
+  @Override
+  public void seedTxnId(SeedTxnIdRequest rqst) throws MetaException {
+try {
+  Connection dbConn = null;
+  Statement stmt = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+stmt = dbConn.createStatement();
+/*
+ * Locking the txnLock an exclusive way, we do not want to set the 
txnId backward accidentally
+ * if there are concurrent open transactions
+ */
+acquireTxnLock(stmt, false);
+long highWaterMark = getHighWaterMark(stmt);
+if (highWaterMark >= rqst.getSeedTxnId()) {

Review comment:
   I don't quite understand this if condition. You have a check in validateAndAddMaxTxnIdAndWriteId() that throws an exception if there are already some write ids registered in HMS when we try to do a repair. Could it happen, due to the lack of locking, that there is nothing in HMS when we calculate the write ids, but by the time we try to seed, some transaction has generated a new write id? Would that cause data loss or other issues?
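The `highWaterMark >= rqst.getSeedTxnId()` guard being questioned can be read as "never move the transaction id backwards". A hedged sketch of that check in isolation (the values and the reject action are illustrative; the real TxnHandler performs this inside an exclusive txn lock acquired via acquireTxnLock):

```java
public class SeedGuardSketch {
    public static void main(String[] args) {
        long highWaterMark = 41L; // assumed: largest txn id already allocated
        long seedTxnId = 40L;     // assumed: requested seed value

        // Seeding at or below the high-water mark could hand out txn ids
        // that are already in use, so the request has to be rejected.
        boolean rejected = highWaterMark >= seedTxnId;
        System.out.println(rejected ? "reject" : "seed to " + seedTxnId);
    }
}
```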








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447484596



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void 
seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
 // The initial value for write id should be 1 and hence we add 1 with 
number of write ids
 // allocated here
 String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", 
\"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-+ Long.toString(rqst.getSeeWriteId() + 1) + ")";
-pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTblName()));
++ Long.toString(rqst.getSeedWriteId() + 1) + ")";
+pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTableName()));
 LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + 
">",
-quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+quoteString(rqst.getDbName()), 
quoteString(rqst.getTableName()));
 pst.execute();
 LOG.debug("Going to commit");
 dbConn.commit();
   } catch (SQLException e) {
-LOG.debug("Going to rollback");
 rollbackDBConn(dbConn);
-checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-throw new MetaException("Unable to update transaction database "
-+ StringUtils.stringifyException(e));
+checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+throw new MetaException("Unable to update transaction database " + 
StringUtils.stringifyException(e));
   } finally {
 close(null, pst, dbConn);
 unlockInternal();
   }
 } catch (RetryException e) {
-  seedWriteIdOnAcidConversion(rqst);
+  seedWriteId(rqst);
 }
+  }
+
+  @Override
+  public void seedTxnId(SeedTxnIdRequest rqst) throws MetaException {
+try {
+  Connection dbConn = null;
+  Statement stmt = null;
+  try {
+lockInternal();

Review comment:
   Same as above; could you please check whether this is needed?








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447484200



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2032,28 +2077,61 @@ public void 
seedWriteIdOnAcidConversion(InitializeTableWriteIdsRequest rqst)
 // The initial value for write id should be 1 and hence we add 1 with 
number of write ids
 // allocated here
 String s = "INSERT INTO \"NEXT_WRITE_ID\" (\"NWI_DATABASE\", 
\"NWI_TABLE\", \"NWI_NEXT\") VALUES (?, ?, "
-+ Long.toString(rqst.getSeeWriteId() + 1) + ")";
-pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTblName()));
++ Long.toString(rqst.getSeedWriteId() + 1) + ")";
+pst = sqlGenerator.prepareStmtWithParameters(dbConn, s, 
Arrays.asList(rqst.getDbName(), rqst.getTableName()));
 LOG.debug("Going to execute insert <" + s.replaceAll("\\?", "{}") + 
">",
-quoteString(rqst.getDbName()), quoteString(rqst.getTblName()));
+quoteString(rqst.getDbName()), 
quoteString(rqst.getTableName()));
 pst.execute();
 LOG.debug("Going to commit");
 dbConn.commit();
   } catch (SQLException e) {
-LOG.debug("Going to rollback");
 rollbackDBConn(dbConn);
-checkRetryable(dbConn, e, "seedWriteIdOnAcidConversion(" + rqst + ")");
-throw new MetaException("Unable to update transaction database "
-+ StringUtils.stringifyException(e));
+checkRetryable(dbConn, e, "seedWriteId(" + rqst + ")");
+throw new MetaException("Unable to update transaction database " + 
StringUtils.stringifyException(e));
   } finally {
 close(null, pst, dbConn);
 unlockInternal();

Review comment:
   Not sure why it is used here.








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447483691



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+List params = Arrays.asList(dbName, tableName);
+String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);

Review comment:
   minor: you can simply pass the params as Arrays.asList(rqst.getDbName(), rqst.getTableName()) instead of using so many local variables.








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447482052



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);

Review comment:
   minor: I would use try-with-resources instead of doing explicit resource management.
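The try-with-resources suggestion above can be sketched as follows. This is a minimal, self-contained illustration (the Resource class is a hypothetical stand-in for the JDBC Connection/PreparedStatement/ResultSet triple, not Hive code): resources declared in the try header are closed automatically, in reverse declaration order, even if the body throws, so the explicit close bookkeeping in a finally block goes away.

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesSketch {
    static final List<String> closed = new ArrayList<>();

    // Hypothetical stand-in for Connection/PreparedStatement/ResultSet.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    // Mirrors the shape of the reviewed method: acquire conn, stmt, rs;
    // the compiler inserts the close calls, in reverse declaration order.
    public static List<String> demo() {
        closed.clear();
        try (Resource conn = new Resource("conn");
             Resource stmt = new Resource("stmt");
             Resource rs = new Resource("rs")) {
            // query work would happen here; no finally/close() needed
        }
        return closed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [rs, stmt, conn]
    }
}
```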








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447480415



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();

Review comment:
   lockInternal is required only for Derby, to simulate select-for-update (S4U); why use it here? unlockInternal is not needed either.








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447480415



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();

Review comment:
   lockInternal is required for Derby to simulate S4U








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447478910



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+List params = Arrays.asList(dbName, tableName);
+String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";
+pStmt = sqlGenerator.prepareStmtWithParameters(dbConn, query, params);
+LOG.debug("Going to execute query <" + query.replaceAll("\\?", "{}") + ">", quoteString(dbName),
+quoteString(tableName));
+rs = pStmt.executeQuery();
+// If there is no record, we never allocated anything
+long maxWriteId = 0l;
+if (rs.next()) {
+  // The row contains the nextId not the previously allocated
+  maxWriteId = rs.getLong(1) - 1;
+}
+return new MaxAllocatedTableWriteIdResponse(maxWriteId);
+  } catch (SQLException e) {
+LOG.error(
+"Exception during reading the max allocated writeId for dbName={}, tableName={}. Will retry if possible.",
+dbName, tableName, e);
+rollbackDBConn(dbConn);

Review comment:
   What is there to roll back? You only have a SELECT here.








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447477522



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2015,8 +2019,49 @@ public AllocateTableWriteIdsResponse allocateTableWriteIds(AllocateTableWriteIds
   return allocateTableWriteIds(rqst);
 }
   }
+
+  @Override
+  public MaxAllocatedTableWriteIdResponse getMaxAllocatedTableWrited(MaxAllocatedTableWriteIdRequest rqst) throws MetaException {
+String dbName = rqst.getDbName();
+String tableName = rqst.getTableName();
+try {
+  Connection dbConn = null;
+  PreparedStatement pStmt = null;
+  ResultSet rs = null;
+  try {
+lockInternal();
+dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+List params = Arrays.asList(dbName, tableName);
+String query = "SELECT \"NWI_NEXT\" FROM \"NEXT_WRITE_ID\" WHERE \"NWI_DATABASE\" = ? AND \"NWI_TABLE\" = ?";

Review comment:
   Should we extract this query into a constant?








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447476089



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnDbUtil.java
##
@@ -313,6 +313,41 @@ private static void resetTxnSequence(Connection conn, Statement stmt) throws SQL
 }
   }
 
+  /**
+   * Restarts the txnId sequence with the given seed value.
+   * It is the responsibility of the caller to not set the sequence backward.
+   * @param conn database connection
+   * @param stmt sql statement
+   * @param seedTxnId the seed value for the sequence
+   * @throws SQLException ex
+   */
+  public static void seedTxnSequence(Connection conn, Statement stmt, long seedTxnId) throws SQLException {
+String dbProduct = conn.getMetaData().getDatabaseProductName();
+DatabaseProduct databaseProduct = determineDatabaseProduct(dbProduct);
+switch (databaseProduct) {
+
+case DERBY:

Review comment:
   minor: I would probably create an EnumMap for SEED_FN and use the proper entry based on the DB type.
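The EnumMap idea might look like the sketch below. The enum constants and SQL templates are illustrative assumptions only (they are not the real DatabaseProduct values or the exact per-database restart statements); the point is replacing the switch with a lookup table.

```java
import java.util.EnumMap;
import java.util.Map;

public class SeedSqlSketch {
    // Hypothetical stand-in for the real DatabaseProduct enum.
    enum Db { DERBY, MYSQL, POSTGRES }

    // SEED_FN maps each db type to a seed-statement template; %d receives
    // the seed value. The SQL text here is an assumption, for illustration.
    static final Map<Db, String> SEED_FN = new EnumMap<>(Db.class);
    static {
        SEED_FN.put(Db.DERBY, "ALTER TABLE \"TXNS\" ALTER \"TXN_ID\" RESTART WITH %d");
        SEED_FN.put(Db.MYSQL, "ALTER TABLE TXNS AUTO_INCREMENT = %d");
        SEED_FN.put(Db.POSTGRES, "ALTER SEQUENCE txns_txn_id_seq RESTART WITH %d");
    }

    public static String seedStatement(Db db, long seedTxnId) {
        // One map lookup replaces the whole switch in seedTxnSequence.
        return String.format(SEED_FN.get(db), seedTxnId);
    }

    public static void main(String[] args) {
        System.out.println(seedStatement(Db.MYSQL, 10L)); // ALTER TABLE TXNS AUTO_INCREMENT = 10
    }
}
```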








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447472989



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -383,6 +475,7 @@ public Void execute(int size) throws MetastoreException {
   partsToAdd.add(partition);
   lastBatch.add(part);
   addMsgs.add(String.format(addMsgFormat, part.getPartitionName()));
+  LOG.debug(String.format(addMsgFormat, part.getPartitionName()));

Review comment:
   Why not log the content of addMsgs?








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447468545



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;
   }
   if (getMsc() != null) {
 getMsc().close();
 msc = null;
   }
 }
+return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, boolean success, long 
txnId) {
+boolean ret = true;
+if (success) {
+  try {
+LOG.info("txnId: {} succeeded. Committing..", txnId);
+getMsc().commitTxn(txnId);
+  } catch (Exception e) {
+LOG.warn("Error while committing txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
+ret = false;
+  }
+} else {
+  try {
+LOG.info("txnId: {} failed. 

[GitHub] [hive] klcopp merged pull request #1085: Hive 22255 : Hive don't trigger Major Compaction automatically if table contains only base files

2020-06-30 Thread GitBox


klcopp merged pull request #1085:
URL: https://github.com/apache/hive/pull/1085


   






[GitHub] [hive] kishendas commented on a change in pull request #1186: Hive 23768: Metastore's update service wrongly strips partition column stats from the cache

2020-06-30 Thread GitBox


kishendas commented on a change in pull request #1186:
URL: https://github.com/apache/hive/pull/1186#discussion_r447465847



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -2264,6 +2255,8 @@ private MergedColumnStatsForPartitions mergeColStatsForPartitions(String catName
   if (colStatsMap.size() < 1) {
 LOG.debug("No stats data found for: dbName={} tblName= {} partNames= {} colNames= ", dbName, tblName, partNames,
 colNames);
+// TODO: If we don't find any stats then most likely we should return null. Returning an empty object will not
+// trigger the lookup in the raw store and we will end up with missing stats.

Review comment:
   Please create a JIRA for this TODO and add a reference in the comment. 

##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -851,8 +851,6 @@ private void updateTableColStats(RawStore rawStore, String catName, String dbNam
 sharedCache.refreshTableColStatsInCache(StringUtils.normalizeIdentifier(catName),
 StringUtils.normalizeIdentifier(dbName), StringUtils.normalizeIdentifier(tblName),
 tableColStats.getStatsObj());
-// Update the table to get consistent stats state.
-sharedCache.alterTableInCache(catName, dbName, tblName, table);

Review comment:
   Sorry, I am a bit confused looking at this diff. The original issue seems to be: "Metastore's update service wrongly strips partition column stats from the cache in an attempt to update them." How are we fixing this issue by not updating the stats in the cache? Wouldn't the right fix be to ensure that the sharedCache.alterTableInCache and sharedCache.alterPartitionInCache methods do the right thing and do not incorrectly remove partition column stats?

##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/cache/CachedStore.java
##
@@ -900,13 +898,6 @@ private void updateTablePartitionColStats(RawStore rawStore, String catName, Str
   rawStore.getPartitionColumnStatistics(catName, dbName, tblName, partNames, colNames, CacheUtils.HIVE_ENGINE);
   Deadline.stopTimer();
   sharedCache.refreshPartitionColStatsInCache(catName, dbName, tblName, partitionColStats);
-  Deadline.startTimer("getPartitionsByNames");
-  List parts = rawStore.getPartitionsByNames(catName, dbName, tblName, partNames);
-  Deadline.stopTimer();
-  // Also save partitions for consistency as they have the stats state.
-  for (Partition part : parts) {
-sharedCache.alterPartitionInCache(catName, dbName, tblName, part.getValues(), part);

Review comment:
   Same concern here as above. How does not updating the cache at all fix this issue?








[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447462171



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java
##
@@ -229,102 +239,168 @@ public int repair(MsckInfo msckInfo) {
 throw new MetastoreException(e);
   }
 }
+if (transactionalTable && !MetaStoreServerUtils.isPartitioned(table)) {
+  if (result.getMaxWriteId() > 0) {
+if (txnId < 0) {
+  // We need the txnId to check against even if we didn't do the 
locking
+  txnId = getMsc().openTxn(getUserName());
+}
+
+validateAndAddMaxTxnIdAndWriteId(result.getMaxWriteId(), 
result.getMaxTxnId(),
+table.getDbName(), table.getTableName(), txnId);
+  }
+}
   }
   success = true;
 } catch (Exception e) {
   LOG.warn("Failed to run metacheck: ", e);
   success = false;
-  ret = 1;
 } finally {
-  if (msckInfo.getResFile() != null) {
-BufferedWriter resultOut = null;
-try {
-  Path resFile = new Path(msckInfo.getResFile());
-  FileSystem fs = resFile.getFileSystem(getConf());
-  resultOut = new BufferedWriter(new 
OutputStreamWriter(fs.create(resFile)));
-
-  boolean firstWritten = false;
-  firstWritten |= writeMsckResult(result.getTablesNotInMs(),
-"Tables not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getTablesNotOnFs(),
-"Tables missing on filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotInMs(),
-"Partitions not in metastore:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getPartitionsNotOnFs(),
-"Partitions missing from filesystem:", resultOut, firstWritten);
-  firstWritten |= writeMsckResult(result.getExpiredPartitions(),
-"Expired partitions (retention period: " + partitionExpirySeconds 
+ "s) :", resultOut, firstWritten);
-  // sorting to stabilize qfile output (msck_repair_drop.q)
-  Collections.sort(repairOutput);
-  for (String rout : repairOutput) {
-if (firstWritten) {
-  resultOut.write(terminator);
-} else {
-  firstWritten = true;
-}
-resultOut.write(rout);
-  }
-} catch (IOException e) {
-  LOG.warn("Failed to save metacheck output: ", e);
-  ret = 1;
-} finally {
-  if (resultOut != null) {
-try {
-  resultOut.close();
-} catch (IOException e) {
-  LOG.warn("Failed to close output file: ", e);
-  ret = 1;
-}
-  }
+  if (result!=null) {
+logResult(result);
+if (msckInfo.getResFile() != null) {
+  success = writeResultToFile(msckInfo, result, repairOutput, 
partitionExpirySeconds) && success;
 }
   }
 
-  LOG.info("Tables not in metastore: {}", result.getTablesNotInMs());
-  LOG.info("Tables missing on filesystem: {}", result.getTablesNotOnFs());
-  LOG.info("Partitions not in metastore: {}", 
result.getPartitionsNotInMs());
-  LOG.info("Partitions missing from filesystem: {}", 
result.getPartitionsNotOnFs());
-  LOG.info("Expired partitions: {}", result.getExpiredPartitions());
-  if (acquireLock && txnId > 0) {
-  if (success) {
-try {
-  LOG.info("txnId: {} succeeded. Committing..", txnId);
-  getMsc().commitTxn(txnId);
-} catch (Exception e) {
-  LOG.warn("Error while committing txnId: {} for table: {}", 
txnId, qualifiedTableName, e);
-  ret = 1;
-}
-  } else {
-try {
-  LOG.info("txnId: {} failed. Aborting..", txnId);
-  getMsc().abortTxns(Lists.newArrayList(txnId));
-} catch (Exception e) {
-  LOG.warn("Error while aborting txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
-  ret = 1;
-}
-  }
+  if (txnId > 0) {
+success = closeTxn(qualifiedTableName, success, txnId) && success;
   }
   if (getMsc() != null) {
 getMsc().close();
 msc = null;
   }
 }
+return success ? 0 : 1;
+  }
 
+  private boolean closeTxn(String qualifiedTableName, boolean success, long 
txnId) {
+boolean ret = true;
+if (success) {
+  try {
+LOG.info("txnId: {} succeeded. Committing..", txnId);
+getMsc().commitTxn(txnId);
+  } catch (Exception e) {
+LOG.warn("Error while committing txnId: {} for table: {}", txnId, 
qualifiedTableName, e);
+ret = false;
+  }
+} else {
+  try {
+LOG.info("txnId: {} failed. 

[GitHub] [hive] deniskuzZ commented on a change in pull request #1087: HIVE-23671: MSCK repair should handle transactional tables

2020-06-30 Thread GitBox


deniskuzZ commented on a change in pull request #1087:
URL: https://github.com/apache/hive/pull/1087#discussion_r447033768



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreChecker.java
##
@@ -429,6 +451,75 @@ void findUnknownPartitions(Table table, Set partPaths,
 LOG.debug("Number of partitions not in metastore : " + result.getPartitionsNotInMs().size());
   }
 
+  /**
+   * Calculate the maximum seen writeId from the acid directory structure
+   * @param partPath Path of the partition directory
+   * @param res Partition result to write the max ids
+   * @throws IOException ex
+   */
+  private void setMaxTxnAndWriteIdFromPartition(Path partPath, CheckResult.PartitionResult res) throws IOException {
+FileSystem fs = partPath.getFileSystem(conf);
+FileStatus[] deltaOrBaseFiles = fs.listStatus(partPath, HIDDEN_FILES_PATH_FILTER);
+
+// Read the writeIds from every base and delta directory and find the max
+long maxWriteId = 0L;
+long maxVisibilityId = 0L;
+for(FileStatus fileStatus : deltaOrBaseFiles) {
+  if (!fileStatus.isDirectory()) {
+continue;
+  }
+  long writeId = 0L;
+  long visibilityId = 0L;
+  String folder = fileStatus.getPath().getName();
+  if (folder.startsWith(BASE_PREFIX)) {
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+writeId = Long.parseLong(folder.substring(BASE_PREFIX.length()));
+  } else if (folder.startsWith(DELTA_PREFIX) || folder.startsWith(DELETE_DELTA_PREFIX)) {
+// See AcidUtils.parseDelta
+visibilityId = getVisibilityTxnId(folder);
+if (visibilityId > 0) {
+  folder = removeVisibilityTxnId(folder);
+}
+boolean isDeleteDelta = folder.startsWith(DELETE_DELTA_PREFIX);
+String rest = folder.substring((isDeleteDelta ? DELETE_DELTA_PREFIX : DELTA_PREFIX).length());
+int split = rest.indexOf('_');
+//split2 may be -1 if no statementId
+int split2 = rest.indexOf('_', split + 1);
+// We always want the second part (it is either the same or greater if it is a compacted delta)
+writeId = split2 == -1 ? Long.parseLong(rest.substring(split + 1)) : Long
+.parseLong(rest.substring(split + 1, split2));
+  }
+  if (writeId > maxWriteId) {
+maxWriteId = writeId;
+  }
+  if (visibilityId > maxVisibilityId) {
+maxVisibilityId = visibilityId;
+  }
+}
+LOG.debug("Max writeId {}, max txnId {} found in partition {}", maxWriteId, maxVisibilityId,
+partPath.toUri().toString());
+res.setMaxWriteId(maxWriteId);
+res.setMaxTxnId(maxVisibilityId);
+  }
+  private long getVisibilityTxnId(String folder) {
+int idxOfVis = folder.indexOf(VISIBILITY_PREFIX);

Review comment:
   Why not use a regex with pattern matching? removeVisibilityTxnId probably wouldn't even be needed then.
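A regex-based version of the directory-name parsing could capture the writeId and the optional visibility txnId in a single match, so a separate removeVisibilityTxnId step would indeed not be needed. The sketch below assumes the naming conventions base_<writeId>[_v<txnId>] and [delete_]delta_<min>_<max>[_<stmtId>][_v<txnId>]; it is an illustration of the reviewer's idea, not the actual Hive implementation.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AcidDirParseSketch {
    // Each pattern captures the writeId (group 1) and an optional visibility
    // txn id (group 2) in one shot, so no removal step is required.
    static final Pattern BASE =
        Pattern.compile("base_(\\d+)(?:_v(\\d+))?");
    // For deltas we want the second (max) writeId, which is the same or
    // greater for a compacted delta; the optional statementId is skipped.
    static final Pattern DELTA =
        Pattern.compile("(?:delete_)?delta_\\d+_(\\d+)(?:_\\d+)?(?:_v(\\d+))?");

    /** Returns {writeId, visibilityTxnId} or null if the name doesn't match. */
    public static long[] parse(String folder) {
        for (Pattern p : new Pattern[] {BASE, DELTA}) {
            Matcher m = p.matcher(folder);
            if (m.matches()) {
                long writeId = Long.parseLong(m.group(1));
                long vis = m.group(2) == null ? 0L : Long.parseLong(m.group(2));
                return new long[] {writeId, vis};
            }
        }
        return null;
    }

    public static void main(String[] args) {
        long[] r = parse("delta_0000005_0000007_v0000012");
        System.out.println(r[0] + " " + r[1]); // 7 12
    }
}
```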








[GitHub] [hive] ayushtkn edited a comment on pull request #1187: HIVE-23772. Reallocate calcite-core to prevent NoSuchFiledError.

2020-06-30 Thread GitBox


ayushtkn edited a comment on pull request #1187:
URL: https://github.com/apache/hive/pull/1187#issuecomment-651592714


   The assertion error is due to the fact that both the shaded and non-shaded jars are on the classpath.
   ``` Caused by: java.lang.AssertionError
at 
org.apache.calcite.avatica.AvaticaUtils.instantiatePlugin(AvaticaUtils.java:229)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$5.apply(ConnectionConfigImpl.java:392)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$PropEnv.get_(ConnectionConfigImpl.java:173)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$PropEnv.getPlugin(ConnectionConfigImpl.java:296)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$PropEnv.getPlugin(ConnectionConfigImpl.java:282)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.config.CalciteConnectionConfigImpl.typeSystem(CalciteConnectionConfigImpl.java:155)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.(CalciteConnectionImpl.java:127)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.(CalciteJdbc41Factory.java:115)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53) 
~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
 ~[avatica-1.12.0.jar:1.12.0]
at java.sql.DriverManager.getConnection(DriverManager.java:664) 
~[?:1.8.0_181]
at java.sql.DriverManager.getConnection(DriverManager.java:208) 
~[?:1.8.0_181]
at 
org.apache.hive.org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:175)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
at 
org.apache.hive.org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:125)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1555)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:?]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:540)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:?]  
   ```
   
   After ``at java.sql.DriverManager.getConnection(DriverManager.java:664) ~[?:1.8.0_181]``, rather than coming back to ``hive-exec``, execution goes back to the non-shaded ``calcite-core`` jars.
   
   The driver gets registered with the original jar only, so it executes calls against the non-shaded classes.
   Ideally we could remove the original jars for itests??
   Removing them takes away that error, but the driver registration is done as ``HiveDriver``, and while getting a connection it wants the `jdbc:calcite` URL hard-coded in the `Frameworks` class, so it gets a:
   ``` 
   Caused by: java.sql.SQLException: No suitable driver found for jdbc:calcite:
at java.sql.DriverManager.getConnection(DriverManager.java:689) 
~[?:1.8.0_181]
at java.sql.DriverManager.getConnection(DriverManager.java:208) 
~[?:1.8.0_181]
at 
org.apache.hive.org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:175)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
   ```

   Is there a configuration or workaround for the driver connection? If you have any idea, let me know.
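The "No suitable driver" failure is reproducible in isolation, since DriverManager selects a driver purely by asking each registered driver whether it accepts the URL. A minimal sketch, assuming no Calcite driver is registered on the classpath:

```java
import java.sql.DriverManager;
import java.sql.SQLException;

public class NoDriverSketch {
    // With no registered driver answering for the "jdbc:calcite:" prefix
    // (e.g. because the shaded build relocated the Calcite driver class),
    // getConnection throws "No suitable driver found for jdbc:calcite:".
    public static String tryConnect(String url) {
        try {
            DriverManager.getConnection(url);
            return "connected";
        } catch (SQLException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(tryConnect("jdbc:calcite:"));
    }
}
```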






[GitHub] [hive] ayushtkn edited a comment on pull request #1187: HIVE-23772. Reallocate calcite-core to prevent NoSuchFiledError.

2020-06-30 Thread GitBox


ayushtkn edited a comment on pull request #1187:
URL: https://github.com/apache/hive/pull/1187#issuecomment-651592714


   The assertion error is due to the fact that both the shaded and non-shaded 
jars are on the classpath.
   ```
   Caused by: java.lang.AssertionError
at 
org.apache.calcite.avatica.AvaticaUtils.instantiatePlugin(AvaticaUtils.java:229)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$5.apply(ConnectionConfigImpl.java:392)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$PropEnv.get_(ConnectionConfigImpl.java:173)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$PropEnv.getPlugin(ConnectionConfigImpl.java:296)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.avatica.ConnectionConfigImpl$PropEnv.getPlugin(ConnectionConfigImpl.java:282)
 ~[avatica-1.12.0.jar:1.12.0]
at 
org.apache.calcite.config.CalciteConnectionConfigImpl.typeSystem(CalciteConnectionConfigImpl.java:155)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteConnectionImpl.(CalciteConnectionImpl.java:127)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.(CalciteJdbc41Factory.java:115)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
 ~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53) 
~[calcite-core-1.21.0.jar:1.21.0]
at 
org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
 ~[avatica-1.12.0.jar:1.12.0]
at java.sql.DriverManager.getConnection(DriverManager.java:664) 
~[?:1.8.0_181]
at java.sql.DriverManager.getConnection(DriverManager.java:208) 
~[?:1.8.0_181]
at 
org.apache.hive.org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:175)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
at 
org.apache.hive.org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:125)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1555)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:?]
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:540)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:?]  
   ```
   
   After ``at java.sql.DriverManager.getConnection(DriverManager.java:664) 
~[?:1.8.0_181]``, rather than returning to ``hive-exec``, the call goes back 
to the non-shaded ``calcite-core`` jar.
   
   The driver gets registered with the original jar only, so the calls are 
executed by the non-shaded classes. 
   Ideally, could we remove the original jars for the itests?
   Removing them takes away that error, but driver registration is done as 
``HiveDriver``, and while getting a connection it wants a `jdbc:calcite` URL 
hard-coded in the `Frameworks` class, so it gets a:
   ```
   Caused by: java.sql.SQLException: No suitable driver found for jdbc:calcite:
at java.sql.DriverManager.getConnection(DriverManager.java:689) 
~[?:1.8.0_181]
at java.sql.DriverManager.getConnection(DriverManager.java:208) 
~[?:1.8.0_181]
at 
org.apache.hive.org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:175)
 ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
   ```

   Is there a configuration or workaround for the driver connection? If you 
have any idea, let me know.
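
   One hypothetical direction for the "No suitable driver" case, sketched 
below, is to force-load the *relocated* Calcite driver before `Frameworks` 
asks for a `jdbc:calcite:` connection. This is NOT verified against Hive: 
the relocated class name `org.apache.hive.org.apache.calcite.jdbc.Driver` is 
a guess based on the `org.apache.hive.org.apache.calcite` relocation prefix 
visible in the stack trace, and the class name `RegisterShadedCalciteDriver` 
is invented for the sketch:

```java
// Hypothetical workaround sketch -- not verified against Hive. It assumes the
// shaded hive-exec jar contains a relocated Calcite JDBC driver class at
// "org.apache.hive.org.apache.calcite.jdbc.Driver".
public class RegisterShadedCalciteDriver {

  /** Returns true if the relocated driver class was found and initialized. */
  public static boolean tryRegister() {
    try {
      // Initializing the class runs its static block, which -- like the
      // unshaded driver -- registers itself with java.sql.DriverManager.
      // Shading rewrites class names but not string constants, so the
      // relocated driver would still claim the "jdbc:calcite:" URL prefix.
      Class.forName("org.apache.hive.org.apache.calcite.jdbc.Driver");
      return true;
    } catch (ClassNotFoundException e) {
      return false; // shaded hive-exec jar not on the classpath
    }
  }

  public static void main(String[] args) {
    System.out.println("shaded driver registered: " + tryRegister());
  }
}
```

   The design idea is that relocation changes package names but leaves the 
`"jdbc:calcite:"` string constant intact, so both the shaded and unshaded 
drivers compete for the same URL prefix; whichever one is registered wins.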






[GitHub] [hive] kgyrtkirk commented on pull request #1187: HIVE-23772. Reallocate calcite-core to prevent NoSuchFiledError.

2020-06-30 Thread GitBox


kgyrtkirk commented on pull request #1187:
URL: https://github.com/apache/hive/pull/1187#issuecomment-651580913


   IIRC, the point where the switch to the "non-shaded calcite-core" happens 
is when the JDBC driver is loaded.
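
   That driver-loading point can be checked empirically. The diagnostic 
sketch below (class name `ListJdbcDrivers` is invented) dumps every driver 
currently registered with `DriverManager` together with its classloader; run 
just before the failing `getConnection` call, it would show whether the 
shaded (`org.apache.hive.org.apache.calcite...`) or the unshaded 
(`org.apache.calcite...`) driver won the registration race:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;

// Diagnostic: list every JDBC driver registered with DriverManager, with the
// classloader that loaded it, and return how many were found.
public class ListJdbcDrivers {
  public static int dump() {
    int count = 0;
    Enumeration<Driver> drivers = DriverManager.getDrivers();
    while (drivers.hasMoreElements()) {
      Driver d = drivers.nextElement();
      System.out.println(d.getClass().getName()
          + " (loader: " + d.getClass().getClassLoader() + ")");
      count++;
    }
    return count;
  }

  public static void main(String[] args) {
    System.out.println(dump() + " driver(s) registered");
  }
}
```

   Note that `DriverManager.getDrivers()` only reports drivers visible to the 
caller's classloader, which is itself a useful signal when shaded and 
unshaded jars coexist.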






[GitHub] [hive] kasakrisz merged pull request #1188: HIVE-19549: Enable TestAcidOnTez#testCtasTezUnion

2020-06-30 Thread GitBox


kasakrisz merged pull request #1188:
URL: https://github.com/apache/hive/pull/1188


   


