[jira] [Work logged] (HIVE-24286) Render date and time with progress of Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-24286?focusedWorklogId=502953=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502953 ] ASF GitHub Bot logged work on HIVE-24286: - Author: ASF GitHub Bot Created on: 21/Oct/20 02:43 Start Date: 21/Oct/20 02:43 Worklog Time Spent: 10m Work Description: okumin commented on a change in pull request #1588: URL: https://github.com/apache/hive/pull/1588#discussion_r508954919 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java ## @@ -57,7 +60,8 @@ public void update(DAGStatus status, Map vertexProgressMap) { renderProgress(monitor.progressMonitor(status, vertexProgressMap)); String report = getReport(vertexProgressMap); if (showReport(report)) { -renderReport(report); +final String time = FORMATTER.format(LocalDateTime.now()); Review comment: @dengzhhu653 Thanks for clarifying that! I removed the final. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502953) Time Spent: 50m (was: 40m) > Render date and time with progress of Hive on Tez > - > > Key: HIVE-24286 > URL: https://issues.apache.org/jira/browse/HIVE-24286 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 >Reporter: okumin >Assignee: okumin >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Add date/time to each line written by RenderStrategy like MapReduce and Spark. 
> > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java#L350] > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RenderStrategy.java#L64-L67] > > This ticket would add the current time to the head of each line. > > {code:java} > 2020-10-19 13:32:41,162 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:44,231 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:46,813 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:49,878 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,416 Map 1: 1/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,936 Map 1: 1/1 Reducer 2: 0(+1)/1 > 2020-10-19 13:32:52,877 Map 1: 1/1 Reducer 2: 1/1 > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
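Editor's note: the change under review prepends a formatted timestamp to each progress line. A minimal sketch of that formatting, assuming a `DateTimeFormatter` pattern matching the sample output above (the class and method names here are illustrative, not the actual patch):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ProgressTimestampSketch {
    // Pattern chosen to match the sample lines above, e.g. "2020-10-19 13:32:41,162".
    // Note the comma before the milliseconds, as in log4j's default layout.
    static final DateTimeFormatter FORMATTER =
        DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss,SSS");

    // Prepend a timestamp to a progress report line.
    static String withTimestamp(String report, LocalDateTime now) {
        return FORMATTER.format(now) + "\t" + report;
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2020, 10, 19, 13, 32, 41, 162_000_000);
        System.out.println(withTimestamp("Map 1: 0/1  Reducer 2: 0/1", t));
    }
}
```

Taking the time as a parameter (rather than calling `LocalDateTime.now()` inside) keeps the formatting testable; the real patch formats `now()` at the call site.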
[jira] [Work logged] (HIVE-24286) Render date and time with progress of Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-24286?focusedWorklogId=502948&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502948 ] ASF GitHub Bot logged work on HIVE-24286: - Author: ASF GitHub Bot Created on: 21/Oct/20 02:31 Start Date: 21/Oct/20 02:31 Worklog Time Spent: 10m Work Description: dengzhhu653 commented on a change in pull request #1588: URL: https://github.com/apache/hive/pull/1588#discussion_r508951394 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java ## @@ -57,7 +60,8 @@ public void update(DAGStatus status, Map vertexProgressMap) { renderProgress(monitor.progressMonitor(status, vertexProgressMap)); String report = getReport(vertexProgressMap); if (showReport(report)) { -renderReport(report); +final String time = FORMATTER.format(LocalDateTime.now()); Review comment: yes, to align with the other variables; and it seems unnecessary to declare it as final, as nothing changes it after the if branch. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502948) Time Spent: 40m (was: 0.5h) > Render date and time with progress of Hive on Tez > - > > Key: HIVE-24286 > URL: https://issues.apache.org/jira/browse/HIVE-24286 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 >Reporter: okumin >Assignee: okumin >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Add date/time to each line written by RenderStrategy like MapReduce and Spark. 
> > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java#L350] > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RenderStrategy.java#L64-L67] > > This ticket would add the current time to the head of each line. > > {code:java} > 2020-10-19 13:32:41,162 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:44,231 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:46,813 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:49,878 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,416 Map 1: 1/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,936 Map 1: 1/1 Reducer 2: 0(+1)/1 > 2020-10-19 13:32:52,877 Map 1: 1/1 Reducer 2: 1/1 > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24282) Show columns shouldn't sort output columns unless explicitly mentioned.
[ https://issues.apache.org/jira/browse/HIVE-24282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R resolved HIVE-24282. --- Resolution: Fixed > Show columns shouldn't sort output columns unless explicitly mentioned. > --- > > Key: HIVE-24282 > URL: https://issues.apache.org/jira/browse/HIVE-24282 > Project: Hive > Issue Type: Improvement >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > CREATE TABLE foo_n7(c INT, a INT, b INT); > show columns in foo_n7; > {code:java} > // current output > a > b > c > // expected > c > a > b{code} > HIVE-18373 changed the original behaviour to sorted output. > Suggesting to provide an optional keyword sorted to sort the show columns > output > eg., > {code:java} > show sorted columns in foo_n7; > a > b > c > show columns in foo_n7 > c > a > b{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24282) Show columns shouldn't sort output columns unless explicitly mentioned.
[ https://issues.apache.org/jira/browse/HIVE-24282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-24282: -- Fix Version/s: 4.0.0 > Show columns shouldn't sort output columns unless explicitly mentioned. > --- > > Key: HIVE-24282 > URL: https://issues.apache.org/jira/browse/HIVE-24282 > Project: Hive > Issue Type: Improvement >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > CREATE TABLE foo_n7(c INT, a INT, b INT); > show columns in foo_n7; > {code:java} > // current output > a > b > c > // expected > c > a > b{code} > HIVE-18373 changed the original behaviour to sorted output. > Suggesting to provide an optional keyword sorted to sort the show columns > output > eg., > {code:java} > show sorted columns in foo_n7; > a > b > c > show columns in foo_n7 > c > a > b{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
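Editor's note: the behavioural difference discussed in HIVE-24282 can be illustrated with a small sketch (plain Java, not the actual DDL code path): the default output keeps declaration order, and an explicit `sorted` keyword opts into sorting. The `declared` list here stands in for the column order the metastore returns.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShowColumnsSketch {
    // Illustrative only: "declared" is the declaration order from
    // CREATE TABLE foo_n7(c INT, a INT, b INT).
    static List<String> showColumns(List<String> declared, boolean sorted) {
        List<String> out = new ArrayList<>(declared);
        if (sorted) {
            Collections.sort(out); // proposed "show sorted columns" behaviour
        }
        return out; // default: declaration order, as the ticket proposes
    }

    public static void main(String[] args) {
        List<String> cols = List.of("c", "a", "b");
        System.out.println(showColumns(cols, false)); // [c, a, b]
        System.out.println(showColumns(cols, true));  // [a, b, c]
    }
}
```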
[jira] [Work logged] (HIVE-24286) Render date and time with progress of Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-24286?focusedWorklogId=502943=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502943 ] ASF GitHub Bot logged work on HIVE-24286: - Author: ASF GitHub Bot Created on: 21/Oct/20 02:12 Start Date: 21/Oct/20 02:12 Worklog Time Spent: 10m Work Description: okumin commented on a change in pull request #1588: URL: https://github.com/apache/hive/pull/1588#discussion_r508946873 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java ## @@ -57,7 +60,8 @@ public void update(DAGStatus status, Map vertexProgressMap) { renderProgress(monitor.progressMonitor(status, vertexProgressMap)); String report = getReport(vertexProgressMap); if (showReport(report)) { -renderReport(report); +final String time = FORMATTER.format(LocalDateTime.now()); Review comment: @dengzhhu653 I don't disagree, but what's your thought? To align with other variables? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502943) Time Spent: 0.5h (was: 20m) > Render date and time with progress of Hive on Tez > - > > Key: HIVE-24286 > URL: https://issues.apache.org/jira/browse/HIVE-24286 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 >Reporter: okumin >Assignee: okumin >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Add date/time to each line written by RenderStrategy like MapReduce and Spark. 
> > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java#L350] > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RenderStrategy.java#L64-L67] > > This ticket would add the current time to the head of each line. > > {code:java} > 2020-10-19 13:32:41,162 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:44,231 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:46,813 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:49,878 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,416 Map 1: 1/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,936 Map 1: 1/1 Reducer 2: 0(+1)/1 > 2020-10-19 13:32:52,877 Map 1: 1/1 Reducer 2: 1/1 > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24286) Render date and time with progress of Hive on Tez
[ https://issues.apache.org/jira/browse/HIVE-24286?focusedWorklogId=502940=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502940 ] ASF GitHub Bot logged work on HIVE-24286: - Author: ASF GitHub Bot Created on: 21/Oct/20 01:59 Start Date: 21/Oct/20 01:59 Worklog Time Spent: 10m Work Description: dengzhhu653 commented on a change in pull request #1588: URL: https://github.com/apache/hive/pull/1588#discussion_r508943308 ## File path: ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java ## @@ -57,7 +60,8 @@ public void update(DAGStatus status, Map vertexProgressMap) { renderProgress(monitor.progressMonitor(status, vertexProgressMap)); String report = getReport(vertexProgressMap); if (showReport(report)) { -renderReport(report); +final String time = FORMATTER.format(LocalDateTime.now()); Review comment: Can we remove the final here? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502940) Time Spent: 20m (was: 10m) > Render date and time with progress of Hive on Tez > - > > Key: HIVE-24286 > URL: https://issues.apache.org/jira/browse/HIVE-24286 > Project: Hive > Issue Type: Improvement >Affects Versions: 4.0.0 >Reporter: okumin >Assignee: okumin >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Add date/time to each line written by RenderStrategy like MapReduce and Spark. 
> > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java#L350] > * > [https://github.com/apache/hive/blob/31c1658d9884eb4f31b06eaa718dfef8b1d92d22/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RenderStrategy.java#L64-L67] > > This ticket would add the current time to the head of each line. > > {code:java} > 2020-10-19 13:32:41,162 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:44,231 Map 1: 0/1 Reducer 2: 0/1 > 2020-10-19 13:32:46,813 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:49,878 Map 1: 0(+1)/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,416 Map 1: 1/1 Reducer 2: 0/1 > 2020-10-19 13:32:51,936 Map 1: 1/1 Reducer 2: 0(+1)/1 > 2020-10-19 13:32:52,877 Map 1: 1/1 Reducer 2: 1/1 > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24037) Parallelize hash table constructions in map joins
[ https://issues.apache.org/jira/browse/HIVE-24037?focusedWorklogId=502921=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502921 ] ASF GitHub Bot logged work on HIVE-24037: - Author: ASF GitHub Bot Created on: 21/Oct/20 00:57 Start Date: 21/Oct/20 00:57 Worklog Time Spent: 10m Work Description: github-actions[bot] commented on pull request #1401: URL: https://github.com/apache/hive/pull/1401#issuecomment-713224730 This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Feel free to reach out on the d...@hive.apache.org list if the patch is in need of reviews. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502921) Time Spent: 20m (was: 10m) > Parallelize hash table constructions in map joins > - > > Key: HIVE-24037 > URL: https://issues.apache.org/jira/browse/HIVE-24037 > Project: Hive > Issue Type: Improvement >Reporter: Ramesh Kumar Thangarajan >Assignee: Ramesh Kumar Thangarajan >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Parallelize hash table constructions in map joins -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24053) Pluggable HttpRequestInterceptor for Hive JDBC
[ https://issues.apache.org/jira/browse/HIVE-24053?focusedWorklogId=502919=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502919 ] ASF GitHub Bot logged work on HIVE-24053: - Author: ASF GitHub Bot Created on: 21/Oct/20 00:57 Start Date: 21/Oct/20 00:57 Worklog Time Spent: 10m Work Description: github-actions[bot] commented on pull request #1417: URL: https://github.com/apache/hive/pull/1417#issuecomment-713224719 This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Feel free to reach out on the d...@hive.apache.org list if the patch is in need of reviews. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502919) Time Spent: 20m (was: 10m) > Pluggable HttpRequestInterceptor for Hive JDBC > -- > > Key: HIVE-24053 > URL: https://issues.apache.org/jira/browse/HIVE-24053 > Project: Hive > Issue Type: New Feature > Components: JDBC >Affects Versions: 3.1.2 >Reporter: Ying Wang >Assignee: Ying Wang >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Allows client to pass in the name of a customize HttpRequestInterceptor, > instantiate the class and adds it to HttpClient. > Example usage: We would like to pass in a HttpRequestInterceptor for OAuth2.0 > Authentication purpose. The HttpRequestInterceptor will acquire and/or > refresh the access token and add it as authentication header each time > HiveConnection sends the HttpRequest. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-15157) Partition Table With timestamp type on S3 storage --> Error in getting fields from serde.Invalid Field null
[ https://issues.apache.org/jira/browse/HIVE-15157?focusedWorklogId=502920=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502920 ] ASF GitHub Bot logged work on HIVE-15157: - Author: ASF GitHub Bot Created on: 21/Oct/20 00:57 Start Date: 21/Oct/20 00:57 Worklog Time Spent: 10m Work Description: github-actions[bot] closed pull request #840: URL: https://github.com/apache/hive/pull/840 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502920) Time Spent: 1h 10m (was: 1h) > Partition Table With timestamp type on S3 storage --> Error in getting fields > from serde.Invalid Field null > --- > > Key: HIVE-15157 > URL: https://issues.apache.org/jira/browse/HIVE-15157 > Project: Hive > Issue Type: Bug > Components: Clients >Affects Versions: 2.1.0 > Environment: JDK 1.8 101 >Reporter: thauvin damien >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Labels: pull-request-available, timestamp > Attachments: HIVE-15157.01.patch, HIVE-15157.02.patch, > HIVE-15157.03.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Hello > I get the error above when i try to perform : > hive> DESCRIBE formatted table partition (tsbucket='2016-10-28 16%3A00%3A00'); > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Error in getting fields from > serde.Invalid Field null > Here is the description of the issue. > --External table Hive with dynamic partition enable on Aws S3 storage. > --Partition Table with timestamp type . > When i perform "show partition table;" everything is fine : > hive> show partitions table; > OK > tsbucket=2016-10-01 11%3A00%3A00 > tsbucket=2016-10-28 16%3A00%3A00 > And when i perform "describe FORMATTED table;" everything is fine > Is this a bug ? 
> The stacktrace of hive.log : > 2016-11-08T10:30:20,868 ERROR [ac3e0d48-22c5-4d04-a788-aeb004ea94f3 > main([])]: exec.DDLTask (DDLTask.java:failed(574)) - > org.apache.hadoop.hive.ql.metadata.HiveException: Error in getting fields > from serde.Invalid Field null > at > org.apache.hadoop.hive.ql.metadata.Hive.getFieldsFromDeserializer(Hive.java:3414) > at > org.apache.hadoop.hive.ql.exec.DDLTask.describeTable(DDLTask.java:3109) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:408) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1858) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1562) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1313) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1084) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1072) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > Caused by: MetaException(message:Invalid Field null) > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getFieldsFromDeserializer(MetaStoreUtils.java:1336) > at > 
org.apache.hadoop.hive.ql.metadata.Hive.getFieldsFromDeserializer(Hive.java:3409) > ... 21 more -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-21052) Make sure transactions get cleaned if they are aborted before addPartitions is called
[ https://issues.apache.org/jira/browse/HIVE-21052?focusedWorklogId=502838&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502838 ] ASF GitHub Bot logged work on HIVE-21052: - Author: ASF GitHub Bot Created on: 20/Oct/20 20:14 Start Date: 20/Oct/20 20:14 Worklog Time Spent: 10m Work Description: deniskuzZ commented on a change in pull request #1548: URL: https://github.com/apache/hive/pull/1548#discussion_r508809944 ## File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java ## @@ -589,4 +593,9 @@ private void checkInterrupt() throws InterruptedException { throw new InterruptedException("Compaction execution is interrupted"); } } -} + + private static boolean isDynPartAbort(Table t, CompactionInfo ci) { Review comment: could be. Do you know of a helper class to which I could move the isDynPartAbort method? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502838) Time Spent: 11h 40m (was: 11.5h) > Make sure transactions get cleaned if they are aborted before addPartitions > is called > - > > Key: HIVE-21052 > URL: https://issues.apache.org/jira/browse/HIVE-21052 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0, 3.1.1 >Reporter: Jaume M >Assignee: Jaume M >Priority: Critical > Labels: pull-request-available > Attachments: Aborted Txn w_Direct Write.pdf, HIVE-21052.1.patch, > HIVE-21052.10.patch, HIVE-21052.11.patch, HIVE-21052.12.patch, > HIVE-21052.2.patch, HIVE-21052.3.patch, HIVE-21052.4.patch, > HIVE-21052.5.patch, HIVE-21052.6.patch, HIVE-21052.7.patch, > HIVE-21052.8.patch, HIVE-21052.9.patch > > Time Spent: 11h 40m > Remaining Estimate: 0h > > If the transaction is aborted between openTxn and addPartitions and data has > been written on the table the transaction manager will think it's an empty > transaction and no cleaning will be done. > This is currently an issue in the streaming API and in micromanaged tables. > As proposed by [~ekoifman] this can be solved by: > * Writing an entry with a special marker to TXN_COMPONENTS at openTxn and > when addPartitions is called remove this entry from TXN_COMPONENTS and add > the corresponding partition entry to TXN_COMPONENTS. > * If the cleaner finds and entry with a special marker in TXN_COMPONENTS that > specifies that a transaction was opened and it was aborted it must generate > jobs for the worker for every possible partition available. > cc [~ewohlstadter] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-21052) Make sure transactions get cleaned if they are aborted before addPartitions is called
[ https://issues.apache.org/jira/browse/HIVE-21052?focusedWorklogId=502837=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502837 ] ASF GitHub Bot logged work on HIVE-21052: - Author: ASF GitHub Bot Created on: 20/Oct/20 20:10 Start Date: 20/Oct/20 20:10 Worklog Time Spent: 10m Work Description: deniskuzZ commented on a change in pull request #1548: URL: https://github.com/apache/hive/pull/1548#discussion_r508807499 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java ## @@ -414,76 +403,30 @@ public void markCleaned(CompactionInfo info) throws MetaException { * aborted TXN_COMPONENTS above tc_writeid (and consequently about aborted txns). * See {@link ql.txn.compactor.Cleaner.removeFiles()} */ -s = "SELECT DISTINCT \"TXN_ID\" FROM \"TXNS\", \"TXN_COMPONENTS\" WHERE \"TXN_ID\" = \"TC_TXNID\" " -+ "AND \"TXN_STATE\" = " + TxnStatus.ABORTED + " AND \"TC_DATABASE\" = ? AND \"TC_TABLE\" = ?"; -if (info.highestWriteId != 0) s += " AND \"TC_WRITEID\" <= ?"; -if (info.partName != null) s += " AND \"TC_PARTITION\" = ?"; - +s = "DELETE FROM \"TXN_COMPONENTS\" WHERE \"TC_TXNID\" IN (" + Review comment: @pvary, could you please take a quick look? thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502837) Time Spent: 11.5h (was: 11h 20m) > Make sure transactions get cleaned if they are aborted before addPartitions > is called > - > > Key: HIVE-21052 > URL: https://issues.apache.org/jira/browse/HIVE-21052 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0, 3.1.1 >Reporter: Jaume M >Assignee: Jaume M >Priority: Critical > Labels: pull-request-available > Attachments: Aborted Txn w_Direct Write.pdf, HIVE-21052.1.patch, > HIVE-21052.10.patch, HIVE-21052.11.patch, HIVE-21052.12.patch, > HIVE-21052.2.patch, HIVE-21052.3.patch, HIVE-21052.4.patch, > HIVE-21052.5.patch, HIVE-21052.6.patch, HIVE-21052.7.patch, > HIVE-21052.8.patch, HIVE-21052.9.patch > > Time Spent: 11.5h > Remaining Estimate: 0h > > If the transaction is aborted between openTxn and addPartitions and data has > been written on the table the transaction manager will think it's an empty > transaction and no cleaning will be done. > This is currently an issue in the streaming API and in micromanaged tables. > As proposed by [~ekoifman] this can be solved by: > * Writing an entry with a special marker to TXN_COMPONENTS at openTxn and > when addPartitions is called remove this entry from TXN_COMPONENTS and add > the corresponding partition entry to TXN_COMPONENTS. > * If the cleaner finds and entry with a special marker in TXN_COMPONENTS that > specifies that a transaction was opened and it was aborted it must generate > jobs for the worker for every possible partition available. > cc [~ewohlstadter] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-21052) Make sure transactions get cleaned if they are aborted before addPartitions is called
[ https://issues.apache.org/jira/browse/HIVE-21052?focusedWorklogId=502835=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502835 ] ASF GitHub Bot logged work on HIVE-21052: - Author: ASF GitHub Bot Created on: 20/Oct/20 20:05 Start Date: 20/Oct/20 20:05 Worklog Time Spent: 10m Work Description: deniskuzZ commented on a change in pull request #1548: URL: https://github.com/apache/hive/pull/1548#discussion_r508805163 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java ## @@ -414,76 +403,30 @@ public void markCleaned(CompactionInfo info) throws MetaException { * aborted TXN_COMPONENTS above tc_writeid (and consequently about aborted txns). * See {@link ql.txn.compactor.Cleaner.removeFiles()} */ -s = "SELECT DISTINCT \"TXN_ID\" FROM \"TXNS\", \"TXN_COMPONENTS\" WHERE \"TXN_ID\" = \"TC_TXNID\" " -+ "AND \"TXN_STATE\" = " + TxnStatus.ABORTED + " AND \"TC_DATABASE\" = ? AND \"TC_TABLE\" = ?"; -if (info.highestWriteId != 0) s += " AND \"TC_WRITEID\" <= ?"; -if (info.partName != null) s += " AND \"TC_PARTITION\" = ?"; - +s = "DELETE FROM \"TXN_COMPONENTS\" WHERE \"TC_TXNID\" IN (" + Review comment: this is an optimization that makes everything in 1 db request instead of 2 (select + delete) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502835) Time Spent: 11h 20m (was: 11h 10m) > Make sure transactions get cleaned if they are aborted before addPartitions > is called > - > > Key: HIVE-21052 > URL: https://issues.apache.org/jira/browse/HIVE-21052 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0, 3.1.1 >Reporter: Jaume M >Assignee: Jaume M >Priority: Critical > Labels: pull-request-available > Attachments: Aborted Txn w_Direct Write.pdf, HIVE-21052.1.patch, > HIVE-21052.10.patch, HIVE-21052.11.patch, HIVE-21052.12.patch, > HIVE-21052.2.patch, HIVE-21052.3.patch, HIVE-21052.4.patch, > HIVE-21052.5.patch, HIVE-21052.6.patch, HIVE-21052.7.patch, > HIVE-21052.8.patch, HIVE-21052.9.patch > > Time Spent: 11h 20m > Remaining Estimate: 0h > > If the transaction is aborted between openTxn and addPartitions and data has > been written on the table the transaction manager will think it's an empty > transaction and no cleaning will be done. > This is currently an issue in the streaming API and in micromanaged tables. > As proposed by [~ekoifman] this can be solved by: > * Writing an entry with a special marker to TXN_COMPONENTS at openTxn and > when addPartitions is called remove this entry from TXN_COMPONENTS and add > the corresponding partition entry to TXN_COMPONENTS. > * If the cleaner finds and entry with a special marker in TXN_COMPONENTS that > specifies that a transaction was opened and it was aborted it must generate > jobs for the worker for every possible partition available. > cc [~ewohlstadter] -- This message was sent by Atlassian Jira (v8.3.4#803005)
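Editor's note: the optimization described above folds a SELECT followed by a DELETE into a single DELETE whose predicate embeds the former SELECT as an IN subquery, so only one round trip to the backing RDBMS is needed. A sketch of how such a statement might be assembled; the table and column names follow the snippet quoted above, but the builder itself is hypothetical, not metastore code, and `'a'` stands in for the aborted-state constant:

```java
public class AbortedTxnDeleteSketch {
    // Build a one-statement DELETE with the old SELECT as an IN subquery.
    // The optional clauses mirror the conditional string appends in the
    // quoted markCleaned() code.
    static String buildDelete(boolean hasHighestWriteId, boolean hasPartition) {
        StringBuilder s = new StringBuilder(
            "DELETE FROM \"TXN_COMPONENTS\" WHERE \"TC_TXNID\" IN ("
            + "SELECT \"TXN_ID\" FROM \"TXNS\" WHERE \"TXN_STATE\" = 'a')"
            + " AND \"TC_DATABASE\" = ? AND \"TC_TABLE\" = ?");
        if (hasHighestWriteId) {
            s.append(" AND \"TC_WRITEID\" <= ?");
        }
        if (hasPartition) {
            s.append(" AND \"TC_PARTITION\" = ?");
        }
        return s.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildDelete(true, false));
    }
}
```

The bind-parameter positions shift with the optional clauses, which is why the quoted code tracks a `paramCount` when wiring the `PreparedStatement`.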
[jira] [Work logged] (HIVE-21052) Make sure transactions get cleaned if they are aborted before addPartitions is called
[ https://issues.apache.org/jira/browse/HIVE-21052?focusedWorklogId=502834=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502834 ] ASF GitHub Bot logged work on HIVE-21052: - Author: ASF GitHub Bot Created on: 20/Oct/20 20:04 Start Date: 20/Oct/20 20:04 Worklog Time Spent: 10m Work Description: deniskuzZ commented on a change in pull request #1548: URL: https://github.com/apache/hive/pull/1548#discussion_r508804039 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java ## @@ -400,11 +389,11 @@ public void markCleaned(CompactionInfo info) throws MetaException { pStmt.setString(paramCount++, info.partName); } if(info.highestWriteId != 0) { - pStmt.setLong(paramCount++, info.highestWriteId); + pStmt.setLong(paramCount, info.highestWriteId); Review comment: redundant post increment ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java ## @@ -134,9 +132,6 @@ public CompactionTxnHandler() { response.add(info); } } - -LOG.debug("Going to rollback"); -dbConn.rollback(); Review comment: no idea :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502834) Time Spent: 11h 10m (was: 11h) > Make sure transactions get cleaned if they are aborted before addPartitions > is called > - > > Key: HIVE-21052 > URL: https://issues.apache.org/jira/browse/HIVE-21052 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0, 3.1.1 >Reporter: Jaume M >Assignee: Jaume M >Priority: Critical > Labels: pull-request-available > Attachments: Aborted Txn w_Direct Write.pdf, HIVE-21052.1.patch, > HIVE-21052.10.patch, HIVE-21052.11.patch, HIVE-21052.12.patch, > HIVE-21052.2.patch, HIVE-21052.3.patch, HIVE-21052.4.patch, > HIVE-21052.5.patch, HIVE-21052.6.patch, HIVE-21052.7.patch, > HIVE-21052.8.patch, HIVE-21052.9.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > If the transaction is aborted between openTxn and addPartitions and data has > been written on the table the transaction manager will think it's an empty > transaction and no cleaning will be done. > This is currently an issue in the streaming API and in micromanaged tables. > As proposed by [~ekoifman] this can be solved by: > * Writing an entry with a special marker to TXN_COMPONENTS at openTxn and > when addPartitions is called remove this entry from TXN_COMPONENTS and add > the corresponding partition entry to TXN_COMPONENTS. > * If the cleaner finds and entry with a special marker in TXN_COMPONENTS that > specifies that a transaction was opened and it was aborted it must generate > jobs for the worker for every possible partition available. > cc [~ewohlstadter] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-23962) Make bin/hive pick user defined jdbc url
[ https://issues.apache.org/jira/browse/HIVE-23962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17217885#comment-17217885 ] Naveen Gangam commented on HIVE-23962: -- [~Rajkumar Singh] We understand that beeline-site.xml is for this purpose. This fix is a further enhancement to that feature. It allows users to control the beeline profile being used via an environment variable. With existing workloads, to change the beeline URL, the user would have to 1) change the existing URL in beeline-site.xml or 2) add a new URL to beeline-site.xml and make it the default. The latter option works fine if you want to change it across all users, but users may want to do so selectively, across some sessions but not all. An environment variable enables this without having to change anything in the workload. For example, you might want to run a workload to compare the performance of acid tables vs non-acid tables. We can now create two new profiles in beeline-site.xml, one that enables acid tables by default and another that uses legacy mode where tables are non-acid by default, and run the workload with each, like so: BEELINE_URL_LEGACY=acid beeline -f "workload.hql" BEELINE_URL_LEGACY=non-acid beeline -f "workload.hql" [~Xiaomeng Zhang] [~ychena] Could you please review this? I made a minor change to the prior fix in PR1344 where we would only use "beeline -c" when service="cli". It doesn't work when we invoke the beeline.sh script. Thanks > Make bin/hive pick user defined jdbc url > - > > Key: HIVE-23962 > URL: https://issues.apache.org/jira/browse/HIVE-23962 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 4.0.0 >Reporter: Xiaomeng Zhang >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Currently the hive command triggers bin/hive, which runs "beeline" by default. > We want to pass an env variable so that users can define which URL beeline uses. 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24262) Optimise NullScanTaskDispatcher for cloud storage
[ https://issues.apache.org/jira/browse/HIVE-24262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17217872#comment-17217872 ] Ashutosh Chauhan commented on HIVE-24262: - I don't see any advantage in scanning the FS to determine whether a path is empty or not. The queries under consideration are select count(distinct part_col) from t. Since such queries don't reference any non-partition column, they are eligible for the metadata-only optimizer. Now, where partition objects exist in HMS but there is no data on the FS, the question is whether to return an empty resultset or to return the actual values from HMS. I think this is a corner case for which I am not sure the standard has a clear answer. I think we can choose to return a resultset containing the partition column. If we do that, we don't need to determine whether dirs are empty or not and can skip this expensive operation. Also, there is already a config for this, so if for some reason the alternative resultset is desired, that's also possible by turning off the optimization completely. > Optimise NullScanTaskDispatcher for cloud storage > - > > Key: HIVE-24262 > URL: https://issues.apache.org/jira/browse/HIVE-24262 > Project: Hive > Issue Type: Improvement >Reporter: Rajesh Balamohan >Assignee: Mustafa Iman >Priority: Major > > {noformat} > select count(DISTINCT ss_sold_date_sk) from store_sales; > -- > VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING > FAILED KILLED > -- > Map 1 .. container SUCCEEDED 1 100 > 0 0 > Reducer 2 .. 
container SUCCEEDED 1 100 > 0 0 > -- > VERTICES: 02/02 [==>>] 100% ELAPSED TIME: 5.55 s > -- > INFO : Status: DAG finished successfully in 5.44 seconds > INFO : > INFO : Query Execution Summary > INFO : > -- > INFO : OPERATION DURATION > INFO : > -- > INFO : Compile Query 102.02s > INFO : Prepare Plan 0.51s > INFO : Get Query Coordinator (AM) 0.01s > INFO : Submit Plan 0.33s > INFO : Start DAG 0.56s > INFO : Run DAG 5.44s > INFO : > -- > {noformat} > The reason for the "102 seconds" compile time is that it ends up doing an > "isEmptyPath" check for every partition path, which takes a lot of time in the > compilation phase. > If all paths share the same parent directory, we could > do a recursive listing just once (instead of listing each directory one at a > time sequentially) on cloud storage systems. > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/NullScanTaskDispatcher.java#L158 > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/NullScanTaskDispatcher.java#L121 > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/NullScanTaskDispatcher.java#L101 > With a temporary hacky fix, it comes down to 2 seconds from 100+ seconds. > {noformat} > INFO : Dag name: select count(DISTINCT ss_sold_...store_sales (Stage-1) > INFO : Status: Running (Executing on YARN cluster with App id > application_1602500203747_0003) > -- > VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING > FAILED KILLED > -- > Map 1 .. container SUCCEEDED 1 100 > 0 0 > Reducer 2 .. container SUCCEEDED 1 100 > 0 0 > -- > VERTICES: 02/02 [==>>] 100% ELAPSED TIME: 1.23 s > -- > INFO : Status: DAG finished successfully in 1.20 seconds > INFO : > INFO :
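The single-recursive-listing idea described in the ticket can be sketched as follows. This is a hypothetical illustration using java.nio.file on a local filesystem in place of Hadoop's FileSystem API (the class and method names are invented for the example, not part of the actual patch): one recursive walk under the common parent marks every non-empty directory, so each partition path is answered by a set lookup instead of its own isEmptyPath listing.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Stream;

public class BatchedEmptyCheck {
    // Single recursive traversal: every ancestor (up to the common parent)
    // of a data file is recorded as non-empty. Partition paths can then be
    // tested with a set lookup instead of one remote listing per path.
    static Set<Path> nonEmptyDirs(Path parent) throws IOException {
        Set<Path> nonEmpty = new HashSet<>();
        try (Stream<Path> files = Files.walk(parent)) {
            files.filter(Files::isRegularFile).forEach(f -> {
                for (Path d = f.getParent(); d != null && d.startsWith(parent); d = d.getParent()) {
                    nonEmpty.add(d);
                }
            });
        }
        return nonEmpty;
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("store_sales");
        Path p1 = Files.createDirectories(root.resolve("ss_sold_date_sk=2451181"));
        Path p2 = Files.createDirectories(root.resolve("ss_sold_date_sk=2451182"));
        Files.createFile(p1.resolve("000000_0"));
        Set<Path> nonEmpty = nonEmptyDirs(root);
        System.out.println(nonEmpty.contains(p1)); // true  (has a data file)
        System.out.println(nonEmpty.contains(p2)); // false (empty partition dir)
    }
}
```

On object stores, a recursive listing of the parent is typically one bulk operation rather than thousands of sequential round trips, which is where the reported 100+ s to 2 s compile-time improvement would come from.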
[jira] [Commented] (HIVE-24282) Show columns shouldn't sort output columns unless explicitly mentioned.
[ https://issues.apache.org/jira/browse/HIVE-24282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17217822#comment-17217822 ] Naresh P R commented on HIVE-24282: --- Thanks for the review & merge [~mgergely] !!! > Show columns shouldn't sort output columns unless explicitly mentioned. > --- > > Key: HIVE-24282 > URL: https://issues.apache.org/jira/browse/HIVE-24282 > Project: Hive > Issue Type: Improvement >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > CREATE TABLE foo_n7(c INT, a INT, b INT); > show columns in foo_n7; > {code:java} > // current output > a > b > c > // expected > c > a > b{code} > HIVE-18373 changed the original behaviour to sorted output. > Suggesting to provide an optional keyword sorted to sort the show columns > output > eg., > {code:java} > show sorted columns in foo_n7; > a > b > c > show columns in foo_n7 > c > a > b{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24282) Show columns shouldn't sort output columns unless explicitly mentioned.
[ https://issues.apache.org/jira/browse/HIVE-24282?focusedWorklogId=502803=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502803 ] ASF GitHub Bot logged work on HIVE-24282: - Author: ASF GitHub Bot Created on: 20/Oct/20 17:40 Start Date: 20/Oct/20 17:40 Worklog Time Spent: 10m Work Description: nareshpr commented on pull request #1584: URL: https://github.com/apache/hive/pull/1584#issuecomment-713027214 Thanks for the review & merge @miklosgergely This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502803) Time Spent: 0.5h (was: 20m) > Show columns shouldn't sort output columns unless explicitly mentioned. > --- > > Key: HIVE-24282 > URL: https://issues.apache.org/jira/browse/HIVE-24282 > Project: Hive > Issue Type: Improvement >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > CREATE TABLE foo_n7(c INT, a INT, b INT); > show columns in foo_n7; > {code:java} > // current output > a > b > c > // expected > c > a > b{code} > HIVE-18373 changed the original behaviour to sorted output. > Suggesting to provide an optional keyword sorted to sort the show columns > output > eg., > {code:java} > show sorted columns in foo_n7; > a > b > c > show columns in foo_n7 > c > a > b{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-21052) Make sure transactions get cleaned if they are aborted before addPartitions is called
[ https://issues.apache.org/jira/browse/HIVE-21052?focusedWorklogId=502802=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502802 ] ASF GitHub Bot logged work on HIVE-21052: - Author: ASF GitHub Bot Created on: 20/Oct/20 17:25 Start Date: 20/Oct/20 17:25 Worklog Time Spent: 10m Work Description: klcopp commented on a change in pull request #1548: URL: https://github.com/apache/hive/pull/1548#discussion_r508641433 ## File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Worker.java ## @@ -589,4 +593,9 @@ private void checkInterrupt() throws InterruptedException { throw new InterruptedException("Compaction execution is interrupted"); } } -} + + private static boolean isDynPartAbort(Table t, CompactionInfo ci) { Review comment: This can be consolidated with most of isDynPartIngest in CompactionUtils ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java ## @@ -400,11 +389,11 @@ public void markCleaned(CompactionInfo info) throws MetaException { pStmt.setString(paramCount++, info.partName); } if(info.highestWriteId != 0) { - pStmt.setLong(paramCount++, info.highestWriteId); + pStmt.setLong(paramCount, info.highestWriteId); Review comment: Why was this changed? 
## File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands2.java ## @@ -2128,24 +2129,601 @@ public void testCleanerForTxnToWriteId() throws Exception { 0, TxnDbUtil.countQueryAgent(hiveConf, "select count(*) from TXN_TO_WRITE_ID")); } - private void verifyDirAndResult(int expectedDeltas) throws Exception { -FileSystem fs = FileSystem.get(hiveConf); -// Verify the content of subdirs -FileStatus[] status = fs.listStatus(new Path(TEST_WAREHOUSE_DIR + "/" + -(Table.MMTBL).toString().toLowerCase()), FileUtils.HIDDEN_FILES_PATH_FILTER); + @Test + public void testMmTableAbortWithCompaction() throws Exception { Review comment: FYI MM tests are usually in TestTxnCommandsForMmTable.java but I don't really care about this ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java ## @@ -134,9 +132,6 @@ public CompactionTxnHandler() { response.add(info); } } - -LOG.debug("Going to rollback"); -dbConn.rollback(); Review comment: Any ideas about why this was here? Just curious ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java ## @@ -414,76 +403,30 @@ public void markCleaned(CompactionInfo info) throws MetaException { * aborted TXN_COMPONENTS above tc_writeid (and consequently about aborted txns). * See {@link ql.txn.compactor.Cleaner.removeFiles()} */ -s = "SELECT DISTINCT \"TXN_ID\" FROM \"TXNS\", \"TXN_COMPONENTS\" WHERE \"TXN_ID\" = \"TC_TXNID\" " -+ "AND \"TXN_STATE\" = " + TxnStatus.ABORTED + " AND \"TC_DATABASE\" = ? AND \"TC_TABLE\" = ?"; -if (info.highestWriteId != 0) s += " AND \"TC_WRITEID\" <= ?"; -if (info.partName != null) s += " AND \"TC_PARTITION\" = ?"; - +s = "DELETE FROM \"TXN_COMPONENTS\" WHERE \"TC_TXNID\" IN (" + Review comment: This is just refactoring right? LGTM but can you make sure @pvary sees this as well? This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502802) Time Spent: 11h (was: 10h 50m) > Make sure transactions get cleaned if they are aborted before addPartitions > is called > - > > Key: HIVE-21052 > URL: https://issues.apache.org/jira/browse/HIVE-21052 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0, 3.1.1 >Reporter: Jaume M >Assignee: Jaume M >Priority: Critical > Labels: pull-request-available > Attachments: Aborted Txn w_Direct Write.pdf, HIVE-21052.1.patch, > HIVE-21052.10.patch, HIVE-21052.11.patch, HIVE-21052.12.patch, > HIVE-21052.2.patch, HIVE-21052.3.patch, HIVE-21052.4.patch, > HIVE-21052.5.patch, HIVE-21052.6.patch, HIVE-21052.7.patch, > HIVE-21052.8.patch, HIVE-21052.9.patch > > Time Spent: 11h >
[jira] [Work logged] (HIVE-24217) HMS storage backend for HPL/SQL stored procedures
[ https://issues.apache.org/jira/browse/HIVE-24217?focusedWorklogId=502753=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502753 ] ASF GitHub Bot logged work on HIVE-24217: - Author: ASF GitHub Bot Created on: 20/Oct/20 15:51 Start Date: 20/Oct/20 15:51 Worklog Time Spent: 10m Work Description: zeroflag commented on a change in pull request #1542: URL: https://github.com/apache/hive/pull/1542#discussion_r508638143 ## File path: standalone-metastore/metastore-server/src/main/resources/package.jdo ## @@ -1549,6 +1549,83 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Review comment: Discussed this further offline, let's try the string based approach for now and see how it goes. I'll modify the patch. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502753) Time Spent: 2h 40m (was: 2.5h) > HMS storage backend for HPL/SQL stored procedures > - > > Key: HIVE-24217 > URL: https://issues.apache.org/jira/browse/HIVE-24217 > Project: Hive > Issue Type: Bug > Components: Hive, hpl/sql, Metastore >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Labels: pull-request-available > Attachments: HPL_SQL storedproc HMS storage.pdf > > Time Spent: 2h 40m > Remaining Estimate: 0h > > HPL/SQL procedures are currently stored in text files. The goal of this Jira > is to implement a Metastore backend for storing and loading these procedures. > This is an incremental step towards having fully capable stored procedures in > Hive. > > See the attached design for more information. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-23962) Make bin/hive pick user defined jdbc url
[ https://issues.apache.org/jira/browse/HIVE-23962?focusedWorklogId=502713=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502713 ] ASF GitHub Bot logged work on HIVE-23962: - Author: ASF GitHub Bot Created on: 20/Oct/20 14:16 Start Date: 20/Oct/20 14:16 Worklog Time Spent: 10m Work Description: nrg4878 commented on pull request #1591: URL: https://github.com/apache/hive/pull/1591#issuecomment-712885001 the original fix was authored by Xiaomeng Zhang which I had reviewed. So will submit this fix This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502713) Time Spent: 1h (was: 50m) > Make bin/hive pick user defined jdbc url > - > > Key: HIVE-23962 > URL: https://issues.apache.org/jira/browse/HIVE-23962 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 4.0.0 >Reporter: Xiaomeng Zhang >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Currently hive command will trigger bin/hive which run "beeline" by default. > We want to pass a env variable so that user can define which url beeline use. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24291) Compaction Cleaner prematurely cleans up deltas
[ https://issues.apache.org/jira/browse/HIVE-24291?focusedWorklogId=502705&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502705 ] ASF GitHub Bot logged work on HIVE-24291: - Author: ASF GitHub Bot Created on: 20/Oct/20 13:57 Start Date: 20/Oct/20 13:57 Worklog Time Spent: 10m Work Description: pvargacl commented on pull request #1592: URL: https://github.com/apache/hive/pull/1592#issuecomment-712871640 @klcopp @pvary Could either of you review this? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502705) Time Spent: 20m (was: 10m) > Compaction Cleaner prematurely cleans up deltas > --- > > Key: HIVE-24291 > URL: https://issues.apache.org/jira/browse/HIVE-24291 > Project: Hive > Issue Type: Bug >Reporter: Peter Varga >Assignee: Peter Varga >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Since HIVE-23107 the cleaner can clean up deltas that are still used by > running queries. > Example: > * TxnIds 1-5 write to a partition, all commit > * Compactor starts with txnId=6 > * Long-running query starts with txnId=7; it sees txnId=6 as open in its > snapshot > * Compaction commits > * Cleaner runs > Previously the min_history_level table would have prevented the Cleaner from > deleting deltas 1-5 while txnId=7 was open, but now they will be deleted and > the long-running query may fail if it tries to access the files. > A solution could be to not run the cleaner while any txn is open that was > opened before the compaction was committed (CQ_NEXT_TXN_ID) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24291) Compaction Cleaner prematurely cleans up deltas
[ https://issues.apache.org/jira/browse/HIVE-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-24291: -- Labels: pull-request-available (was: ) > Compaction Cleaner prematurely cleans up deltas > --- > > Key: HIVE-24291 > URL: https://issues.apache.org/jira/browse/HIVE-24291 > Project: Hive > Issue Type: Bug >Reporter: Peter Varga >Assignee: Peter Varga >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Since HIVE-23107 the cleaner can clean up deltas that are still used by > running queries. > Example: > * TxnIds 1-5 write to a partition, all commit > * Compactor starts with txnId=6 > * Long-running query starts with txnId=7; it sees txnId=6 as open in its > snapshot > * Compaction commits > * Cleaner runs > Previously the min_history_level table would have prevented the Cleaner from > deleting deltas 1-5 while txnId=7 was open, but now they will be deleted and > the long-running query may fail if it tries to access the files. > A solution could be to not run the cleaner while any txn is open that was > opened before the compaction was committed (CQ_NEXT_TXN_ID) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24291) Compaction Cleaner prematurely cleans up deltas
[ https://issues.apache.org/jira/browse/HIVE-24291?focusedWorklogId=502687&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502687 ] ASF GitHub Bot logged work on HIVE-24291: - Author: ASF GitHub Bot Created on: 20/Oct/20 13:34 Start Date: 20/Oct/20 13:34 Worklog Time Spent: 10m Work Description: pvargacl opened a new pull request #1592: URL: https://github.com/apache/hive/pull/1592 ### What changes were proposed in this pull request? The compaction cleaner should wait for all previous txns to commit. ### Why are the changes needed? See the example buggy scenario in the Jira. ### Does this PR introduce _any_ user-facing change? No ### How was this patch tested? Unit tests This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502687) Remaining Estimate: 0h Time Spent: 10m > Compaction Cleaner prematurely cleans up deltas > --- > > Key: HIVE-24291 > URL: https://issues.apache.org/jira/browse/HIVE-24291 > Project: Hive > Issue Type: Bug >Reporter: Peter Varga >Assignee: Peter Varga >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Since HIVE-23107 the cleaner can clean up deltas that are still used by > running queries. > Example: > * TxnIds 1-5 write to a partition, all commit > * Compactor starts with txnId=6 > * Long-running query starts with txnId=7; it sees txnId=6 as open in its > snapshot > * Compaction commits > * Cleaner runs > Previously the min_history_level table would have prevented the Cleaner from > deleting deltas 1-5 while txnId=7 was open, but now they will be deleted and > the long-running query may fail if it tries to access the files. 
> A solution could be to not run the cleaner while any txn is open that was > opened before the compaction was committed (CQ_NEXT_TXN_ID) -- This message was sent by Atlassian Jira (v8.3.4#803005)
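The gating condition proposed in the ticket can be written down as a tiny predicate. This is a hedged sketch, not the actual patch: CQ_NEXT_TXN_ID is the ticket's name for the first txnId allocated after the compaction committed, while mayClean and its parameters are illustrative names invented for this example.

```java
public class CleanerGate {
    /**
     * @param minOpenTxnId lowest currently-open txnId (Long.MAX_VALUE if none open)
     * @param cqNextTxnId  first txnId allocated after the compaction committed
     * @return whether the Cleaner may delete the compacted deltas
     */
    static boolean mayClean(long minOpenTxnId, long cqNextTxnId) {
        // Clean only once every txn that was open before the compaction
        // committed has ended; such txns may still read the old deltas.
        return minOpenTxnId >= cqNextTxnId;
    }

    public static void main(String[] args) {
        // Scenario from the ticket: the compactor used txnId=6 and committed
        // while the long-running reader txnId=7 was still open.
        System.out.println(mayClean(7, 8));              // false: must wait
        System.out.println(mayClean(Long.MAX_VALUE, 8)); // true: txn 7 finished
    }
}
```

This replaces the protection that the dropped min_history_level table used to provide, at the cost of delaying cleanup until all pre-commit transactions finish.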
[jira] [Resolved] (HIVE-24231) Enhance shared work optimizer to merge scans with filters on both sides
[ https://issues.apache.org/jira/browse/HIVE-24231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich resolved HIVE-24231. - Fix Version/s: 4.0.0 Resolution: Fixed merged into master. Thank you Jesus for reviewing the changes! > Enhance shared work optimizer to merge scans with filters on both sides > --- > > Key: HIVE-24231 > URL: https://issues.apache.org/jira/browse/HIVE-24231 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Time Spent: 3h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24231) Enhance shared work optimizer to merge scans with filters on both sides
[ https://issues.apache.org/jira/browse/HIVE-24231?focusedWorklogId=502643=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502643 ] ASF GitHub Bot logged work on HIVE-24231: - Author: ASF GitHub Bot Created on: 20/Oct/20 11:56 Start Date: 20/Oct/20 11:56 Worklog Time Spent: 10m Work Description: kgyrtkirk merged pull request #1553: URL: https://github.com/apache/hive/pull/1553 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502643) Time Spent: 3h 40m (was: 3.5h) > Enhance shared work optimizer to merge scans with filters on both sides > --- > > Key: HIVE-24231 > URL: https://issues.apache.org/jira/browse/HIVE-24231 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-24231) Enhance shared work optimizer to merge scans with filters on both sides
[ https://issues.apache.org/jira/browse/HIVE-24231?focusedWorklogId=502642=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-502642 ] ASF GitHub Bot logged work on HIVE-24231: - Author: ASF GitHub Bot Created on: 20/Oct/20 11:56 Start Date: 20/Oct/20 11:56 Worklog Time Spent: 10m Work Description: kgyrtkirk commented on a change in pull request #1553: URL: https://github.com/apache/hive/pull/1553#discussion_r508438776 ## File path: ql/src/java/org/apache/hadoop/hive/ql/optimizer/SharedWorkOptimizer.java ## @@ -284,6 +304,54 @@ public ParseContext transform(ParseContext pctx) throws SemanticException { return pctx; } + /** SharedWorkOptimization strategy modes */ + public enum Mode { +/** + * Merges two identical subtrees. + */ +SubtreeMerge, +/** + * Merges a filtered scan into a non-filtered scan. + * + * In case we are already scanning the whole table - we should not scan it twice. + */ +RemoveSemijoin, +/** + * Fuses two filtered table scans into a single one. + * + * Dynamic filter subtree is kept on both sides - but the table is onlt scanned once. Review comment: added fix to HIVE-24241 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 502642) Time Spent: 3.5h (was: 3h 20m) > Enhance shared work optimizer to merge scans with filters on both sides > --- > > Key: HIVE-24231 > URL: https://issues.apache.org/jira/browse/HIVE-24231 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Labels: pull-request-available > Time Spent: 3.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
[ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anishek Agarwal updated HIVE-24227: --- Resolution: Fixed Status: Resolved (was: Patch Available) Patch committed to master, Thanks for the patch [~^sharma] and review [~aasha]! > sys.replication_metrics table shows incorrect status for failed policies > > > Key: HIVE-24227 > URL: https://issues.apache.org/jira/browse/HIVE-24227 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24227.04.patch, HIVE-24227.05.patch, > HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-24291) Compaction Cleaner prematurely cleans up deltas
[ https://issues.apache.org/jira/browse/HIVE-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Varga reassigned HIVE-24291: -- > Compaction Cleaner prematurely cleans up deltas > --- > > Key: HIVE-24291 > URL: https://issues.apache.org/jira/browse/HIVE-24291 > Project: Hive > Issue Type: Bug >Reporter: Peter Varga >Assignee: Peter Varga >Priority: Major > > Since HIVE-23107 the cleaner can clean up deltas that are still used by > running queries. > Example: > * TxnIds 1-5 write to a partition, all commit > * Compactor starts with txnId=6 > * Long-running query starts with txnId=7; it sees txnId=6 as open in its > snapshot > * Compaction commits > * Cleaner runs > Previously the min_history_level table would have prevented the Cleaner from > deleting deltas 1-5 while txnId=7 was open, but now they will be deleted and > the long-running query may fail if it tries to access the files. > A solution could be to not run the cleaner while any txn is open that was > opened before the compaction was committed (CQ_NEXT_TXN_ID) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
[ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arko Sharma updated HIVE-24227: --- Attachment: (was: HIVE-24227.08.patch) > sys.replication_metrics table shows incorrect status for failed policies > > > Key: HIVE-24227 > URL: https://issues.apache.org/jira/browse/HIVE-24227 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24227.04.patch, HIVE-24227.05.patch, > HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
[ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arko Sharma updated HIVE-24227: --- Attachment: HIVE-24227.08.patch Status: Patch Available (was: Open) > sys.replication_metrics table shows incorrect status for failed policies > > > Key: HIVE-24227 > URL: https://issues.apache.org/jira/browse/HIVE-24227 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24227.04.patch, HIVE-24227.05.patch, > HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
[ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arko Sharma updated HIVE-24227: --- Attachment: (was: HIVE-24227.03.patch) > sys.replication_metrics table shows incorrect status for failed policies > > > Key: HIVE-24227 > URL: https://issues.apache.org/jira/browse/HIVE-24227 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24227.04.patch, HIVE-24227.05.patch, > HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
[ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arko Sharma updated HIVE-24227: --- Attachment: (was: HIVE-24227.02.patch) > sys.replication_metrics table shows incorrect status for failed policies > > > Key: HIVE-24227 > URL: https://issues.apache.org/jira/browse/HIVE-24227 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24227.04.patch, HIVE-24227.05.patch, > HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
[ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arko Sharma updated HIVE-24227: --- Attachment: HIVE-24227.08.patch > sys.replication_metrics table shows incorrect status for failed policies > > > Key: HIVE-24227 > URL: https://issues.apache.org/jira/browse/HIVE-24227 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24227.02.patch, HIVE-24227.03.patch, > HIVE-24227.04.patch, HIVE-24227.05.patch, HIVE-24227.06.patch, > HIVE-24227.07.patch, HIVE-24227.08.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
[ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arko Sharma updated HIVE-24227: --- Attachment: HIVE-24227.06.patch > sys.replication_metrics table shows incorrect status for failed policies > > > Key: HIVE-24227 > URL: https://issues.apache.org/jira/browse/HIVE-24227 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24227.02.patch, HIVE-24227.03.patch, > HIVE-24227.04.patch, HIVE-24227.05.patch, HIVE-24227.06.patch, > HIVE-24227.07.patch, HIVE-24227.08.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
     [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arko Sharma updated HIVE-24227:
-------------------------------
    Attachment: HIVE-24227.07.patch

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24227.02.patch, HIVE-24227.03.patch, HIVE-24227.04.patch, HIVE-24227.05.patch, HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
     [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arko Sharma updated HIVE-24227:
-------------------------------
    Attachment: (was: HIVE-24227.01.patch)

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24227.02.patch, HIVE-24227.03.patch, HIVE-24227.04.patch, HIVE-24227.05.patch, HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
     [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arko Sharma updated HIVE-24227:
-------------------------------
    Attachment: HIVE-24227.05.patch

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24227.02.patch, HIVE-24227.03.patch, HIVE-24227.04.patch, HIVE-24227.05.patch, HIVE-24227.06.patch, HIVE-24227.07.patch, HIVE-24227.08.patch
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
     [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arko Sharma updated HIVE-24227:
-------------------------------
    Attachment: HIVE-24227.02.patch

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24227.01.patch, HIVE-24227.02.patch, HIVE-24227.03.patch, HIVE-24227.04.patch
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
     [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arko Sharma updated HIVE-24227:
-------------------------------
    Attachment: HIVE-24227.01.patch

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24227.01.patch, HIVE-24227.02.patch, HIVE-24227.03.patch, HIVE-24227.04.patch
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
     [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arko Sharma updated HIVE-24227:
-------------------------------
    Attachment: HIVE-24227.04.patch

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24227.01.patch, HIVE-24227.02.patch, HIVE-24227.03.patch, HIVE-24227.04.patch
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
     [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arko Sharma updated HIVE-24227:
-------------------------------
    Attachment: HIVE-24227.03.patch

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-24227.01.patch, HIVE-24227.02.patch, HIVE-24227.03.patch, HIVE-24227.04.patch
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-18045) can VectorizedOrcAcidRowBatchReader be used all the time
     [ https://issues.apache.org/jira/browse/HIVE-18045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Saurabh Seth reassigned HIVE-18045:
-----------------------------------
    Assignee: (was: Saurabh Seth)

> can VectorizedOrcAcidRowBatchReader be used all the time
> --------------------------------------------------------
>
>                 Key: HIVE-18045
>                 URL: https://issues.apache.org/jira/browse/HIVE-18045
>             Project: Hive
>          Issue Type: Improvement
>          Components: Transactions
>            Reporter: Eugene Koifman
>            Priority: Blocker
>
> Can we use VectorizedOrcAcidRowBatchReader for non-vectorized queries?
> It would just need a wrapper on top of it to turn VRBs into rows.
> This would mean there is just 1 acid reader to maintain - not 2.
> Would this be an issue for sorted reader/SMB support?

--
This message was sent by Atlassian Jira (v8.3.4#803005)
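[Editor's note] The "wrapper on top of it to turn VRBs into rows" idea in HIVE-18045 can be sketched as an iterator that pulls columnar batches from an underlying reader and hands them out row by row. This is a minimal stand-alone sketch: the `Batch` type below is a simplified stand-in for Hive's `VectorizedRowBatch`, not the real API, and `BatchToRowReader` is a hypothetical name, not a class from the Hive codebase.

```java
import java.util.Iterator;
import java.util.List;

/**
 * Hypothetical sketch of wrapping a batch (columnar) reader so that
 * callers see one row at a time. Not Hive's actual reader API.
 */
public class BatchToRowReader implements Iterator<long[]> {
    /** Simplified stand-in for a columnar batch: cols[column][row]. */
    public static final class Batch {
        final long[][] cols;
        final int size; // number of valid rows in this batch
        public Batch(long[][] cols, int size) { this.cols = cols; this.size = size; }
    }

    private final Iterator<Batch> batches; // stands in for the vectorized reader
    private Batch current;
    private int rowIdx;

    public BatchToRowReader(Iterator<Batch> batches) { this.batches = batches; }

    @Override public boolean hasNext() {
        // Advance to the next non-exhausted batch, skipping empty ones.
        while ((current == null || rowIdx >= current.size) && batches.hasNext()) {
            current = batches.next();
            rowIdx = 0;
        }
        return current != null && rowIdx < current.size;
    }

    @Override public long[] next() {
        // Transpose one row out of the column vectors.
        long[] row = new long[current.cols.length];
        for (int c = 0; c < current.cols.length; c++) {
            row[c] = current.cols[c][rowIdx];
        }
        rowIdx++;
        return row;
    }

    public static BatchToRowReader of(List<Batch> batches) {
        return new BatchToRowReader(batches.iterator());
    }
}
```

The real wrapper would also have to carry per-row ACID metadata (and respect the batch's `selected` mapping), which is where the sorted-reader/SMB question in the ticket comes in; this sketch only shows the batch-to-row iteration shape.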
[jira] [Commented] (HIVE-24227) sys.replication_metrics table shows incorrect status for failed policies
    [ https://issues.apache.org/jira/browse/HIVE-24227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17217401#comment-17217401 ]

Aasha Medhi commented on HIVE-24227:
------------------------------------

+1

> sys.replication_metrics table shows incorrect status for failed policies
> ------------------------------------------------------------------------
>
>                 Key: HIVE-24227
>                 URL: https://issues.apache.org/jira/browse/HIVE-24227
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>

--
This message was sent by Atlassian Jira (v8.3.4#803005)