[jira] [Commented] (HIVE-19644) change WM syntax to avoid conflicts with identifiers starting with a number
[ https://issues.apache.org/jira/browse/HIVE-19644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491527#comment-16491527 ] Hive QA commented on HIVE-19644: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 38s{color} | {color:blue} ql in master has 2323 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 45s{color} | {color:red} ql generated 2 new + 2320 unchanged - 3 fixed = 2322 total (was 2323) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Dead store to LA29_5 in org.apache.hadoop.hive.ql.parse.HiveLexer$DFA29.specialStateTransition(int, IntStream) At HiveLexer.java:org.apache.hadoop.hive.ql.parse.HiveLexer$DFA29.specialStateTransition(int, IntStream) At HiveLexer.java:[line 12643] | | | Should org.apache.hadoop.hive.ql.parse.HiveLexer$DFA34 be a _static_ inner class? At HiveLexer.java:inner class? At HiveLexer.java:[lines 14968-15059] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11222/dev-support/hive-personality.sh | | git revision | master / cbebe69 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-11222/yetus/new-findbugs-ql.html | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11222/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> change WM syntax to avoid conflicts with identifiers starting with a number > --- > > Key: HIVE-19644 > URL: https://issues.apache.org/jira/browse/HIVE-19644 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19644.01.patch, HIVE-19644.patch > > > Time/etc literals conflict with non-ANSI query column names starting with a > number that were previously supported without quotes (e.g. 30days). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
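The conflict described in HIVE-19644 can be illustrated with a short HiveQL sketch (the table and column names below are hypothetical examples, not taken from the patch):

{code:sql}
-- Hypothetical table whose column names start with a digit; such names
-- were previously accepted without quoting.
CREATE TABLE retention (`30days` INT, `12hours` INT);

-- Once time-unit literals (e.g. 30 DAYS) enter the grammar, an unquoted
-- token like "30days" becomes ambiguous between an interval literal and
-- a column reference. Backtick quoting always resolves to the identifier:
SELECT `30days` FROM retention;
{code}

The syntax change in the patch avoids claiming these tokens for workload-management literals, so existing unquoted column references keep working.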
[jira] [Commented] (HIVE-19598) Acid V1 to V2 upgrade
[ https://issues.apache.org/jira/browse/HIVE-19598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491518#comment-16491518 ] Hive QA commented on HIVE-19598: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925220/HIVE-19598.05.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14395 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11221/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11221/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11221/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12925220 - PreCommit-HIVE-Build > Acid V1 to V2 upgrade > - > > Key: HIVE-19598 > URL: https://issues.apache.org/jira/browse/HIVE-19598 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-19598.02.patch, HIVE-19598.05.patch > > > The on-disk layout for full acid (transactional) tables has changed in 3.0. > Any transactional table that has any update/delete events in any deltas that > have not been Major compacted must go through a Major compaction before > upgrading to 3.0. No more update/delete/merge should be run after/during > major compaction. > Not doing so will result in data corruption/loss. > > Need to create a utility tool to help with this process. HIVE-19233 started > this but it needs more work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
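The pre-upgrade step described above (major-compact every transactional table with uncompacted update/delete deltas) can be sketched with Hive's compaction DDL; the table and partition names here are examples, not from the issue:

{code:sql}
-- Run on the pre-3.0 cluster, after stopping update/delete/merge traffic.
ALTER TABLE acid_orders COMPACT 'major';
ALTER TABLE acid_events PARTITION (ds='2018-05-01') COMPACT 'major';

-- Verify the requested compactions reached a terminal state before upgrading:
SHOW COMPACTIONS;
{code}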
[jira] [Commented] (HIVE-19598) Acid V1 to V2 upgrade
[ https://issues.apache.org/jira/browse/HIVE-19598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491504#comment-16491504 ] Hive QA commented on HIVE-19598: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 32s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 35s{color} | {color:blue} ql in master has 2323 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 46s{color} | {color:blue} standalone-metastore in master has 216 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 42s{color} | {color:red} root: The patch generated 415 new + 47 unchanged - 16 fixed = 462 total (was 63) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s{color} | {color:green} The patch packaging passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s{color} | {color:red} ql: The patch generated 1 new + 1 unchanged - 2 fixed = 2 total (was 3) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} standalone-metastore: The patch generated 0 new + 46 unchanged - 14 fixed = 46 total (was 60) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} upgrade-acid: The patch generated 414 new + 0 unchanged - 0 fixed = 414 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green} ql in the patch passed. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 49s{color} | {color:green} standalone-metastore generated 0 new + 215 unchanged - 1 fixed = 215 total (was 216) {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 26s{color} | {color:green} upgrade-acid in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11221/dev-support/hive-personality.sh | | git revision | master / cbebe69 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11221/yetus/diff-checkstyle-root.txt | | checkstyle |
[jira] [Commented] (HIVE-19687) Export table on acid partitioned table is failing
[ https://issues.apache.org/jira/browse/HIVE-19687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491498#comment-16491498 ] Ashutosh Chauhan commented on HIVE-19687: - +1 > Export table on acid partitioned table is failing > - > > Key: HIVE-19687 > URL: https://issues.apache.org/jira/browse/HIVE-19687 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19687.1.patch, HIVE-19687.2.patch > > > *Reproducer* > {code:sql} > create table exportPartitionTable(id int, name string) partitioned by(country > string) clustered by (id) into 2 buckets stored as orc tblproperties > ("transactional"="true"); > export table exportPartitionTable PARTITION (country='india') to > '/tmp/exportDataStore'; > {code} > *Error* > {noformat} > FAILED: SemanticException [Error 10004]: Line 1:165 Invalid table alias or > column reference 'india': (possible column names are: id, name, country) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19675) Cast to timestamps on Druid time column leads to an exception
[ https://issues.apache.org/jira/browse/HIVE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491497#comment-16491497 ] Ashutosh Chauhan commented on HIVE-19675: - ok +1 > Cast to timestamps on Druid time column leads to an exception > - > > Key: HIVE-19675 > URL: https://issues.apache.org/jira/browse/HIVE-19675 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19675.2.patch, HIVE-19675.patch > > > The following query fails due to a formatting issue. > {code} > SELECT CAST(`ssb_druid_100`.`__time` AS TIMESTAMP) AS `x_time`, > . . . . . . . . . . . . . . . .> SUM(`ssb_druid_100`.`lo_revenue`) AS > `sum_lo_revenue_ok` > . . . . . . . . . . . . . . . .> FROM `druid_ssb`.`ssb_druid_100` > `ssb_druid_100` > . . . . . . . . . . . . . . . .> GROUP BY CAST(`ssb_druid_100`.`__time` AS > TIMESTAMP); > {code} > Exception > {code} > Error: java.io.IOException: java.lang.NumberFormatException: For input > string: "1991-12-31 19:00:00" (state=,code=0) > {code} > [~jcamachorodriguez] maybe this is fixed by your upcoming patches. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18748) Rename table impacts the ACID behavior as table names are not updated in meta-tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491492#comment-16491492 ] Hive QA commented on HIVE-18748: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12924857/HIVE-18748.06-branch-3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11220/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11220/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11220/ Messages: {noformat} This message was trimmed, see log for full details [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/security/SecurityUtil.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/util/GenericOptionsParser.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-rewrite/9.3.8.v20160314/jetty-rewrite-9.3.8.v20160314.jar(org/eclipse/jetty/rewrite/handler/RedirectPatternRule.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-rewrite/9.3.8.v20160314/jetty-rewrite-9.3.8.v20160314.jar(org/eclipse/jetty/rewrite/handler/RewriteHandler.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-server/9.3.8.v20160314/jetty-server-9.3.8.v20160314.jar(org/eclipse/jetty/server/Handler.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-server/9.3.8.v20160314/jetty-server-9.3.8.v20160314.jar(org/eclipse/jetty/server/Server.class)]] [loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-server/9.3.8.v20160314/jetty-server-9.3.8.v20160314.jar(org/eclipse/jetty/server/ServerConnector.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-server/9.3.8.v20160314/jetty-server-9.3.8.v20160314.jar(org/eclipse/jetty/server/handler/HandlerList.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-servlet/9.3.8.v20160314/jetty-servlet-9.3.8.v20160314.jar(org/eclipse/jetty/servlet/FilterHolder.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-servlet/9.3.8.v20160314/jetty-servlet-9.3.8.v20160314.jar(org/eclipse/jetty/servlet/ServletContextHandler.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-servlet/9.3.8.v20160314/jetty-servlet-9.3.8.v20160314.jar(org/eclipse/jetty/servlet/ServletHolder.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-xml/9.3.8.v20160314/jetty-xml-9.3.8.v20160314.jar(org/eclipse/jetty/xml/XmlConfiguration.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/org/slf4j/jul-to-slf4j/1.7.10/jul-to-slf4j-1.7.10.jar(org/slf4j/bridge/SLF4JBridgeHandler.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/DispatcherType.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/Filter.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/FilterChain.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/FilterConfig.class)]] [loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/ServletException.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/ServletRequest.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/ServletResponse.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/annotation/WebFilter.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/http/HttpServletRequest.class)]] [loading ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/http/HttpServletResponse.class)]] [loading
[jira] [Commented] (HIVE-19687) Export table on acid partitioned table is failing
[ https://issues.apache.org/jira/browse/HIVE-19687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491489#comment-16491489 ] Hive QA commented on HIVE-19687: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925092/HIVE-19687.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14394 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_export] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_retry_failure] (batchId=171) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11219/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11219/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11219/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12925092 - PreCommit-HIVE-Build > Export table on acid partitioned table is failing > - > > Key: HIVE-19687 > URL: https://issues.apache.org/jira/browse/HIVE-19687 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19687.1.patch, HIVE-19687.2.patch > > > *Reproducer* > {code:sql} > create table exportPartitionTable(id int, name string) partitioned by(country > string) clustered by (id) into 2 buckets stored as orc tblproperties > ("transactional"="true"); > export table exportPartitionTable PARTITION (country='india') to > '/tmp/exportDataStore'; > {code} > *Error* > {noformat} > FAILED: SemanticException [Error 10004]: Line 1:165 Invalid table alias or > column reference 'india': (possible column names are: id, name, country) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19687) Export table on acid partitioned table is failing
[ https://issues.apache.org/jira/browse/HIVE-19687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491476#comment-16491476 ] Hive QA commented on HIVE-19687: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 44s{color} | {color:blue} ql in master has 2323 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 1 new + 625 unchanged - 0 fixed = 626 total (was 625) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch 2 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11219/dev-support/hive-personality.sh | | git revision | master / cbebe69 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11219/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-11219/yetus/whitespace-eol.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-11219/yetus/whitespace-tabs.txt | | modules | C: itests ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11219/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Export table on acid partitioned table is failing > - > > Key: HIVE-19687 > URL: https://issues.apache.org/jira/browse/HIVE-19687 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19687.1.patch, HIVE-19687.2.patch > > > *Reproducer* > {code:sql} > create table exportPartitionTable(id int, name string) partitioned by(country > string) clustered by (id) into 2 buckets stored as orc tblproperties > ("transactional"="true"); > export table exportPartitionTable PARTITION (country='india') to > '/tmp/exportDataStore'; > {code} > *Error* > {noformat} > FAILED:
[jira] [Commented] (HIVE-19685) OpenTracing support for HMS
[ https://issues.apache.org/jira/browse/HIVE-19685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491467#comment-16491467 ] Hive QA commented on HIVE-19685: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12924852/hive-19685.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14393 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join4] (batchId=186) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11218/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11218/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11218/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12924852 - PreCommit-HIVE-Build > OpenTracing support for HMS > --- > > Key: HIVE-19685 > URL: https://issues.apache.org/jira/browse/HIVE-19685 > Project: Hive > Issue Type: New Feature > Components: Metastore >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Major > Attachments: hive-19685.patch, trace.png > > > When diagnosing performance of metastore operations it isn't always obvious > why something took a long time. Using a tracing framework can provide an > end-to-end view of an operation including time spent in dependent systems (eg > filesystem operations, RDBMS queries, etc). This JIRA proposes to integrate > OpenTracing, which is a vendor-neutral tracing API into the HMS server and > client. 
[jira] [Comment Edited] (HIVE-19675) Cast to timestamps on Druid time column leads to an exception
[ https://issues.apache.org/jira/browse/HIVE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491463#comment-16491463 ] slim bouguerra edited comment on HIVE-19675 at 5/26/18 2:15 AM: In case a cast to timestamp is pushed to Druid {code}CAST(`ssb_druid_100`.`__time` AS TIMESTAMP){code}, the result is formatted as a timestamp {code}yyyy-MM-dd HH:mm:ss{code}. This is probably not the best way to fix this, but I want us to at least fix this very nasty bug, and I will refactor this as soon as possible, especially after all the changes coming from [~jcamachorodriguez] around that part of the code. was (Author: bslim): In case a cast to timestamp is pushed to Druid, the upcoming result is formatted as a timestamp {code}yyyy-MM-dd HH:mm:ss{code}. This is probably not the best way to fix this, but I want us to at least fix this very nasty bug, and I will refactor this as soon as possible, especially after all the changes coming from [~jcamachorodriguez] around that part of the code. > Cast to timestamps on Druid time column leads to an exception > - > > Key: HIVE-19675 > URL: https://issues.apache.org/jira/browse/HIVE-19675 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19675.2.patch, HIVE-19675.patch > > > The following query fails due to a formatting issue. > {code} > SELECT CAST(`ssb_druid_100`.`__time` AS TIMESTAMP) AS `x_time`, > . . . . . . . . . . . . . . . .> SUM(`ssb_druid_100`.`lo_revenue`) AS > `sum_lo_revenue_ok` > . . . . . . . . . . . . . . . .> FROM `druid_ssb`.`ssb_druid_100` > `ssb_druid_100` > . . . . . . . . . . . . . . . .> GROUP BY CAST(`ssb_druid_100`.`__time` AS > TIMESTAMP); > {code} > Exception > {code} > Error: java.io.IOException: java.lang.NumberFormatException: For input > string: "1991-12-31 19:00:00" (state=,code=0) > {code} > [~jcamachorodriguez] maybe this is fixed by your upcoming patches. 
[jira] [Commented] (HIVE-19675) Cast to timestamps on Druid time column leads to an exception
[ https://issues.apache.org/jira/browse/HIVE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491463#comment-16491463 ] slim bouguerra commented on HIVE-19675: --- In case a cast to timestamp is pushed to Druid, the upcoming result is formatted as a timestamp {code}yyyy-MM-dd HH:mm:ss{code}. This is probably not the best way to fix this, but I want us to at least fix this very nasty bug, and I will refactor this as soon as possible, especially after all the changes coming from [~jcamachorodriguez] around that part of the code. > Cast to timestamps on Druid time column leads to an exception > - > > Key: HIVE-19675 > URL: https://issues.apache.org/jira/browse/HIVE-19675 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19675.2.patch, HIVE-19675.patch > > > The following query fails due to a formatting issue. > {code} > SELECT CAST(`ssb_druid_100`.`__time` AS TIMESTAMP) AS `x_time`, > . . . . . . . . . . . . . . . .> SUM(`ssb_druid_100`.`lo_revenue`) AS > `sum_lo_revenue_ok` > . . . . . . . . . . . . . . . .> FROM `druid_ssb`.`ssb_druid_100` > `ssb_druid_100` > . . . . . . . . . . . . . . . .> GROUP BY CAST(`ssb_druid_100`.`__time` AS > TIMESTAMP); > {code} > Exception > {code} > Error: java.io.IOException: java.lang.NumberFormatException: For input > string: "1991-12-31 19:00:00" (state=,code=0) > {code} > [~jcamachorodriguez] maybe this is fixed by your upcoming patches. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19685) OpenTracing support for HMS
[ https://issues.apache.org/jira/browse/HIVE-19685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491453#comment-16491453 ] Hive QA commented on HIVE-19685: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 47s{color} | {color:blue} standalone-metastore in master has 216 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} standalone-metastore: The patch generated 1 new + 532 unchanged - 0 fixed = 533 total (was 532) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11218/dev-support/hive-personality.sh | | git revision | master / cbebe69 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11218/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11218/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > OpenTracing support for HMS > --- > > Key: HIVE-19685 > URL: https://issues.apache.org/jira/browse/HIVE-19685 > Project: Hive > Issue Type: New Feature > Components: Metastore >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Major > Attachments: hive-19685.patch, trace.png > > > When diagnosing performance of metastore operations it isn't always obvious > why something took a long time. Using a tracing framework can provide an > end-to-end view of an operation including time spent in dependent systems (eg > filesystem operations, RDBMS queries, etc). This JIRA proposes to integrate > OpenTracing, which is a vendor-neutral tracing API into the HMS server and > client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19643) MM table conversion doesn't need full ACID structure checks
[ https://issues.apache.org/jira/browse/HIVE-19643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19643: Attachment: HIVE-19643.03.patch > MM table conversion doesn't need full ACID structure checks > --- > > Key: HIVE-19643 > URL: https://issues.apache.org/jira/browse/HIVE-19643 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Jason Dere >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19643.01.patch, HIVE-19643.02.patch, > HIVE-19643.03.patch, HIVE-19643.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19666) SQL standard auth for create fn may make an impossible privilege check (branch-2)
[ https://issues.apache.org/jira/browse/HIVE-19666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19666: Attachment: HIVE-19666.03-branch-2.patch > SQL standard auth for create fn may make an impossible privilege check > (branch-2) > - > > Key: HIVE-19666 > URL: https://issues.apache.org/jira/browse/HIVE-19666 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19666.01-branch-2.patch, > HIVE-19666.02-branch-2.patch, HIVE-19666.03-branch-2.patch, HIVE-19666.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19690) multi-insert query with multiple GBY, and distinct in only some branches can produce incorrect results
[ https://issues.apache.org/jira/browse/HIVE-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19690: Attachment: HIVE-19690.01.patch > multi-insert query with multiple GBY, and distinct in only some branches can > produce incorrect results > -- > > Key: HIVE-19690 > URL: https://issues.apache.org/jira/browse/HIVE-19690 > Project: Hive > Issue Type: Bug >Reporter: Riju Trivedi >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19690.01.patch, HIVE-19690.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19704) LLAP IO retries on branch-2 should be stoppable
[ https://issues.apache.org/jira/browse/HIVE-19704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19704: Attachment: HIVE-19704.02-branch-2.patch > LLAP IO retries on branch-2 should be stoppable > --- > > Key: HIVE-19704 > URL: https://issues.apache.org/jira/browse/HIVE-19704 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19704.01-branch-2.patch, > HIVE-19704.02-branch-2.patch > > > I will file a JIRA for master to switch IO to actually interrupt the IO > thread via a Future, but it might not be safe for branch-2. > Also, master doesn't depend on these retries in this spot in general, so it's > not as critical. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19418) add background stats updater similar to compactor
[ https://issues.apache.org/jira/browse/HIVE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491451#comment-16491451 ] Sergey Shelukhin commented on HIVE-19418: - Rebased and updated the patch > add background stats updater similar to compactor > - > > Key: HIVE-19418 > URL: https://issues.apache.org/jira/browse/HIVE-19418 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19418.01.patch, HIVE-19418.02.patch, > HIVE-19418.03.patch, HIVE-19418.patch > > > There's a JIRA HIVE-19416 to add snapshot version to stats for MM/ACID tables > to make them usable in a transaction without breaking ACID (for metadata-only > optimization). However, stats for ACID tables can still become unusable if > e.g. two parallel inserts run - neither sees the data written by the other, > so after both finish, the snapshots on either set of stats won't match the > current snapshot and the stats will be unusable. > Additionally, for ACID and non-ACID tables alike, a lot of the stats, with > some exceptions like numRows, cannot be aggregated (i.e. you cannot combine > ndvs from two inserts), and for ACID even less can be aggregated (you cannot > derive min/max if some rows are deleted but you don't scan the rest of the > dataset). > Therefore we will add background logic to metastore (similar to, and > partially inside, the ACID compactor) to update stats. > It will have 3 modes of operation. > 1) Off. > 2) Update only the stats that exist but are out of date (generating stats can > be expensive, so if the user is only analyzing a subset of tables it should > be able to only update that subset). We can simply look at existing stats and > only analyze for the relevant partitions and columns. > 3) On: 2 + create stats for all tables and columns missing stats. > There will also be a table parameter to skip stats update. 
> In phase 1, the process will operate outside of compactor, and run analyze > command on the table. The analyze command will automatically save the stats > with ACID snapshot information if needed, based on HIVE-19416, so we don't > need to do any special state management and this will work for all table > types. However it's also more expensive. > In phase 2, we can explore adding stats collection during MM compaction that > uses a temp table. If we don't have open writers during major compaction (so > we overwrite all of the data), the temp table stats can simply be copied over > to the main table with correct snapshot information, saving us a table scan. > In phase 3, we can add custom stats collection logic to full ACID compactor > that is not query based, the same way as we'd do for (2). Alternatively we > can wait for ACID compactor to become query based and just reuse (2). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
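The aggregation limitation described in the HIVE-19418 issue above (numRows adds up across inserts, but ndv does not) can be sketched as follows. This is an illustrative example only, not Hive's stats code; the class and method names are made up:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of why per-insert ndv stats cannot simply be summed (HIVE-19418).
public class StatsMerge {
    // Number of distinct values in a column slice.
    static int ndv(int[] values) {
        Set<Integer> distinct = new HashSet<>();
        for (int v : values) {
            distinct.add(v);
        }
        return distinct.size();
    }

    public static void main(String[] args) {
        int[] insertA = {1, 2, 3};
        int[] insertB = {3, 4};               // value 3 also appears in insertA
        int[] combined = {1, 2, 3, 3, 4};

        // numRows aggregates by simple addition across inserts...
        System.out.println(insertA.length + insertB.length == combined.length); // true

        // ...but ndv does not: summing per-insert ndvs overcounts shared values,
        // so the merged stat cannot be derived without rescanning the data.
        System.out.println(ndv(insertA) + ndv(insertB)); // 5
        System.out.println(ndv(combined));               // 4
    }
}
```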
[jira] [Updated] (HIVE-19418) add background stats updater similar to compactor
[ https://issues.apache.org/jira/browse/HIVE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19418: Attachment: HIVE-19418.03.patch > add background stats updater similar to compactor > - > > Key: HIVE-19418 > URL: https://issues.apache.org/jira/browse/HIVE-19418 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19418.01.patch, HIVE-19418.02.patch, > HIVE-19418.03.patch, HIVE-19418.patch > > > There's a JIRA HIVE-19416 to add snapshot version to stats for MM/ACID tables > to make them usable in a transaction without breaking ACID (for metadata-only > optimization). However, stats for ACID tables can still become unusable if > e.g. two parallel inserts run - neither sees the data written by the other, > so after both finish, the snapshots on either set of stats won't match the > current snapshot and the stats will be unusable. > Additionally, for ACID and non-ACID tables alike, a lot of the stats, with > some exceptions like numRows, cannot be aggregated (i.e. you cannot combine > ndvs from two inserts), and for ACID even less can be aggregated (you cannot > derive min/max if some rows are deleted but you don't scan the rest of the > dataset). > Therefore we will add background logic to metastore (similar to, and > partially inside, the ACID compactor) to update stats. > It will have 3 modes of operation. > 1) Off. > 2) Update only the stats that exist but are out of date (generating stats can > be expensive, so if the user is only analyzing a subset of tables it should > be able to only update that subset). We can simply look at existing stats and > only analyze for the relevant partitions and columns. > 3) On: 2 + create stats for all tables and columns missing stats. > There will also be a table parameter to skip stats update. 
> In phase 1, the process will operate outside of compactor, and run analyze > command on the table. The analyze command will automatically save the stats > with ACID snapshot information if needed, based on HIVE-19416, so we don't > need to do any special state management and this will work for all table > types. However it's also more expensive. > In phase 2, we can explore adding stats collection during MM compaction that > uses a temp table. If we don't have open writers during major compaction (so > we overwrite all of the data), the temp table stats can simply be copied over > to the main table with correct snapshot information, saving us a table scan. > In phase 3, we can add custom stats collection logic to full ACID compactor > that is not query based, the same way as we'd do for (2). Alternatively we > can wait for ACID compactor to become query based and just reuse (2). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19720) backport multiple MM commits to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19720: Status: Patch Available (was: Open) > backport multiple MM commits to branch-3 > > > Key: HIVE-19720 > URL: https://issues.apache.org/jira/browse/HIVE-19720 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19720.01-branch-3.patch > > > To avoid chained test runs of branch-3 backporting one by one, I will run > HiveQA on an epic combined patch, then commit patches w/proper commit > separation via cherry-pick: > 0930aec69b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) > 99a2b8bd6b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) ADDENDUM > 7ebcdeb951 HIVE-17657 : export/import for MM tables is broken (Sergey > Shelukhin, reviewed by Eugene Koifman) > 8db979f1ff (part not previously backported) HIVE-19476: Fix failures in > TestReplicationScenariosAcidTables, TestReplicationOnHDFSEncryptedZones and > TestCopyUtils (Sankar Hariappan, reviewed by Sergey Shelukhin) > f4352e5339 HIVE-19258 : add originals support to MM tables (and make the > conversion a metadata only operation) (Sergey Shelukhin, reviewed by Jason > Dere) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491444#comment-16491444 ] Hive QA commented on HIVE-19629: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925195/HIVE-19629.4.patch {color:green}SUCCESS:{color} +1 due to 12 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 422 failed/errored test(s), 14361 tests executed *Failed tests:* {noformat} TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=186) [insert_overwrite_directory2.q,spark_dynamic_partition_pruning_4.q,vector_outer_join0.q,bucket4.q,orc_merge4.q,bucket5.q,infer_bucket_sort_merge.q,orc_merge_incompat1.q,root_dir_external_table.q,constprog_partitioner.q,constprog_semijoin.q,external_table_with_space_in_location_path.q,spark_constprog_dpp.q,spark_dynamic_partition_pruning_3.q,load_fs2.q,infer_bucket_sort_map_operators.q,spark_dynamic_partition_pruning_2.q,vector_inner_join.q,spark_multi_insert_parallel_orderby.q,remote_script.q] TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=187) [scriptfile1.q,vector_outer_join5.q,file_with_header_footer.q,input16_cc.q,orc_merge2.q,reduce_deduplicate.q,schemeAuthority2.q,spark_dynamic_partition_pruning_5.q,orc_merge8.q,orc_merge_incompat2.q,infer_bucket_sort_bucketed_table.q,vector_outer_join4.q,disable_merge_for_bucketing.q,orc_merge7.q] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mergejoin] (batchId=62) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_file_dump] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge11] (batchId=41) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge5] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge6] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat1] (batchId=70) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat2] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_schema_evolution_float] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_struct_type_vectorization] (batchId=71) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_10] (batchId=23) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_11] (batchId=38) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_12] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_13] (batchId=53) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_14] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_15] (batchId=89) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_16] (batchId=84) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_17] (batchId=30) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_1] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_2] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_3] (batchId=79) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_4] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_6] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_7] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_8] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_9] (batchId=31) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_limit] (batchId=25) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[type_change_test_int] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[type_change_test_int_vectorized] (batchId=65) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_aggregate_9] (batchId=41) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_aggregate_without_gby] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_between_columns] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_binary_join_groupby] (batchId=84) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_bround] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_case_when_1] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_case_when_2] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_cast_constant] (batchId=9)
[jira] [Updated] (HIVE-19720) backport multiple MM commits to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19720: Description: To avoid chained test runs of branch-3 backporting one by one, I will run HiveQA on an epic combined patch, then commit patches w/proper commit separation via cherry-pick: 0930aec69b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey Shelukhin, reviewed by Gunther Hagleitner) 99a2b8bd6b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey Shelukhin, reviewed by Gunther Hagleitner) ADDENDUM 7ebcdeb951 HIVE-17657 : export/import for MM tables is broken (Sergey Shelukhin, reviewed by Eugene Koifman) 8db979f1ff (part not previously backported) HIVE-19476: Fix failures in TestReplicationScenariosAcidTables, TestReplicationOnHDFSEncryptedZones and TestCopyUtils (Sankar Hariappan, reviewed by Sergey Shelukhin) f4352e5339 HIVE-19258 : add originals support to MM tables (and make the conversion a metadata only operation) (Sergey Shelukhin, reviewed by Jason Dere) was: To avoid chained test runs of branch-3 backporting one by one, I will run HiveQA on an epic combined patch, then commit patches w/proper commit separation via cherry-pick: 99a2b8bd6b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey Shelukhin, reviewed by Gunther Hagleitner) ADDENDUM 0930aec69b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey Shelukhin, reviewed by Gunther Hagleitner) 7ebcdeb951 HIVE-17657 : export/import for MM tables is broken (Sergey Shelukhin, reviewed by Eugene Koifman) 8db979f1ff (part not previously backported) HIVE-19476: Fix failures in TestReplicationScenariosAcidTables, TestReplicationOnHDFSEncryptedZones and TestCopyUtils (Sankar Hariappan, reviewed by Sergey Shelukhin) f4352e5339 HIVE-19258 : add originals support to MM tables (and make the conversion a metadata only operation) (Sergey Shelukhin, reviewed by Jason Dere) > backport multiple MM commits to branch-3 > > > Key: 
HIVE-19720 > URL: https://issues.apache.org/jira/browse/HIVE-19720 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19720.01-branch-3.patch > > > To avoid chained test runs of branch-3 backporting one by one, I will run > HiveQA on an epic combined patch, then commit patches w/proper commit > separation via cherry-pick: > 0930aec69b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) > 99a2b8bd6b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) ADDENDUM > 7ebcdeb951 HIVE-17657 : export/import for MM tables is broken (Sergey > Shelukhin, reviewed by Eugene Koifman) > 8db979f1ff (part not previously backported) HIVE-19476: Fix failures in > TestReplicationScenariosAcidTables, TestReplicationOnHDFSEncryptedZones and > TestCopyUtils (Sankar Hariappan, reviewed by Sergey Shelukhin) > f4352e5339 HIVE-19258 : add originals support to MM tables (and make the > conversion a metadata only operation) (Sergey Shelukhin, reviewed by Jason > Dere) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19720) backport multiple MM commits to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19720: Description: To avoid chained test runs of branch-3 backporting one by one, I will run HiveQA on an epic combined patch, then commit patches w/proper commit separation via cherry-pick: 99a2b8bd6b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey Shelukhin, reviewed by Gunther Hagleitner) ADDENDUM 0930aec69b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey Shelukhin, reviewed by Gunther Hagleitner) 7ebcdeb951 HIVE-17657 : export/import for MM tables is broken (Sergey Shelukhin, reviewed by Eugene Koifman) 8db979f1ff (part not previously backported) HIVE-19476: Fix failures in TestReplicationScenariosAcidTables, TestReplicationOnHDFSEncryptedZones and TestCopyUtils (Sankar Hariappan, reviewed by Sergey Shelukhin) f4352e5339 HIVE-19258 : add originals support to MM tables (and make the conversion a metadata only operation) (Sergey Shelukhin, reviewed by Jason Dere) was:To avoid chained test runs of branch-3 backporting one by one, I will run HiveQA on an epic combined patch, then commit patches w/proper commit separation via cherry-pick > backport multiple MM commits to branch-3 > > > Key: HIVE-19720 > URL: https://issues.apache.org/jira/browse/HIVE-19720 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19720.01-branch-3.patch > > > To avoid chained test runs of branch-3 backporting one by one, I will run > HiveQA on an epic combined patch, then commit patches w/proper commit > separation via cherry-pick: > 99a2b8bd6b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) ADDENDUM > 0930aec69b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) > 7ebcdeb951 HIVE-17657 : export/import for MM tables is broken 
(Sergey > Shelukhin, reviewed by Eugene Koifman) > 8db979f1ff (part not previously backported) HIVE-19476: Fix failures in > TestReplicationScenariosAcidTables, TestReplicationOnHDFSEncryptedZones and > TestCopyUtils (Sankar Hariappan, reviewed by Sergey Shelukhin) > f4352e5339 HIVE-19258 : add originals support to MM tables (and make the > conversion a metadata only operation) (Sergey Shelukhin, reviewed by Jason > Dere) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19720) backport multiple MM commits to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19720: Attachment: HIVE-19720.01-branch-3.patch > backport multiple MM commits to branch-3 > > > Key: HIVE-19720 > URL: https://issues.apache.org/jira/browse/HIVE-19720 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19720.01-branch-3.patch > > > To avoid chained test runs of branch-3 backporting one by one, I will run > HiveQA on an epic combined patch, then commit patches w/proper commit > separation via cherry-pick -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19720) backport multiple MM commits to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19720: Summary: backport multiple MM commits to branch-3 (was: backport multiple ACID and MM jiras to branch-3) > backport multiple MM commits to branch-3 > > > Key: HIVE-19720 > URL: https://issues.apache.org/jira/browse/HIVE-19720 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > To avoid chained test runs of branch-3 backporting one by one, I will run > HiveQA on an epic combined patch, then commit patches w/proper commit > separation via cherry-pick -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18875) Enable SMB Join by default in Tez
[ https://issues.apache.org/jira/browse/HIVE-18875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18875: -- Attachment: HIVE-18875.9.patch > Enable SMB Join by default in Tez > - > > Key: HIVE-18875 > URL: https://issues.apache.org/jira/browse/HIVE-18875 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18875.1.patch, HIVE-18875.2.patch, > HIVE-18875.3.patch, HIVE-18875.4.patch, HIVE-18875.5.patch, > HIVE-18875.6.patch, HIVE-18875.7.patch, HIVE-18875.8.patch, HIVE-18875.9.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19720) backport multiple ACID and MM jiras to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-19720: --- > backport multiple ACID and MM jiras to branch-3 > --- > > Key: HIVE-19720 > URL: https://issues.apache.org/jira/browse/HIVE-19720 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > To avoid chained test runs of branch-3 backporting one by one, I will run > HiveQA on an epic combined patch, then commit patches w/proper commit > separation via cherry-pick -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491426#comment-16491426 ] Hive QA commented on HIVE-19629: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 1s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 42s{color} | {color:blue} llap-server in master has 86 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 55s{color} | {color:blue} ql in master has 2323 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 23s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 11s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 24s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 19s{color} | {color:red} root: The patch generated 92 new + 3125 unchanged - 26 fixed = 3217 total (was 3151) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} itests/hive-unit: The patch generated 4 new + 184 unchanged - 4 fixed = 188 total (was 188) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s{color} | {color:red} llap-server: The patch generated 21 new + 265 unchanged - 12 fixed = 286 total (was 277) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 53s{color} | {color:red} ql: The patch generated 67 new + 1734 unchanged - 10 fixed = 1801 total (was 1744) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 52s{color} | {color:red} ql generated 3 new + 2323 unchanged - 0 fixed = 2326 total (was 2323) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.setRowDataTypePhysicalVariations(DataTypePhysicalVariation[]) may expose internal representation by storing an externally mutable object into VectorizedRowBatchCtx.rowDataTypePhysicalVariations At VectorizedRowBatchCtx.java:by storing an externally mutable object into VectorizedRowBatchCtx.rowDataTypePhysicalVariations At VectorizedRowBatchCtx.java:[line 168] | | | Switch statement found in org.apache.hadoop.hive.ql.io.orc.WriterImpl.setColumn(int, ColumnVector,
[jira] [Commented] (HIVE-19695) Year Month Day extraction functions need to add an implicit cast for column that are String types
[ https://issues.apache.org/jira/browse/HIVE-19695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491423#comment-16491423 ] Ashutosh Chauhan commented on HIVE-19695: - +1 > Year Month Day extraction functions need to add an implicit cast for column > that are String types > - > > Key: HIVE-19695 > URL: https://issues.apache.org/jira/browse/HIVE-19695 > Project: Hive > Issue Type: Bug > Components: Druid integration, Query Planning >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19695.patch > > > To avoid surprising/wrong results, the Hive query plan shall add an explicit cast > over non-date/timestamp column types when a user tries to extract Year/Month/Hour > etc. > This is an example of misleading results. > {code} > create table test_base_table(`timecolumn` timestamp, `date_c` string, > `timestamp_c` string, `metric_c` double); > insert into test_base_table values ('2015-03-08 00:00:00', '2015-03-10', > '2015-03-08 00:00:00', 5.0); > CREATE TABLE druid_test_table > STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler' > TBLPROPERTIES ("druid.segment.granularity" = "DAY") > AS select > cast(`timecolumn` as timestamp with local time zone) as `__time`, `date_c`, > `timestamp_c`, `metric_c` FROM test_base_table; > select > year(date_c), month(date_c),day(date_c), hour(date_c), > year(timestamp_c), month(timestamp_c),day(timestamp_c), hour(timestamp_c) > from druid_test_table; > {code} > will return the following wrong results: > {code} > PREHOOK: query: select > year(date_c), month(date_c),day(date_c), hour(date_c), > year(timestamp_c), month(timestamp_c),day(timestamp_c), hour(timestamp_c) > from druid_test_table > PREHOOK: type: QUERY > PREHOOK: Input: default@druid_test_table > A masked pattern was here > POSTHOOK: query: select > year(date_c), month(date_c),day(date_c), hour(date_c), > year(timestamp_c), month(timestamp_c),day(timestamp_c), 
hour(timestamp_c) > from druid_test_table > POSTHOOK: type: QUERY > POSTHOOK: Input: default@druid_test_table > A masked pattern was here > 1969 12 31 16 196912 31 16 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19675) Cast to timestamps on Druid time column leads to an exception
[ https://issues.apache.org/jira/browse/HIVE-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491418#comment-16491418 ] Ashutosh Chauhan commented on HIVE-19675: - It seems like it adds logic to parse the timestamp with a 2nd format after it fails parsing with the first format. In what cases will we have timestamps with these 2 different string representations? > Cast to timestamps on Druid time column leads to an exception > - > > Key: HIVE-19675 > URL: https://issues.apache.org/jira/browse/HIVE-19675 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19675.2.patch, HIVE-19675.patch > > > The following query fails due to a formatting issue. > {code} > SELECT CAST(`ssb_druid_100`.`__time` AS TIMESTAMP) AS `x_time`, > . . . . . . . . . . . . . . . .> SUM(`ssb_druid_100`.`lo_revenue`) AS > `sum_lo_revenue_ok` > . . . . . . . . . . . . . . . .> FROM `druid_ssb`.`ssb_druid_100` > `ssb_druid_100` > . . . . . . . . . . . . . . . .> GROUP BY CAST(`ssb_druid_100`.`__time` AS > TIMESTAMP); > {code} > Exception > {code} > Error: java.io.IOException: java.lang.NumberFormatException: For input > string: "1991-12-31 19:00:00" (state=,code=0) > {code} > [~jcamachorodriguez] maybe this is fixed by your upcoming patches. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
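The fallback-parsing approach being discussed can be sketched as follows. This is an illustration of the pattern only, not Hive's actual code; the two formats shown (ISO-8601 with 'T', and the space-separated form from the error message) are assumptions based on the strings quoted in this report.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class TimestampFallbackParse {
    // Hypothetical formatter choices for the two representations.
    static final DateTimeFormatter PRIMARY = DateTimeFormatter.ISO_LOCAL_DATE_TIME;
    static final DateTimeFormatter FALLBACK =
        DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Try the primary format first; on failure, retry with the second one.
    static LocalDateTime parse(String s) {
        try {
            return LocalDateTime.parse(s, PRIMARY);
        } catch (DateTimeParseException e) {
            return LocalDateTime.parse(s, FALLBACK);
        }
    }

    public static void main(String[] args) {
        // The string from the exception message parses via the fallback.
        System.out.println(parse("1991-12-31 19:00:00"));
        System.out.println(parse("1991-12-31T19:00:00"));
    }
}
```

Whether a double parse is acceptable depends on how often the fallback path is hit, which is exactly the question raised above.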
[jira] [Updated] (HIVE-19719) Adding metastore batch API for partitions
[ https://issues.apache.org/jira/browse/HIVE-19719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-19719: -- Description: Hive Metastore provides APIs for fetching a collection of objects (usually tables or partitions). These APIs provide a way to fetch all available objects, so the size of the response is O(N) where N is the number of objects. These calls have several problems: * All objects (and there may be thousands or even millions) must be fetched from the database, serialized into a Java list of Thrift objects, then serialized into a byte array for sending over the network. This creates spikes of huge memory pressure, especially since in some cases multiple copies of the same data are present in memory (e.g. unserialized and serialized versions). * Even though HMS tries to avoid string duplication by interning strings in Java, duplicated strings must still be serialized in the output array. * Java has a 2 GB limit on the maximum size of a byte array, and crashes with an Out Of Memory exception if this array size is exceeded. * Fetching huge numbers of objects blows up DB caches and memory caches in the system. Receiving such huge messages also creates memory pressure on the receiver side (usually HS2), which can cause it to crash with an Out Of Memory exception as well. * Such requests have very high latency since the server must collect all objects, serialize them, and send them all over the network before the client can do anything with the result. To prevent Out Of Memory exceptions, the server now has a configurable limit on the maximum number of objects returned. This helps to avoid crashes, but doesn't allow for correct query execution since the result will include a random and incomplete set of K objects. Currently this is addressed on the client side by simulating batching: getting the list of table or partition names first and then requesting table information for parts of this list. 
Still, the list of objects can be big as well, and this method requires locking to ensure that objects are not added or removed between the calls, especially if this is done outside of HS2. Instead, we can make a simple modification to the existing APIs which allows for batch iterator-style operations without keeping any server-side state. The main idea is to have a unique incrementing ID for each object. The IDs only need to be unique within their container (e.g. table IDs should be unique within a database and partition IDs should be unique within a table). Such an ID can be easily generated using the database auto-increment mechanism, or we can simply reuse the existing ID column that is already maintained by DataNucleus. The request is then modified to include: * Starting ID i0 * Batch size (B) The server fetches up to B objects starting from i0, serializes them and sends them to the client. The client then requests the next batch by using the ID of the last received object plus one. It is possible to construct an SQL query (either by using DataNucleus JDOQL or in DirectSQL code) which only selects the needed objects, avoiding big reads from the database. The client then iterates until it fetches all the objects, and the memory size of each request is bounded by the batch size. If we extend the API a little bit, providing a way to get the minimum and maximum ID values (either via a separate call or piggybacked on the normal reply), clients can request such batches concurrently, thus also reducing the latency. Clients can easily estimate the number of batches by knowing the total number of IDs. While this isn't a precise method, it is good enough to divide the work. It is also possible to wrap this in a way similar to {{PartitionIterator}} and async-fetch the next batch while processing the current batch. *Consistency considerations* * HMS only provides consistency guarantees for a single call. The set of objects that should be returned may change while we are iterating over it. 
In some cases this is not an issue since HS2 may use ZooKeeper locks on the table to prevent modifications, but in some cases it may be an issue (for example, for calls that originate from external systems). We should consider additions and removals separately. * New objects are added during iteration. All new objects are always added at the 'end' of the ID space, so they will always be picked up by the iterator. We assume that IDs are always incrementing. * Some objects are removed during iteration. Removal of objects that are not yet consumed is not a problem. It is possible that objects which were already consumed are returned even though they have since been removed. Although this results in an inconsistent list of objects, this situation is indistinguishable from the situation where these objects were removed immediately after we got all objects in one atomic call. So it doesn't seem to be a practical issue. was: Hive Metastore provides
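The stateless batch iteration described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual HMS API: `fetchBatch` stands in for the proposed request (starting ID plus batch size), and the "server" is simulated with an in-memory sorted map instead of a bounded SQL query.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class BatchIterationSketch {
    // Simulated server-side store: object ID -> object payload.
    static final TreeMap<Long, String> STORE = new TreeMap<>();

    // Hypothetical batch call: up to 'batchSize' objects with ID >= startId.
    // A real server would issue a bounded query such as
    //   SELECT ... WHERE PART_ID >= ? ORDER BY PART_ID LIMIT ?
    static NavigableMap<Long, String> fetchBatch(long startId, int batchSize) {
        TreeMap<Long, String> batch = new TreeMap<>();
        for (Map.Entry<Long, String> e : STORE.tailMap(startId).entrySet()) {
            if (batch.size() == batchSize) break;
            batch.put(e.getKey(), e.getValue());
        }
        return batch;
    }

    // Client-side iteration: no server-side state is kept between calls.
    static List<String> fetchAll(int batchSize) {
        List<String> all = new ArrayList<>();
        long nextId = 0;
        while (true) {
            NavigableMap<Long, String> batch = fetchBatch(nextId, batchSize);
            if (batch.isEmpty()) break;
            all.addAll(batch.values());
            nextId = batch.lastKey() + 1;  // ID of last received object plus one
        }
        return all;
    }

    public static void main(String[] args) {
        for (long id = 1; id <= 10; id++) STORE.put(id, "part-" + id);
        List<String> all = fetchAll(3);
        System.out.println(all.size());   // prints 10
        System.out.println(all.get(0));   // prints part-1
    }
}
```

Objects added during iteration get higher IDs and are picked up by later batches; objects removed after being consumed simply remain in the client's result, matching the consistency argument above.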
[jira] [Commented] (HIVE-19711) Refactor Hive Schema Tool
[ https://issues.apache.org/jira/browse/HIVE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491383#comment-16491383 ] Alan Gates commented on HIVE-19711: --- There's more to do here than just refactor this. HiveSchemaTool and MetastoreSchemaTool are 90% the same code. There are a few differences because HiveSchemaTool uses Beeline (which the metastore can't) and supports the Hive information schema. We need to rationalize this, ideally so that HiveSchemaTool extends MetastoreSchemaTool to add the pieces it needs. > Refactor Hive Schema Tool > - > > Key: HIVE-19711 > URL: https://issues.apache.org/jira/browse/HIVE-19711 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-19711.01.patch > > > HiveSchemaTool is a 1500-line class trying to do everything. It should > be cut into multiple classes, each handling a smaller component. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19684) Hive stats optimizer wrongly uses stats against non native tables
[ https://issues.apache.org/jira/browse/HIVE-19684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491380#comment-16491380 ] Vineet Garg commented on HIVE-19684: [~bslim] Once this is in master, go ahead and push it to branch-3 > Hive stats optimizer wrongly uses stats against non native tables > - > > Key: HIVE-19684 > URL: https://issues.apache.org/jira/browse/HIVE-19684 > Project: Hive > Issue Type: Bug > Components: Druid integration, Physical Optimizer >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-19684.2.patch, HIVE-19684.3.patch, HIVE-19684.patch > > > Stats of non native tables are inaccurate, thus queries over non native > tables cannot be optimized by the stats optimizer. > Take the example of this query > {code} > Explain select count(*) from (select `__time` from druid_test_table limit 1) > as src ; > {code} > the plan will be reduced to > {code} > POSTHOOK: query: explain extended select count(*) from (select `__time` from > druid_test_table limit 1) as src > POSTHOOK: type: QUERY > STAGE DEPENDENCIES: > Stage-0 is a root stage > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: 1 > Processor Tree: > ListSink > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19504) Change default value for hive.auto.convert.join.shuffle.max.size property
[ https://issues.apache.org/jira/browse/HIVE-19504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19504: --- Resolution: Fixed Fix Version/s: 4.0.0 3.1.0 Status: Resolved (was: Patch Available) Pushed to branch-3, master. Cc [~vgarg] > Change default value for hive.auto.convert.join.shuffle.max.size property > - > > Key: HIVE-19504 > URL: https://issues.apache.org/jira/browse/HIVE-19504 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19504.01.patch, HIVE-19504.02.patch, > HIVE-19504.patch > > > The property default value is too low by mistake (10MB), it is missing three > trailing zeros. > {code} > HIVECONVERTJOINMAXSHUFFLESIZE("hive.auto.convert.join.shuffle.max.size", > 1000L, >"If hive.auto.convert.join.noconditionaltask is off, this parameter > does not take affect. \n" + >"However, if it is on, and the predicted size of the larger input for > a given join is greater \n" + >"than this number, the join will not be converted to a dynamically > partitioned hash join. \n" + >"The value \"-1\" means no limit."), > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19704) LLAP IO retries on branch-2 should be stoppable
[ https://issues.apache.org/jira/browse/HIVE-19704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491372#comment-16491372 ] Prasanth Jayachandran commented on HIVE-19704: -- lgtm, +1 > LLAP IO retries on branch-2 should be stoppable > --- > > Key: HIVE-19704 > URL: https://issues.apache.org/jira/browse/HIVE-19704 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19704.01-branch-2.patch > > > I will file a JIRA for master to switch IO to actually interrupt the IO thread > via a Future, but it might not be safe for branch-2. > Also master doesn't depend on these retries in this spot in general, so it's not > as critical. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491371#comment-16491371 ] Matt McCline commented on HIVE-19629: - An Epic size change. +1 LGTM tests pending. > Enable Decimal64 reader after orc version upgrade > - > > Key: HIVE-19629 > URL: https://issues.apache.org/jira/browse/HIVE-19629 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19629.1.patch, HIVE-19629.2.patch, > HIVE-19629.3.patch, HIVE-19629.4.patch > > > ORC 1.5.0 supports a new fast decimal 64 reader. A new VRB has to be created to > make use of decimal 64 column vectors. Also LLAP IO will need a new reader > to read from the long stream to decimal 64. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19504) Change default value for hive.auto.convert.join.shuffle.max.size property
[ https://issues.apache.org/jira/browse/HIVE-19504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491368#comment-16491368 ] Hive QA commented on HIVE-19504: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12924841/HIVE-19504.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14393 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11216/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11216/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11216/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12924841 - PreCommit-HIVE-Build > Change default value for hive.auto.convert.join.shuffle.max.size property > - > > Key: HIVE-19504 > URL: https://issues.apache.org/jira/browse/HIVE-19504 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19504.01.patch, HIVE-19504.02.patch, > HIVE-19504.patch > > > The property default value is too low by mistake (10MB), it is missing three > trailing zeros. > {code} > HIVECONVERTJOINMAXSHUFFLESIZE("hive.auto.convert.join.shuffle.max.size", > 1000L, >"If hive.auto.convert.join.noconditionaltask is off, this parameter > does not take affect. \n" + >"However, if it is on, and the predicted size of the larger input for > a given join is greater \n" + >"than this number, the join will not be converted to a dynamically > partitioned hash join. 
\n" + >"The value \"-1\" means no limit."), > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19508) SparkJobMonitor getReport doesn't print stage progress in order
[ https://issues.apache.org/jira/browse/HIVE-19508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491366#comment-16491366 ] Bharathkrishna Guruvayoor Murali commented on HIVE-19508: - No tests failed. [~stakiar] can you please review this patch. > SparkJobMonitor getReport doesn't print stage progress in order > --- > > Key: HIVE-19508 > URL: https://issues.apache.org/jira/browse/HIVE-19508 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19508.1.patch > > > You can end up with a progress output like this: > {code} > Stage-10_0: 0/29 Stage-11_0: 0/44Stage-12_0: 0/11 > Stage-13_0: 0/1 Stage-8_0: 258(+76)/468 Stage-9_0: 0/165 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
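The out-of-order report above is what lexicographic sorting of stage names produces ("Stage-10" sorts before "Stage-8" as a string). A minimal sketch of numeric ordering by the stage number — illustrative only, not SparkJobMonitor's actual code:

```java
import java.util.Arrays;
import java.util.Comparator;

public class StageOrderSketch {
    // Extract the number between "Stage-" and "_" (e.g. 10 from "Stage-10_0").
    static int stageNumber(String name) {
        int dash = name.indexOf('-');
        int underscore = name.indexOf('_');
        return Integer.parseInt(name.substring(dash + 1, underscore));
    }

    // Return a copy of the stage names sorted by their numeric stage ID.
    static String[] sortStages(String[] stages) {
        String[] out = stages.clone();
        Arrays.sort(out, Comparator.comparingInt(StageOrderSketch::stageNumber));
        return out;
    }

    public static void main(String[] args) {
        String[] report = {"Stage-10_0", "Stage-11_0", "Stage-8_0", "Stage-9_0"};
        System.out.println(Arrays.toString(sortStages(report)));
        // prints [Stage-8_0, Stage-9_0, Stage-10_0, Stage-11_0]
    }
}
```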
[jira] [Commented] (HIVE-19684) Hive stats optimizer wrongly uses stats against non native tables
[ https://issues.apache.org/jira/browse/HIVE-19684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491365#comment-16491365 ] slim bouguerra commented on HIVE-19684: --- [~vgarg] can you please put this on the list too. Thanks > Hive stats optimizer wrongly uses stats against non native tables > - > > Key: HIVE-19684 > URL: https://issues.apache.org/jira/browse/HIVE-19684 > Project: Hive > Issue Type: Bug > Components: Druid integration, Physical Optimizer >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-19684.2.patch, HIVE-19684.3.patch, HIVE-19684.patch > > > Stats of non native tables are inaccurate, thus queries over non native > tables cannot be optimized by the stats optimizer. > Take the example of this query > {code} > Explain select count(*) from (select `__time` from druid_test_table limit 1) > as src ; > {code} > the plan will be reduced to > {code} > POSTHOOK: query: explain extended select count(*) from (select `__time` from > druid_test_table limit 1) as src > POSTHOOK: type: QUERY > STAGE DEPENDENCIES: > Stage-0 is a root stage > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: 1 > Processor Tree: > ListSink > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19684) Hive stats optimizer wrongly uses stats against non native tables
[ https://issues.apache.org/jira/browse/HIVE-19684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-19684: -- Attachment: HIVE-19684.3.patch > Hive stats optimizer wrongly uses stats against non native tables > - > > Key: HIVE-19684 > URL: https://issues.apache.org/jira/browse/HIVE-19684 > Project: Hive > Issue Type: Bug > Components: Druid integration, Physical Optimizer >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-19684.2.patch, HIVE-19684.3.patch, HIVE-19684.patch > > > Stats of non native tables are inaccurate, thus queries over non native > tables cannot be optimized by the stats optimizer. > Take the example of this query > {code} > Explain select count(*) from (select `__time` from druid_test_table limit 1) > as src ; > {code} > the plan will be reduced to > {code} > POSTHOOK: query: explain extended select count(*) from (select `__time` from > druid_test_table limit 1) as src > POSTHOOK: type: QUERY > STAGE DEPENDENCIES: > Stage-0 is a root stage > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: 1 > Processor Tree: > ListSink > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19717: --- Attachment: (was: HIVE-19717.1.patch) > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19717.branch-3.1.patch > > > This is not supposed to be committed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19717: --- Attachment: HIVE-19717.branch-3.1.patch > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19717.branch-3.1.patch > > > This is not supposed to be committed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19717: --- Status: Open (was: Patch Available) > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19717.branch-3.1.patch > > > This is not supposed to be committed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19717: --- Status: Patch Available (was: Open) > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19717.branch-3.1.patch > > > This is not supposed to be committed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19598) Acid V1 to V2 upgrade
[ https://issues.apache.org/jira/browse/HIVE-19598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491352#comment-16491352 ] Eugene Koifman commented on HIVE-19598: --- [~ashutoshc], patch 5 is reasonably complete for compaction and table conversion. No file renames yet. > Acid V1 to V2 upgrade > - > > Key: HIVE-19598 > URL: https://issues.apache.org/jira/browse/HIVE-19598 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-19598.02.patch, HIVE-19598.05.patch > > > The on-disk layout for full acid (transactional) tables has changed in 3.0. > Any transactional table that has any update/delete events in any deltas that > have not been Major compacted must go through a Major compaction before > upgrading to 3.0. No more update/delete/merge should be run after/during > major compaction. > Not doing so will result in data corruption/loss. > > Need to create a utility tool to help with this process. HIVE-19233 started > this but it needs more work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19598) Acid V1 to V2 upgrade
[ https://issues.apache.org/jira/browse/HIVE-19598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-19598: -- Attachment: HIVE-19598.05.patch > Acid V1 to V2 upgrade > - > > Key: HIVE-19598 > URL: https://issues.apache.org/jira/browse/HIVE-19598 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-19598.02.patch, HIVE-19598.05.patch > > > The on-disk layout for full acid (transactional) tables has changed in 3.0. > Any transactional table that has any update/delete events in any deltas that > have not been Major compacted must go through a Major compaction before > upgrading to 3.0. No more update/delete/merge should be run after/during > major compaction. > Not doing so will result in data corruption/loss. > > Need to create a utility tool to help with this process. HIVE-19233 started > this but it needs more work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19323) Create metastore SQL install and upgrade scripts for 3.1
[ https://issues.apache.org/jira/browse/HIVE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491322#comment-16491322 ] Alan Gates commented on HIVE-19323: --- In patch 5 I've added scripts for version 4. I've also attached a patch intended for branch-3 that does not have the version 4 scripts in it. > Create metastore SQL install and upgrade scripts for 3.1 > > > Key: HIVE-19323 > URL: https://issues.apache.org/jira/browse/HIVE-19323 > Project: Hive > Issue Type: Task > Components: Metastore >Affects Versions: 3.1.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19323.2.patch, HIVE-19323.3.patch, > HIVE-19323.4.patch, HIVE-19323.5.patch, HIVE-19323.branch-3.1.patch, > HIVE-19323.patch > > > Now that we've branched for 3.0 we need to create SQL install and upgrade > scripts for 3.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19323) Create metastore SQL install and upgrade scripts for 3.1
[ https://issues.apache.org/jira/browse/HIVE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19323: -- Attachment: HIVE-19323.branch-3.1.patch > Create metastore SQL install and upgrade scripts for 3.1 > > > Key: HIVE-19323 > URL: https://issues.apache.org/jira/browse/HIVE-19323 > Project: Hive > Issue Type: Task > Components: Metastore >Affects Versions: 3.1.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19323.2.patch, HIVE-19323.3.patch, > HIVE-19323.4.patch, HIVE-19323.5.patch, HIVE-19323.branch-3.1.patch, > HIVE-19323.patch > > > Now that we've branched for 3.0 we need to create SQL install and upgrade > scripts for 3.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19323) Create metastore SQL install and upgrade scripts for 3.1
[ https://issues.apache.org/jira/browse/HIVE-19323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19323: -- Attachment: HIVE-19323.5.patch > Create metastore SQL install and upgrade scripts for 3.1 > > > Key: HIVE-19323 > URL: https://issues.apache.org/jira/browse/HIVE-19323 > Project: Hive > Issue Type: Task > Components: Metastore >Affects Versions: 3.1.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19323.2.patch, HIVE-19323.3.patch, > HIVE-19323.4.patch, HIVE-19323.5.patch, HIVE-19323.branch-3.1.patch, > HIVE-19323.patch > > > Now that we've branched for 3.0 we need to create SQL install and upgrade > scripts for 3.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19715) Consolidated and flexible API for fetching partition metadata from HMS
[ https://issues.apache.org/jira/browse/HIVE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-19715: -- Assignee: Vihang Karajgaonkar > Consolidated and flexible API for fetching partition metadata from HMS > -- > > Key: HIVE-19715 > URL: https://issues.apache.org/jira/browse/HIVE-19715 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore >Reporter: Todd Lipcon >Assignee: Vihang Karajgaonkar >Priority: Major > > Currently, the HMS thrift API exposes 17 different APIs for fetching > partition-related information. There is somewhat of a combinatorial explosion > going on, where each API has variants with and without "auth" info, by pspecs > vs names, by filters, by exprs, etc. Having all of these separate APIs long > term is a maintenance burden and also more confusing for consumers. > Additionally, even with all of these APIs, there is a lack of granularity in > fetching only the information needed for a particular use case. For example, > in some use cases it may be beneficial to only fetch the partition locations > without wasting effort fetching statistics, etc. > This JIRA proposes that we add a new "one API to rule them all" for fetching > partition info. The request and response would be encapsulated in structs. 
> Some desirable properties: > - the request should be able to specify which pieces of information are > required (eg location, properties, etc) > - in the case of partition parameters, the request should be able to do > either whitelisting or blacklisting (eg to exclude large incremental column > stats HLL dumped in there by Impala) > - the request should optionally specify auth info (to encompass the > "with_auth" variants) > - the request should be able to designate the set of partitions to access > through one of several different methods (eg "all", list, expr, > part_vals, etc) > - the struct should be easily evolvable so that new pieces of info can be > added > - the response should be designed in such a way as to avoid transferring > redundant information for common cases (eg simple "dictionary coding" of > strings like parameter names, etc) > - the API should support some form of pagination for tables with large > partition counts -- This message was sent by Atlassian JIRA (v7.6.3#76005)
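The desirable properties listed above could be captured in a single request struct along these lines. Every name here is hypothetical — a sketch of the shape such an API might take, not the actual HMS Thrift definition:

```java
import java.util.EnumSet;
import java.util.List;

public class PartitionRequestSketch {
    // Which pieces of partition info the caller actually needs.
    enum Field { LOCATION, PARAMETERS, STATISTICS, AUTH_INFO }

    static final class GetPartitionsRequest {
        String dbName;
        String tableName;
        // Fetch only what is needed (eg location without statistics).
        EnumSet<Field> wantedFields = EnumSet.noneOf(Field.class);
        // Blacklist bulky parameters (eg incremental column-stats HLLs).
        List<String> paramKeyBlacklist = List.of();
        String filterExpr;   // one of several ways to designate partitions
        String pageToken;    // pagination for large partition counts
        int maxParts = -1;   // -1: no limit
    }

    public static void main(String[] args) {
        GetPartitionsRequest req = new GetPartitionsRequest();
        req.dbName = "default";
        req.tableName = "web_logs";
        req.wantedFields = EnumSet.of(Field.LOCATION);  // skip stats etc.
        System.out.println(req.wantedFields.contains(Field.LOCATION));
    }
}
```

An evolvable struct like this lets new fields be added without multiplying API entry points, which is the maintenance problem the proposal starts from.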
[jira] [Commented] (HIVE-19504) Change default value for hive.auto.convert.join.shuffle.max.size property
[ https://issues.apache.org/jira/browse/HIVE-19504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491318#comment-16491318 ] Hive QA commented on HIVE-19504: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 18s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 44s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11216/dev-support/hive-personality.sh | | git revision | master / 87e8c73 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: common U: common | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11216/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Change default value for hive.auto.convert.join.shuffle.max.size property > - > > Key: HIVE-19504 > URL: https://issues.apache.org/jira/browse/HIVE-19504 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19504.01.patch, HIVE-19504.02.patch, > HIVE-19504.patch > > > The property default value is too low by mistake (10MB), it is missing three > trailing zeros. > {code} > HIVECONVERTJOINMAXSHUFFLESIZE("hive.auto.convert.join.shuffle.max.size", > 1000L, >"If hive.auto.convert.join.noconditionaltask is off, this parameter > does not take affect. 
\n" + >"However, if it is on, and the predicted size of the larger input for > a given join is greater \n" + >"than this number, the join will not be converted to a dynamically > partitioned hash join. \n" + >"The value \"-1\" means no limit."), > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19508) SparkJobMonitor getReport doesn't print stage progress in order
[ https://issues.apache.org/jira/browse/HIVE-19508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491311#comment-16491311 ] Hive QA commented on HIVE-19508: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12924840/HIVE-19508.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14393 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11215/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11215/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11215/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12924840 - PreCommit-HIVE-Build > SparkJobMonitor getReport doesn't print stage progress in order > --- > > Key: HIVE-19508 > URL: https://issues.apache.org/jira/browse/HIVE-19508 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19508.1.patch > > > You can end up with a progress output like this: > {code} > Stage-10_0: 0/29 Stage-11_0: 0/44Stage-12_0: 0/11 > Stage-13_0: 0/1 Stage-8_0: 258(+76)/468 Stage-9_0: 0/165 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
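The out-of-order output quoted above is what you get when stage names are compared as strings, so "Stage-10" sorts before "Stage-8". A minimal sketch of numeric ordering follows; it is an illustration of the problem, not the actual fix inside SparkJobMonitor, which may be implemented differently.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class StageSort {
    // Extract the numeric stage id from names like "Stage-10_0" so stages
    // can be ordered numerically (Stage-8 before Stage-10) rather than
    // lexicographically.
    static int stageId(String name) {
        return Integer.parseInt(name.substring(name.indexOf('-') + 1, name.indexOf('_')));
    }

    static List<String> sorted(List<String> stages) {
        List<String> copy = new ArrayList<>(stages);
        copy.sort(Comparator.comparingInt(StageSort::stageId));
        return copy;
    }

    public static void main(String[] args) {
        // Lexicographic order would put Stage-10_0 first; numeric order does not.
        System.out.println(sorted(Arrays.asList("Stage-10_0", "Stage-8_0", "Stage-9_0")));
    }
}
```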
[jira] [Commented] (HIVE-19680) Push down limit is not applied for Druid storage handler.
[ https://issues.apache.org/jira/browse/HIVE-19680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491303#comment-16491303 ] Vineet Garg commented on HIVE-19680: Pushed to branch-3 > Push down limit is not applied for Druid storage handler. > - > > Key: HIVE-19680 > URL: https://issues.apache.org/jira/browse/HIVE-19680 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19680.1.patch, HIVE-19680.patch > > > Query like > {code} > select `__time` from druid_test_table limit 1; > {code} > returns more than one row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19680) Push down limit is not applied for Druid storage handler.
[ https://issues.apache.org/jira/browse/HIVE-19680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19680: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Push down limit is not applied for Druid storage handler. > - > > Key: HIVE-19680 > URL: https://issues.apache.org/jira/browse/HIVE-19680 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19680.1.patch, HIVE-19680.patch > > > Query like > {code} > select `__time` from druid_test_table limit 1; > {code} > returns more than one row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19680) Push down limit is not applied for Druid storage handler.
[ https://issues.apache.org/jira/browse/HIVE-19680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19680: --- Fix Version/s: 4.0.0 3.1.0 > Push down limit is not applied for Druid storage handler. > - > > Key: HIVE-19680 > URL: https://issues.apache.org/jira/browse/HIVE-19680 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19680.1.patch, HIVE-19680.patch > > > Query like > {code} > select `__time` from druid_test_table limit 1; > {code} > returns more than one row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19305) Arrow format for LlapOutputFormatService (umbrella)
[ https://issues.apache.org/jira/browse/HIVE-19305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Wohlstadter updated HIVE-19305: Status: Open (was: Patch Available) > Arrow format for LlapOutputFormatService (umbrella) > --- > > Key: HIVE-19305 > URL: https://issues.apache.org/jira/browse/HIVE-19305 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > Attachments: HIVE-19305.1-branch-3.patch, HIVE-19305.2-branch-3.patch > > > Allows external clients to consume output from LLAP daemons in Arrow stream > format. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19305) Arrow format for LlapOutputFormatService (umbrella)
[ https://issues.apache.org/jira/browse/HIVE-19305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Wohlstadter updated HIVE-19305: Status: Patch Available (was: Open) > Arrow format for LlapOutputFormatService (umbrella) > --- > > Key: HIVE-19305 > URL: https://issues.apache.org/jira/browse/HIVE-19305 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > Attachments: HIVE-19305.1-branch-3.patch, HIVE-19305.2-branch-3.patch > > > Allows external clients to consume output from LLAP daemons in Arrow stream > format. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19305) Arrow format for LlapOutputFormatService (umbrella)
[ https://issues.apache.org/jira/browse/HIVE-19305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Wohlstadter updated HIVE-19305: Attachment: HIVE-19305.2-branch-3.patch > Arrow format for LlapOutputFormatService (umbrella) > --- > > Key: HIVE-19305 > URL: https://issues.apache.org/jira/browse/HIVE-19305 > Project: Hive > Issue Type: Improvement > Components: llap >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > Attachments: HIVE-19305.1-branch-3.patch, HIVE-19305.2-branch-3.patch > > > Allows external clients to consume output from LLAP daemons in Arrow stream > format. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19508) SparkJobMonitor getReport doesn't print stage progress in order
[ https://issues.apache.org/jira/browse/HIVE-19508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491276#comment-16491276 ] Hive QA commented on HIVE-19508: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 8s{color} | {color:blue} ql in master has 2323 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11215/dev-support/hive-personality.sh | | git revision | master / 87e8c73 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11215/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > SparkJobMonitor getReport doesn't print stage progress in order > --- > > Key: HIVE-19508 > URL: https://issues.apache.org/jira/browse/HIVE-19508 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19508.1.patch > > > You can end up with a progress output like this: > {code} > Stage-10_0: 0/29 Stage-11_0: 0/44Stage-12_0: 0/11 > Stage-13_0: 0/1 Stage-8_0: 258(+76)/468 Stage-9_0: 0/165 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19715) Consolidated and flexible API for fetching partition metadata from HMS
[ https://issues.apache.org/jira/browse/HIVE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491263#comment-16491263 ] Vihang Karajgaonkar commented on HIVE-19715: +1 to the idea of having one common API to consolidate fetching partition-related information. I like the idea of having a {{projection}} list and a {{predicate expression}} to filter the partitions. Pagination is very important too, because right now any client can cause an OOM on HMS (or on the client side) by requesting thousands of partitions; pagination/streaming support will help a lot with such cases. I also think we should at least deprecate the older APIs so that clients can move to the newer APIs in the near future. get_partition using the partition expression proxy has issues as described above by [~akolb] and doesn't work well for a standalone metastore, since it depends on ql classes (effectively making standalone-metastore not standalone). We have seen compatibility issues with that API as well when newer clients talk to older HMS servers. I can take up this task if we can come up with an API spec which works well for most cases. It can be an incremental effort; for example, support for pagination could be added later as long as the API is defined in an extensible way. One interesting side-effect of returning only a subset of the partition objects' fields is that we will probably have to mark the partition fields {{optional}} instead of {{required}}. This can create a trickle-down effect all the way to the database, and I am not sure what complications it can cause. Thoughts? 
> Consolidated and flexible API for fetching partition metadata from HMS > -- > > Key: HIVE-19715 > URL: https://issues.apache.org/jira/browse/HIVE-19715 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore >Reporter: Todd Lipcon >Priority: Major > > Currently, the HMS thrift API exposes 17 different APIs for fetching > partition-related information. There is somewhat of a combinatorial explosion > going on, where each API has variants with and without "auth" info, by pspecs > vs names, by filters, by exprs, etc. Having all of these separate APIs long > term is a maintenance burden and also more confusing for consumers. > Additionally, even with all of these APIs, there is a lack of granularity in > fetching only the information needed for a particular use case. For example, > in some use cases it may be beneficial to only fetch the partition locations > without wasting effort fetching statistics, etc. > This JIRA proposes that we add a new "one API to rule them all" for fetching > partition info. The request and response would be encapsulated in structs. 
> Some desirable properties: > - the request should be able to specify which pieces of information are > required (eg location, properties, etc) > - in the case of partition parameters, the request should be able to do > either whitelisting or blacklisting (eg to exclude large incremental column > stats HLL dumped in there by Impala) > - the request should optionally specify auth info (to encompass the > "with_auth" variants) > - the request should be able to designate the set of partitions to access > through one of several different methods (eg "all", list, expr, > part_vals, etc) > - the struct should be easily evolvable so that new pieces of info can be > added > - the response should be designed in such a way as to avoid transferring > redundant information for common cases (eg simple "dictionary coding" of > strings like parameter names, etc) > - the API should support some form of pagination for tables with large > partition counts -- This message was sent by Atlassian JIRA (v7.6.3#76005)
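The desirable properties listed above can be sketched as a request shape. The following is a hypothetical illustration written as a plain Java class (the real artifact would be a Thrift struct); every field and method name here is invented for this sketch and is not part of any committed spec.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical shape of the proposed consolidated partition-fetch request.
// All names are illustrative assumptions, not the actual HMS API.
public class GetPartitionsRequest {
    final List<String> projection = new ArrayList<>();     // e.g. "location", "parameters"
    final List<String> paramBlacklist = new ArrayList<>(); // e.g. keys for Impala HLL stats
    String filterExpr;       // predicate over partition keys, e.g. "ds > '2018-05-01'"
    boolean withAuth;        // folds in the "with_auth" API variants
    int maxParts = -1;       // page size; -1 means no limit
    String pageToken;        // opaque continuation token for pagination

    GetPartitionsRequest project(String... fields) {
        projection.addAll(Arrays.asList(fields));
        return this;
    }

    public static void main(String[] args) {
        GetPartitionsRequest req = new GetPartitionsRequest()
                .project("location", "parameters");
        req.filterExpr = "ds > '2018-05-01'";
        req.maxParts = 1000;
        System.out.println(req.projection);
    }
}
```

An evolvable struct like this would let new fields be added without new API methods, which is the "one API to rule them all" point made in the description.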
[jira] [Commented] (HIVE-19684) Hive stats optimizer wrongly uses stats against non native tables
[ https://issues.apache.org/jira/browse/HIVE-19684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491246#comment-16491246 ] Hive QA commented on HIVE-19684: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12924863/HIVE-19684.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 14390 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbasestats] (batchId=101) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=241) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=241) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11214/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11214/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11214/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 11 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12924863 - PreCommit-HIVE-Build > Hive stats optimizer wrongly uses stats against non native tables > - > > Key: HIVE-19684 > URL: https://issues.apache.org/jira/browse/HIVE-19684 > Project: Hive > Issue Type: Bug > Components: Druid integration, Physical Optimizer >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-19684.2.patch, HIVE-19684.patch > > > Stats of non native tables are inaccurate, thus queries over non native > tables can not optimized by stats optimizer. > Take example of query > {code} > Explain select count(*) from (select `__time` from druid_test_table limit 1) > as src ; > {code} > the plan will be reduced to > {code} > POSTHOOK: query: explain extended select count(*) from (select `__time` from > druid_test_table limit 1) as src > POSTHOOK: type: QUERY > STAGE DEPENDENCIES: > Stage-0 is a root stage > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: 1 > Processor Tree: > ListSink > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19717: --- Description: This is not supposed to be committed. > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19717.1.patch > > > This is not supposed to be committed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19717: --- Status: Patch Available (was: Open) > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19717.1.patch > > > This is not supposed to be committed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19717: --- Attachment: HIVE-19717.1.patch > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19717.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491230#comment-16491230 ] Prasanth Jayachandran commented on HIVE-19629: -- [~mmccline] RB for just this patch excluding HIVE-19465 changes https://reviews.apache.org/r/67329/diff/1-2/ > Enable Decimal64 reader after orc version upgrade > - > > Key: HIVE-19629 > URL: https://issues.apache.org/jira/browse/HIVE-19629 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19629.1.patch, HIVE-19629.2.patch, > HIVE-19629.3.patch, HIVE-19629.4.patch > > > ORC 1.5.0 supports new fast decimal 64 reader. New VRB has to be created for > making use of decimal 64 column vectors. Also LLAP IO will need a new reader > to reader from long stream to decimal 64. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18117) Create TestCliDriver for HDFS EC
[ https://issues.apache.org/jira/browse/HIVE-18117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18117: --- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) > Create TestCliDriver for HDFS EC > > > Key: HIVE-18117 > URL: https://issues.apache.org/jira/browse/HIVE-18117 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Andrew Sherman >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-18117.1.patch, HIVE-18117.10.patch, > HIVE-18117.2.patch, HIVE-18117.3.patch, HIVE-18117.4.patch, > HIVE-18117.5.patch, HIVE-18117.6.patch, HIVE-18117.7.patch, > HIVE-18117.8.patch, HIVE-18117.9.patch > > > Should be able to do something similar to what we do for HDFS encryption. > TestErasureCodingHDFSCliDriver uses a test-only CommandProcessor > "ErasureProcessor" > which allows .q files to contain Erasure Coding commands similar to those > provided > by the hdfs ec command > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html. > The Erasure Coding functionality is exposed through a new shim > "HdfsFileErasureCodingPolicy". > At this stage there are two .q files: > erasure_commnds.q (a simple test to show ERASURE commands can run on local fs > via > TestCliDriver or on hdfs via TestErasureCodingHDFSCliDriver), and > erasure_simple.q (which does some trivial queries to demonstrate basic > functionality). > More tests will come in future commits. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19717) Dummy jira to run tests on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-19717: -- > Dummy jira to run tests on branch-3 > --- > > Key: HIVE-19717 > URL: https://issues.apache.org/jira/browse/HIVE-19717 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19498) Vectorization: CAST expressions produce wrong results
[ https://issues.apache.org/jira/browse/HIVE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19498: --- Fix Version/s: (was: 4.0.0) (was: 3.1.0) > Vectorization: CAST expressions produce wrong results > - > > Key: HIVE-19498 > URL: https://issues.apache.org/jira/browse/HIVE-19498 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Attachments: HIVE-19498.01.patch, HIVE-19498.02.patch, > HIVE-19498.03.patch, HIVE-19498.04.patch, HIVE-19498.05.patch > > > Wrong results for: > DATE --> BOOLEAN > DOUBLE --> DECIMAL > STRING|CHAR|VARCHAR --> DECIMAL > TIMESTAMP --> LONG -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19498) Vectorization: CAST expressions produce wrong results
[ https://issues.apache.org/jira/browse/HIVE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491223#comment-16491223 ] Vineet Garg commented on HIVE-19498: Reverted this from branch-3 as well. > Vectorization: CAST expressions produce wrong results > - > > Key: HIVE-19498 > URL: https://issues.apache.org/jira/browse/HIVE-19498 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Attachments: HIVE-19498.01.patch, HIVE-19498.02.patch, > HIVE-19498.03.patch, HIVE-19498.04.patch, HIVE-19498.05.patch > > > Wrong results for: > DATE --> BOOLEAN > DOUBLE --> DECIMAL > STRING|CHAR|VARCHAR --> DECIMAL > TIMESTAMP --> LONG -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19700) Workaround for JLine issue with UnsupportedTerminal
[ https://issues.apache.org/jira/browse/HIVE-19700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491209#comment-16491209 ] Yongzhi Chen commented on HIVE-19700: - The following method shows that when mask == null, the string is printed in full, while when mask == NULL_MASK, nothing is printed. So mask == null and mask == NULL_MASK mean the opposite of each other. {noformat} /** * Write out the specified string to the buffer and the output stream. */ public final void putString(final CharSequence str) throws IOException { buf.write(str); if (mask == null) { // no masking print(str); } else if (mask == NULL_MASK) { // don't print anything } else { print(mask, str.length()); } drawBuffer(); } {noformat} > Workaround for JLine issue with UnsupportedTerminal > --- > > Key: HIVE-19700 > URL: https://issues.apache.org/jira/browse/HIVE-19700 > Project: Hive > Issue Type: Bug >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-19700.patch > > > From JLine's ConsoleReader, readLine(prompt, mask) calls the following > beforeReadLine() method. > {code} > try { > // System.out.println("is terminal supported " + > terminal.isSupported()); > if (!terminal.isSupported()) { > beforeReadLine(prompt, mask); > } > {code} > So specifically when using UnsupportedTerminal {{-Djline.terminal}} and > {{prompt=null}} and {{mask!=null}}, a "null" string gets printed to the > console before and after the query result. {{UnsupportedTerminal}} is > required to be used when running beeline as a background process; it hangs > otherwise. 
> {code} > private void beforeReadLine(final String prompt, final Character mask) { > if (mask != null && maskThread == null) { > final String fullPrompt = "\r" + prompt > + " " > + " " > + " " > + "\r" + prompt; > maskThread = new Thread() > { > public void run() { > while (!interrupted()) { > try { > Writer out = getOutput(); > out.write(fullPrompt); > {code} > So the {{prompt}} is null and {{mask}} is NOT null in at least 2 scenarios in > beeline. > when beeline's silent=true, prompt is null > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1264 > when running multiline queries > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/Commands.java#L1093 > When executing beeline in script mode (commands in a file), there should not > be any masking while reading lines from the script file, i.e., the entire line > should be a beeline command or part of a multiline hive query. > So it should be safe to use a null mask instead of > {{ConsoleReader.NULL_MASK}} when using UnsupportedTerminal as jline terminal. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
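The three-way mask behavior discussed above can be sketched as follows. This models NULL_MASK as JLine's (char) 0 sentinel and mirrors the branching of putString locally rather than calling JLine itself, so it is an illustration of the semantics, not the library code.

```java
public class MaskDemo {
    // JLine's ConsoleReader.NULL_MASK is the (char) 0 character.
    static final Character NULL_MASK = Character.valueOf((char) 0);

    // Mirrors the branches in putString: a null mask echoes the input,
    // NULL_MASK suppresses output entirely, and any other mask prints the
    // mask character once per input character.
    static String rendered(CharSequence str, Character mask) {
        if (mask == null) {
            return str.toString();  // no masking
        } else if (mask.equals(NULL_MASK)) {
            return "";              // don't print anything
        } else {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < str.length(); i++) {
                sb.append(mask.charValue());
            }
            return sb.toString();
        }
    }

    public static void main(String[] args) {
        System.out.println(rendered("secret", null));      // secret
        System.out.println(rendered("secret", NULL_MASK)); // (nothing)
        System.out.println(rendered("secret", '*'));       // ******
    }
}
```

This makes the comment's point concrete: null and NULL_MASK are opposite extremes, which is why swapping one for the other changes what beeline echoes.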
[jira] [Commented] (HIVE-19711) Refactor Hive Schema Tool
[ https://issues.apache.org/jira/browse/HIVE-19711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491208#comment-16491208 ] Miklos Gergely commented on HIVE-19711: --- [~ashutoshc] is 3.0.0 already released? How is that possible, if the upgrade is not ready yet? > Refactor Hive Schema Tool > - > > Key: HIVE-19711 > URL: https://issues.apache.org/jira/browse/HIVE-19711 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-19711.01.patch > > > HiveSchemaTool is a 1500-line class trying to do everything. It should > be cut into multiple smaller classes, each handling one component. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19715) Consolidated and flexible API for fetching partition metadata from HMS
[ https://issues.apache.org/jira/browse/HIVE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491195#comment-16491195 ] Alexander Kolbasov commented on HIVE-19715: --- One part of the existing API which IMO is particularly messy is the handling of partition expressions - it would be great to avoid sending serialized Java classes and UDFs via Thrift API. > Consolidated and flexible API for fetching partition metadata from HMS > -- > > Key: HIVE-19715 > URL: https://issues.apache.org/jira/browse/HIVE-19715 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore >Reporter: Todd Lipcon >Priority: Major > > Currently, the HMS thrift API exposes 17 different APIs for fetching > partition-related information. There is somewhat of a combinatorial explosion > going on, where each API has variants with and without "auth" info, by pspecs > vs names, by filters, by exprs, etc. Having all of these separate APIs long > term is a maintenance burden and also more confusing for consumers. > Additionally, even with all of these APIs, there is a lack of granularity in > fetching only the information needed for a particular use case. For example, > in some use cases it may be beneficial to only fetch the partition locations > without wasting effort fetching statistics, etc. > This JIRA proposes that we add a new "one API to rule them all" for fetching > partition info. The request and response would be encapsulated in structs. 
> Some desirable properties: > - the request should be able to specify which pieces of information are > required (eg location, properties, etc) > - in the case of partition parameters, the request should be able to do > either whitelisting or blacklisting (eg to exclude large incremental column > stats HLL dumped in there by Impala) > - the request should optionally specify auth info (to encompass the > "with_auth" variants) > - the request should be able to designate the set of partitions to access > through one of several different methods (eg "all", list, expr, > part_vals, etc) > - the struct should be easily evolvable so that new pieces of info can be > added > - the response should be designed in such a way as to avoid transferring > redundant information for common cases (eg simple "dictionary coding" of > strings like parameter names, etc) > - the API should support some form of pagination for tables with large > partition counts -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19715) Consolidated and flexible API for fetching partition metadata from HMS
[ https://issues.apache.org/jira/browse/HIVE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491194#comment-16491194 ] Alexander Kolbasov commented on HIVE-19715: --- Are you proposing a Thrift version of Java interning? Should we also have a unified way to send a list of locations as a path trie (or some other compressed form)? > Consolidated and flexible API for fetching partition metadata from HMS > -- > > Key: HIVE-19715 > URL: https://issues.apache.org/jira/browse/HIVE-19715 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore >Reporter: Todd Lipcon >Priority: Major > > Currently, the HMS thrift API exposes 17 different APIs for fetching > partition-related information. There is somewhat of a combinatorial explosion > going on, where each API has variants with and without "auth" info, by pspecs > vs names, by filters, by exprs, etc. Having all of these separate APIs long > term is a maintenance burden and also more confusing for consumers. > Additionally, even with all of these APIs, there is a lack of granularity in > fetching only the information needed for a particular use case. For example, > in some use cases it may be beneficial to only fetch the partition locations > without wasting effort fetching statistics, etc. > This JIRA proposes that we add a new "one API to rule them all" for fetching > partition info. The request and response would be encapsulated in structs. 
> Some desirable properties: > - the request should be able to specify which pieces of information are > required (eg location, properties, etc) > - in the case of partition parameters, the request should be able to do > either whitelisting or blacklisting (eg to exclude large incremental column > stats HLL dumped in there by Impala) > - the request should optionally specify auth info (to encompass the > "with_auth" variants) > - the request should be able to designate the set of partitions to access > through one of several different methods (eg "all", list, expr, > part_vals, etc) > - the struct should be easily evolvable so that new pieces of info can be > added > - the response should be designed in such a way as to avoid transferring > redundant information for common cases (eg simple "dictionary coding" of > strings like parameter names, etc) > - the API should support some form of pagination for tables with large > partition counts -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19700) Workaround for JLine issue with UnsupportedTerminal
[ https://issues.apache.org/jira/browse/HIVE-19700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491193#comment-16491193 ] Naveen Gangam commented on HIVE-19700: -- [~ychena] Thanks for the comments. I am not sure what you mean by print everything. In the output I posted above, there are some rogue characters (see the null at the beginning of the line?) {code} $ cat /tmp/b_multi.out null+-+-+-+-+-+--+--+--+--+ {code} That null character is addressed. I also thought that making the prompt "" was a cleaner fix. However, in the case of multiline queries in a script file, beeline prints out a blank line for every line of the query at the beginning and end of the result set. The code that causes that behavior is below (prompt="" and mask != null) {code} private void beforeReadLine(final String prompt, final Character mask) { if (mask != null && maskThread == null) { final String fullPrompt = "\r" + prompt + " " + " " + " " + "\r" + prompt; {code} So if the script file had a 20-line query, the beeline output would contain 20 blank lines before the output and 20 blank lines after the output. So that could break existing use cases. Thanks > Workaround for JLine issue with UnsupportedTerminal > --- > > Key: HIVE-19700 > URL: https://issues.apache.org/jira/browse/HIVE-19700 > Project: Hive > Issue Type: Bug >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-19700.patch > > > From the JLine's ConsoleReader, readLine(prompt, mask) calls the following > beforeReadLine() method. > {code} > try { > // System.out.println("is terminal supported " + > terminal.isSupported()); > if (!terminal.isSupported()) { > beforeReadLine(prompt, mask); > } > {code} > So specifically when using UnsupportedTerminal {{-Djline.terminal}} and > {{prompt=null}} and {{mask!=null}}, a "null" string gets printed to the > console before and after the query result. 
{{UnsupportedTerminal}} is > required when running beeline as a background process; beeline hangs > otherwise. {code} > private void beforeReadLine(final String prompt, final Character mask) { > if (mask != null && maskThread == null) { > final String fullPrompt = "\r" + prompt > + " " > + " " > + " " > + "\r" + prompt; > maskThread = new Thread() > { > public void run() { > while (!interrupted()) { > try { > Writer out = getOutput(); > out.write(fullPrompt); > {code} > So the {{prompt}} is null and {{mask}} is NOT null in at least 2 scenarios in > beeline. > when beeline's silent=true, prompt is null > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1264 > when running multiline queries > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/Commands.java#L1093 > When executing beeline in script mode (commands in a file), there should not > be any masking while reading lines from the script file, i.e. the entire line > should be a beeline command or part of a multiline hive query. > So it should be safe to use a null mask instead of > {{ConsoleReader.NULL_MASK}} when using UnsupportedTerminal as the jline terminal. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
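The blank-line effect described in this thread can be reproduced without jline at all. This toy sketch (an assumption-based model, mirroring the fullPrompt expression quoted from beforeReadLine above) shows that with prompt "" the string written per input line contains no visible characters:

```java
// Toy reproduction of the effect described above: jline's beforeReadLine()
// builds fullPrompt from the prompt plus spaces and carriage returns, so a
// prompt of "" yields a string with nothing visible in it -- written once
// per input line, which is where the blank lines around the result set
// come from when a script file contains a multiline query.
public class BlankPromptDemo {
    static String fullPrompt(String prompt) {
        // same expression as in the quoted beforeReadLine() snippet
        return "\r" + prompt + " " + " " + " " + "\r" + prompt;
    }
}
```

So a 20-line query in a script file would emit 20 such whitespace-only writes before and after the result set, matching the behavior reported above.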
[jira] [Commented] (HIVE-19700) Workaround for JLine issue with UnsupportedTerminal
[ https://issues.apache.org/jira/browse/HIVE-19700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491180#comment-16491180 ] Yongzhi Chen commented on HIVE-19700: - Your fix changes the behavior from not printing anything to printing everything. Should you focus on changing the prompt to "" for unsupported terminals? > Workaround for JLine issue with UnsupportedTerminal > --- > > Key: HIVE-19700 > URL: https://issues.apache.org/jira/browse/HIVE-19700 > Project: Hive > Issue Type: Bug >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-19700.patch > > > From the JLine's ConsoleReader, readLine(prompt, mask) calls the following > beforeReadLine() method. > {code} > try { > // System.out.println("is terminal supported " + > terminal.isSupported()); > if (!terminal.isSupported()) { > beforeReadLine(prompt, mask); > } > {code} > So specifically when using UnsupportedTerminal {{-Djline.terminal}} and > {{prompt=null}} and {{mask!=null}}, a "null" string gets printed to the > console before and after the query result. {{UnsupportedTerminal}} is > required to be used when running beeline as a background process, hangs > otherwise. > {code} > private void beforeReadLine(final String prompt, final Character mask) { > if (mask != null && maskThread == null) { > final String fullPrompt = "\r" + prompt > + " " > + " " > + " " > + "\r" + prompt; > maskThread = new Thread() > { > public void run() { > while (!interrupted()) { > try { > Writer out = getOutput(); > out.write(fullPrompt); > {code} > So the {{prompt}} is null and {{mask}} is NOT in at least 2 scenarios in > beeline. 
> when beeline's silent=true, prompt is null > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1264 > when running multiline queries > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/Commands.java#L1093 > When executing beeline in script mode (commands in a file), there should not > be any masking while reading lines from the script file. aka, entire line > should be a beeline command or part of a multiline hive query. > So it should be safe to use a null mask instead of > {{ConsoleReader.NULL_MASK}} when using UnsupportedTerminal as jline terminal. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (HIVE-19700) Workaround for JLine issue with UnsupportedTerminal
[ https://issues.apache.org/jira/browse/HIVE-19700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-19700: Comment: was deleted (was: I think I understand the issue. JLine has an issue of misusing null and ConsoleReader.NULL_MASK . In beforeReadLine, it should check mask != ConsoleReader.NULL_MASK not mask!=null Your workaround try to feed null value which the beforeReadLine can properly handle. The fix LGTM +1) > Workaround for JLine issue with UnsupportedTerminal > --- > > Key: HIVE-19700 > URL: https://issues.apache.org/jira/browse/HIVE-19700 > Project: Hive > Issue Type: Bug >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-19700.patch > > > From the JLine's ConsoleReader, readLine(prompt, mask) calls the following > beforeReadLine() method. > {code} > try { > // System.out.println("is terminal supported " + > terminal.isSupported()); > if (!terminal.isSupported()) { > beforeReadLine(prompt, mask); > } > {code} > So specifically when using UnsupportedTerminal {{-Djline.terminal}} and > {{prompt=null}} and {{mask!=null}}, a "null" string gets printed to the > console before and after the query result. {{UnsupportedTerminal}} is > required to be used when running beeline as a background process, hangs > otherwise. > {code} > private void beforeReadLine(final String prompt, final Character mask) { > if (mask != null && maskThread == null) { > final String fullPrompt = "\r" + prompt > + " " > + " " > + " " > + "\r" + prompt; > maskThread = new Thread() > { > public void run() { > while (!interrupted()) { > try { > Writer out = getOutput(); > out.write(fullPrompt); > {code} > So the {{prompt}} is null and {{mask}} is NOT in atleast 2 scenarios in > beeline. 
> when beeline's silent=true, prompt is null > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1264 > when running multiline queries > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/Commands.java#L1093 > When executing beeline in script mode (commands in a file), there should not > be any masking while reading lines from the script file. aka, entire line > should be a beeline command or part of a multiline hive query. > So it should be safe to use a null mask instead of > {{ConsoleReader.NULL_MASK}} when using UnsupportedTerminal as jline terminal. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19644) change WM syntax to avoid conflicts with identifiers starting with a number
[ https://issues.apache.org/jira/browse/HIVE-19644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19644: Attachment: HIVE-19644.01.patch > change WM syntax to avoid conflicts with identifiers starting with a number > --- > > Key: HIVE-19644 > URL: https://issues.apache.org/jira/browse/HIVE-19644 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19644.01.patch, HIVE-19644.patch > > > Time/etc literals conflict with non-ANSI query column names starting with a > number that were previously supported without quotes (e.g. 30days). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17683) Annotate Query Plan with locking information
[ https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491176#comment-16491176 ] Eugene Koifman commented on HIVE-17683: --- Hmm - very good question. We don't acquire locks on anything that would correspond to an intermediate node in the plan like Join or group by. The lock manager can lock database/table/partition objects, but FileSink may write multiple partitions (dynamic partition insert) and a Scan can read many partitions. And I don't know if Database is represented in the QueryPlan at all... I think you need to experiment with this a bit and see what locks you get for various query plans to see if there is a reasonable way to represent lock info as part of the plan. It may turn out one doesn't map very well to the other. In that case maybe you should consider writing something like "show locks " or "explain locks " that just dumps the LockRequest structure. cc [~alangates] > Annotate Query Plan with locking information > > > Key: HIVE-17683 > URL: https://issues.apache.org/jira/browse/HIVE-17683 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Critical > > Explore if it's possible to add info about what locks will be asked for to > the query plan. > Lock acquisition (for Acid Lock Manager) is done in > DbTxnManager.acquireLocks() which is called once the query starts running. > Would need to refactor that. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19700) Workaround for JLine issue with UnsupportedTerminal
[ https://issues.apache.org/jira/browse/HIVE-19700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491173#comment-16491173 ] Yongzhi Chen commented on HIVE-19700: - I think I understand the issue. JLine has an issue of misusing null and ConsoleReader.NULL_MASK. In beforeReadLine, it should check mask != ConsoleReader.NULL_MASK, not mask != null. Your workaround feeds a null value, which beforeReadLine can properly handle. The fix LGTM +1 > Workaround for JLine issue with UnsupportedTerminal > --- > > Key: HIVE-19700 > URL: https://issues.apache.org/jira/browse/HIVE-19700 > Project: Hive > Issue Type: Bug >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Major > Fix For: 2.2.1 > > Attachments: HIVE-19700.patch > > > From the JLine's ConsoleReader, readLine(prompt, mask) calls the following > beforeReadLine() method. > {code} > try { > // System.out.println("is terminal supported " + > terminal.isSupported()); > if (!terminal.isSupported()) { > beforeReadLine(prompt, mask); > } > {code} > So specifically when using UnsupportedTerminal {{-Djline.terminal}} and > {{prompt=null}} and {{mask!=null}}, a "null" string gets printed to the > console before and after the query result. {{UnsupportedTerminal}} is > required to be used when running beeline as a background process, hangs > otherwise. > {code} > private void beforeReadLine(final String prompt, final Character mask) { > if (mask != null && maskThread == null) { > final String fullPrompt = "\r" + prompt > + " " > + " " > + " " > + "\r" + prompt; > maskThread = new Thread() > { > public void run() { > while (!interrupted()) { > try { > Writer out = getOutput(); > out.write(fullPrompt); > {code} > So the {{prompt}} is null and {{mask}} is NOT in at least 2 scenarios in > beeline. 
> when beeline's silent=true, prompt is null > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1264 > when running multiline queries > * > https://github.com/apache/hive/blob/master/beeline/src/java/org/apache/hive/beeline/Commands.java#L1093 > When executing beeline in script mode (commands in a file), there should not > be any masking while reading lines from the script file. aka, entire line > should be a beeline command or part of a multiline hive query. > So it should be safe to use a null mask instead of > {{ConsoleReader.NULL_MASK}} when using UnsupportedTerminal as jline terminal. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
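A dependency-free model of the condition under discussion (assuming jline 2.x, where ConsoleReader.NULL_MASK is declared as the character '\0'): beforeReadLine() keys its mask thread off mask != null, which is why a plain null mask sidesteps the rogue output while NULL_MASK does not.

```java
// Pure-Java model (no jline on the classpath) of the check discussed above.
// Assumption: jline 2.x defines ConsoleReader.NULL_MASK as the '\0' character.
public class MaskCheck {
    static final Character NULL_MASK = Character.valueOf((char) 0);

    // mirrors the guard in jline's beforeReadLine(): it tests null-ness,
    // not equality with NULL_MASK, so passing NULL_MASK still starts the
    // mask thread while passing null does not
    static boolean wouldStartMaskThread(Character mask) {
        return mask != null;
    }
}
```

This is the asymmetry the workaround exploits: substituting null for NULL_MASK under UnsupportedTerminal disables the mask thread entirely.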
[jira] [Commented] (HIVE-18875) Enable SMB Join by default in Tez
[ https://issues.apache.org/jira/browse/HIVE-18875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491170#comment-16491170 ] Gunther Hagleitner commented on HIVE-18875: --- This looks cleaner, but I don't think the fix in op traits is correct. For one thing you're changing the bucketing cols also, and bucketing doesn't depend on the order (like sort). But the main problem is that I think a reorder via select doesn't actually change the sort cols either. Can you please describe the query/operator sequence that is causing problems one more time? (The comment in the code about gby and join is using internal columns and doesn't have the sql associated with it.) > Enable SMB Join by default in Tez > - > > Key: HIVE-18875 > URL: https://issues.apache.org/jira/browse/HIVE-18875 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18875.1.patch, HIVE-18875.2.patch, > HIVE-18875.3.patch, HIVE-18875.4.patch, HIVE-18875.5.patch, > HIVE-18875.6.patch, HIVE-18875.7.patch, HIVE-18875.8.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19684) Hive stats optimizer wrongly uses stats against non native tables
[ https://issues.apache.org/jira/browse/HIVE-19684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491169#comment-16491169 ] Hive QA commented on HIVE-19684: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 48s{color} | {color:blue} ql in master has 2323 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s{color} | {color:red} ql: The patch generated 2 new + 313 unchanged - 3 fixed = 315 total (was 316) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 2s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11214/dev-support/hive-personality.sh | | git revision | master / d8b8c67 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11214/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11214/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Hive stats optimizer wrongly uses stats against non native tables > - > > Key: HIVE-19684 > URL: https://issues.apache.org/jira/browse/HIVE-19684 > Project: Hive > Issue Type: Bug > Components: Druid integration, Physical Optimizer >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-19684.2.patch, HIVE-19684.patch > > > Stats of non native tables are inaccurate, thus queries over non native > tables cannot be optimized by the stats optimizer. 
> Take example of query > {code} > Explain select count(*) from (select `__time` from druid_test_table limit 1) > as src ; > {code} > the plan will be reduced to > {code} > POSTHOOK: query: explain extended select count(*) from (select `__time` from > druid_test_table limit 1) as src > POSTHOOK: type: QUERY > STAGE DEPENDENCIES: > Stage-0 is a root stage > STAGE PLANS: > Stage: Stage-0 > Fetch Operator > limit: 1 > Processor Tree: > ListSink > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19644) change WM syntax to avoid conflicts with identifiers starting with a number
[ https://issues.apache.org/jira/browse/HIVE-19644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491168#comment-16491168 ] Sergey Shelukhin commented on HIVE-19644: - [~ashutoshc] making the parser configurable is going to be a pain as far as I can tell, since the new tokens (that are removed by this patch) need to either be present for the original WM behavior, or absent to avoid a parse-time conflict. Agree about breaking compat later... for the config, someone can do a follow-up if desired :) > change WM syntax to avoid conflicts with identifiers starting with a number > --- > > Key: HIVE-19644 > URL: https://issues.apache.org/jira/browse/HIVE-19644 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19644.patch > > > Time/etc literals conflict with non-ANSI query column names starting with a > number that were previously supported without quotes (e.g. 30days). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19629: - Attachment: HIVE-19629.4.patch > Enable Decimal64 reader after orc version upgrade > - > > Key: HIVE-19629 > URL: https://issues.apache.org/jira/browse/HIVE-19629 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19629.1.patch, HIVE-19629.2.patch, > HIVE-19629.3.patch, HIVE-19629.4.patch > > > ORC 1.5.0 supports a new fast decimal 64 reader. A new VRB has to be created to > make use of decimal 64 column vectors. Also LLAP IO will need a new reader > to read from the long stream to decimal 64. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18079) Statistics: Allow HyperLogLog to be merged to the lowest-common-denominator bit-size
[ https://issues.apache.org/jira/browse/HIVE-18079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-18079: - Attachment: HIVE-18079.19.patch > Statistics: Allow HyperLogLog to be merged to the lowest-common-denominator > bit-size > > > Key: HIVE-18079 > URL: https://issues.apache.org/jira/browse/HIVE-18079 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore, Statistics >Affects Versions: 3.0.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-18079-branch-3.patch, HIVE-18079-branch-3.patch, > HIVE-18079-branch-3.patch, HIVE-18079.1.patch, HIVE-18079.10.patch, > HIVE-18079.11.patch, HIVE-18079.12.patch, HIVE-18079.13.patch, > HIVE-18079.14.patch, HIVE-18079.15.patch, HIVE-18079.15.patch, > HIVE-18079.15.patch, HIVE-18079.16.patch, HIVE-18079.17.patch, > HIVE-18079.17.patch, HIVE-18079.18.patch, HIVE-18079.19.patch, > HIVE-18079.2.patch, HIVE-18079.4.patch, HIVE-18079.5.patch, > HIVE-18079.6.patch, HIVE-18079.7.patch, HIVE-18079.8.patch, HIVE-18079.9.patch > > > HyperLogLog can merge a 14 bit HLL into a 10 bit HLL bitset, because of its > mathematical hash distribution & construction. > Allow the squashing of a 14 bit HLL -> 10 bit HLL without needing a second > scan over the data-set. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
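The squash this issue describes can be sketched on a toy HyperLogLog (this is not Hive's own HLL class; the register layout and hash here are assumptions for illustration). When the dropped index bits of a source register are all zero, its rank simply shifts by the number of dropped bits; otherwise the position of the first 1 among the dropped bits determines the new rank:

```java
// Toy HyperLogLog illustrating the fold described in this issue; not Hive's
// HyperLogLog implementation (register layout and hash are assumptions).
public class HllFold {
    static long hash(long x) {                  // splitmix64 finalizer
        x += 0x9E3779B97F4A7C15L;
        x = (x ^ (x >>> 30)) * 0xBF58476D1CE4E5B9L;
        x = (x ^ (x >>> 27)) * 0x94D049BB133111EBL;
        return x ^ (x >>> 31);
    }

    static void add(byte[] regs, int p, long h) {
        int idx = (int) (h >>> (64 - p));       // top p bits pick a register
        long w = h << p;                        // remaining bits give the rank
        int rank = Math.min(Long.numberOfLeadingZeros(w) + 1, 64 - p + 1);
        if (rank > regs[idx]) regs[idx] = (byte) rank;
    }

    // Squash a pFrom-bit sketch down to pTo bits without rescanning the data.
    static byte[] fold(byte[] regs, int pFrom, int pTo) {
        int d = pFrom - pTo;                    // index bits being dropped
        byte[] out = new byte[1 << pTo];
        for (int i = 0; i < regs.length; i++) {
            if (regs[i] == 0) continue;         // register never touched
            int j = i >>> d;                    // target register
            int extra = i & ((1 << d) - 1);     // the d dropped index bits
            int rank = (extra != 0)
                // first 1-bit among the dropped bits sets the new rank
                ? Integer.numberOfLeadingZeros(extra) - (32 - d) + 1
                // all dropped bits zero: the old rank shifts by d
                : regs[i] + d;
            if (rank > out[j]) out[j] = (byte) rank;
        }
        return out;
    }
}
```

Folding a sketch built at p=14 this way yields registers identical to a sketch built directly at p=10 from the same hashes, which is why no second scan over the data set is needed.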
[jira] [Commented] (HIVE-19680) Push down limit is not applied for Druid storage handler.
[ https://issues.apache.org/jira/browse/HIVE-19680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491164#comment-16491164 ] Ashutosh Chauhan commented on HIVE-19680: - Pushed to master. Thanks, Slim! [~vgarg] This bug causes wrong results. Can we get this in branch-3? > Push down limit is not applied for Druid storage handler. > - > > Key: HIVE-19680 > URL: https://issues.apache.org/jira/browse/HIVE-19680 > Project: Hive > Issue Type: Bug > Components: Druid integration >Affects Versions: 3.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-19680.1.patch, HIVE-19680.patch > > > Query like > {code} > select `__time` from druid_test_table limit 1; > {code} > returns more than one row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491158#comment-16491158 ] Prasanth Jayachandran commented on HIVE-19629: -- [~mmccline] could you please review the .3 patch? This patch is close to a QA run; I will combine the ORC version patch (HIVE-19465) into the .4 patch so that we get a QA run for this patch. > Enable Decimal64 reader after orc version upgrade > - > > Key: HIVE-19629 > URL: https://issues.apache.org/jira/browse/HIVE-19629 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19629.1.patch, HIVE-19629.2.patch, > HIVE-19629.3.patch > > > ORC 1.5.0 supports a new fast decimal 64 reader. A new VRB has to be created to > make use of decimal 64 column vectors. Also LLAP IO will need a new reader > to read from the long stream to decimal 64. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19716) Set spark.local.dir for a few more HoS integration tests
[ https://issues.apache.org/jira/browse/HIVE-19716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19716: Description: There are a few more flaky tests that are failing because they run HoS queries that write some temp data to {{/tmp/}}. These tests are regular JUnit tests, so they weren't covered in the previous attempts to do this. (was: There are a few more flaky tests that are failing because the run a HoS queries that writes some temp data to {{/tmp/}}. These tests are regular JUnit tests, so they weren't covered in the previous attempts to do this.) > Set spark.local.dir for a few more HoS integration tests > > > Key: HIVE-19716 > URL: https://issues.apache.org/jira/browse/HIVE-19716 > Project: Hive > Issue Type: Test > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19716.1.patch > > > There are a few more flaky tests that are failing because they run HoS > queries that write some temp data to {{/tmp/}}. These tests are regular > JUnit tests, so they weren't covered in the previous attempts to do this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
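A minimal sketch of the kind of change this issue describes (helper names here are illustrative and not from the actual patch; spark.local.dir itself is a real Spark configuration property): point each test's Spark scratch space at a fresh temporary directory instead of the shared /tmp.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Illustrative helper (not from the HIVE-19716 patch): builds Spark conf
// entries that redirect scratch space to a per-test directory so parallel
// test runs do not collide in /tmp.
public class SparkLocalDirConf {
    static Map<String, String> perTestSparkConf() throws Exception {
        Path scratch = Files.createTempDirectory("hos-test-scratch");
        Map<String, String> conf = new HashMap<>();
        conf.put("spark.local.dir", scratch.toString());
        return conf;
    }
}
```

In a JUnit test these entries would be applied to the HiveConf/SparkConf before the session starts, giving each run its own scratch directory.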
[jira] [Updated] (HIVE-19558) HiveAuthorizationProviderBase gets catalog name from config rather than db object
[ https://issues.apache.org/jira/browse/HIVE-19558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19558: -- Attachment: HIVE-19558.1take6.patch > HiveAuthorizationProviderBase gets catalog name from config rather than db > object > - > > Key: HIVE-19558 > URL: https://issues.apache.org/jira/browse/HIVE-19558 > Project: Hive > Issue Type: Bug > Components: Authorization >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Fix For: 3.0.1 > > Attachments: HIVE-19558.1take2.patch, HIVE-19558.1take3.patch, > HIVE-19558.1take4.patch, HIVE-19558.1take5.patch, HIVE-19558.1take6.patch, > HIVE-19558.patch > > > HiveAuthorizationProviderBase.getDatabase uses just the database name to > fetch the database, relying on getDefaultCatalog() to fetch the catalog name > from the conf file. This does not work when the client has passed in an > object for a different catalog. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19716) Set spark.local.dir for a few more HoS integration tests
[ https://issues.apache.org/jira/browse/HIVE-19716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19716: Attachment: HIVE-19716.1.patch > Set spark.local.dir for a few more HoS integration tests > > > Key: HIVE-19716 > URL: https://issues.apache.org/jira/browse/HIVE-19716 > Project: Hive > Issue Type: Test > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19716.1.patch > > > There are a few more flaky tests that are failing because they run HoS > queries that write some temp data to {{/tmp/}}. These tests are regular > JUnit tests, so they weren't covered in the previous attempts to do this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19716) Set spark.local.dir for a few more HoS integration tests
[ https://issues.apache.org/jira/browse/HIVE-19716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19716: Status: Patch Available (was: Open) > Set spark.local.dir for a few more HoS integration tests > > > Key: HIVE-19716 > URL: https://issues.apache.org/jira/browse/HIVE-19716 > Project: Hive > Issue Type: Test > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19716.1.patch > > > There are a few more flaky tests that are failing because they run HoS > queries that write some temp data to {{/tmp/}}. These tests are regular > JUnit tests, so they weren't covered in the previous attempts to do this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19716) Set spark.local.dir for a few more HoS integration tests
[ https://issues.apache.org/jira/browse/HIVE-19716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-19716: --- > Set spark.local.dir for a few more HoS integration tests > > > Key: HIVE-19716 > URL: https://issues.apache.org/jira/browse/HIVE-19716 > Project: Hive > Issue Type: Test > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > > There are a few more flaky tests that are failing because they run HoS > queries that write some temp data to {{/tmp/}}. These tests are regular > JUnit tests, so they weren't covered in the previous attempts to do this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17683) Annotate Query Plan with locking information
[ https://issues.apache.org/jira/browse/HIVE-17683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491150#comment-16491150 ] Igor Kryvenko commented on HIVE-17683: [~ekoifman] Thanks for the useful tips. Do we need to annotate only TableScan and FileSink operators? > Annotate Query Plan with locking information > > Key: HIVE-17683 > URL: https://issues.apache.org/jira/browse/HIVE-17683 > Project: Hive > Issue Type: New Feature > Components: Transactions > Reporter: Eugene Koifman > Assignee: Igor Kryvenko > Priority: Critical > > Explore whether it's possible to add info about which locks will be requested to > the query plan. > Lock acquisition (for the Acid Lock Manager) is done in > DbTxnManager.acquireLocks(), which is called once the query starts running. > Would need to refactor that.
[jira] [Commented] (HIVE-19715) Consolidated and flexible API for fetching partition metadata from HMS
[ https://issues.apache.org/jira/browse/HIVE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491149#comment-16491149 ] Todd Lipcon commented on HIVE-19715: bq. the response should be designed in such a way as to avoid transferring redundant information for common cases (eg simple "dictionary coding" of strings like parameter names, etc) To elaborate on this, the idea is that the response could have a 'list<string> string_pool' member at the top level. Underlying partition info like storage descriptor input formats, serde names, parameters, etc., can use integer indexes into the string_pool. This can likely reduce the size of responses on the wire as well as memory/GC/CPU costs while deserializing. > Consolidated and flexible API for fetching partition metadata from HMS > > Key: HIVE-19715 > URL: https://issues.apache.org/jira/browse/HIVE-19715 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore > Reporter: Todd Lipcon > Priority: Major > > Currently, the HMS thrift API exposes 17 different APIs for fetching > partition-related information. There is somewhat of a combinatorial explosion > going on, where each API has variants with and without "auth" info, by pspecs > vs names, by filters, by exprs, etc. Having all of these separate APIs long > term is a maintenance burden and also more confusing for consumers. > Additionally, even with all of these APIs, there is a lack of granularity in > fetching only the information needed for a particular use case. For example, > in some use cases it may be beneficial to only fetch the partition locations > without wasting effort fetching statistics, etc. > This JIRA proposes that we add a new "one API to rule them all" for fetching > partition info. The request and response would be encapsulated in structs. > Some desirable properties: > - the request should be able to specify which pieces of information are > required (eg location, properties, etc) > - in the case of partition parameters, the request should be able to do > either whitelisting or blacklisting (eg to exclude large incremental column > stats HLL dumped in there by Impala) > - the request should optionally specify auth info (to encompass the > "with_auth" variants) > - the request should be able to designate the set of partitions to access > through one of several different methods (eg "all", list, expr, > part_vals, etc) > - the struct should be easily evolvable so that new pieces of info can be > added > - the response should be designed in such a way as to avoid transferring > redundant information for common cases (eg simple "dictionary coding" of > strings like parameter names, etc) > - the API should support some form of pagination for tables with large > partition counts
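The string_pool idea in the comment above can be sketched as follows. This is a hypothetical illustration of dictionary coding (not code from the proposal): repeated strings such as parameter names, serde class names, and input formats are stored once in a pool, and per-partition structs carry small integer indexes instead of full copies.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the proposed "string_pool" response encoding.
public class StringPool {
    private final List<String> pool = new ArrayList<>();
    private final Map<String, Integer> index = new HashMap<>();

    // Return the pool index for s, adding it to the pool on first sight.
    // Every later occurrence of the same string costs one int, not one copy.
    public int intern(String s) {
        Integer i = index.get(s);
        if (i != null) {
            return i;
        }
        pool.add(s);
        index.put(s, pool.size() - 1);
        return pool.size() - 1;
    }

    // Resolve an index back to its string on the deserializing side.
    public String resolve(int i) {
        return pool.get(i);
    }

    public int size() {
        return pool.size();
    }
}
```

For a table with thousands of partitions that all share the same input format and serde, the pool holds each such string exactly once, which is where the wire-size and GC savings the comment mentions would come from.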
[jira] [Commented] (HIVE-19084) Test case in Hive Query Language fails with a java.lang.AssertionError.
[ https://issues.apache.org/jira/browse/HIVE-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491148#comment-16491148 ] Steve Yeom commented on HIVE-19084: OK. I have submitted the same patch with a different version suffix. We can check the results. > Test case in Hive Query Language fails with a java.lang.AssertionError. > > Key: HIVE-19084 > URL: https://issues.apache.org/jira/browse/HIVE-19084 > Project: Hive > Issue Type: Bug > Components: Test, Transactions > Environment: uname -a > Linux pts00607-vm3 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:46 UTC > 2018 ppc64le ppc64le ppc64le GNU/Linux > Reporter: Alisha Prabhu > Assignee: Steve Yeom > Priority: Major > Labels: patch-available > Attachments: HIVE-19084.02.patch, HIVE-19084.1.patch > > > The test case testInsertOverwriteForPartitionedMmTable in > TestTxnCommandsForMmTable.java and TestTxnCommandsForOrcMmTable.java fails > with a java.lang.AssertionError on the ppc arch. > The Maven command used is mvn -Dtest=TestTxnCommandsForMmTable test > The test case fails because the listStatus function of the FileSystem does not > guarantee that it returns file/directory statuses in sorted order. > Error: > {code:java} > [INFO] Running org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable > [ERROR] Tests run: 7, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: > 125.463 s <<< FAILURE! - in > org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable > [ERROR] > testInsertOverwriteForPartitionedMmTable(org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable) > Time elapsed: 13.57 s <<< FAILURE! > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testInsertOverwriteForPartitionedMmTable(TestTxnCommandsForMmTable.java:296) > {code}
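The shape of the fix implied by the description above can be sketched like this (illustrative only — the helper name is hypothetical, and the real patch operates on Hadoop FileStatus objects rather than plain strings): since listStatus() makes no ordering guarantee, a test that compares listing results against an expected sequence must impose an order on both sides before asserting.

```java
import java.util.Arrays;

// Illustrative helper: deterministic ordering for listing results before
// a test asserts on them positionally.
public class SortedListing {

    // Return a sorted copy of the paths; the input array is left untouched,
    // mirroring how a test would normalize listStatus() output.
    public static String[] sorted(String[] paths) {
        String[] copy = Arrays.copyOf(paths, paths.length);
        Arrays.sort(copy);
        return copy;
    }
}
```

On most Linux filesystems the unsorted order happens to look sorted, which is why the test only failed on some platforms (here, ppc) until the ordering assumption was removed.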
[jira] [Updated] (HIVE-19084) Test case in Hive Query Language fails with a java.lang.AssertionError.
[ https://issues.apache.org/jira/browse/HIVE-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom updated HIVE-19084: Attachment: HIVE-19084.02.patch
[jira] [Updated] (HIVE-19578) HLL merges tempList on every add
[ https://issues.apache.org/jira/browse/HIVE-19578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19578: Resolution: Fixed Fix Version/s: 4.0.0, 3.1.0 Status: Resolved (was: Patch Available) Committed to master and branch-3. Thanks for the reviews! > HLL merges tempList on every add > > Key: HIVE-19578 > URL: https://issues.apache.org/jira/browse/HIVE-19578 > Project: Hive > Issue Type: Bug > Reporter: Sergey Shelukhin > Assignee: Prasanth Jayachandran > Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19578.1.patch, HIVE-19578.2.patch, HIVE-19578.3.patch, HIVE-19578.4.patch, Screen Shot 2018-05-16 at 15.29.12 .png > > > See comments on HIVE-18866; this has significant perf overhead after the > even bigger overhead from hashing is removed. !Screen Shot 2018-05-16 at 15.29.12 .png!
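The idea behind the fix in the issue above can be illustrated with a toy sketch (this is not the actual HLL code — the class and the "merge" it performs are stand-ins): buffer incoming values in a temp list and fold them into the main structure only when the buffer fills, instead of paying the merge cost on every single add.

```java
// Toy model of amortized merging: add() is cheap; the expensive merge
// pass runs once per full buffer rather than once per element.
public class BufferedAdder {
    private final int[] temp;   // stand-in for HLL's tempList
    private int n = 0;
    public long merges = 0;     // counts how many (expensive) merge passes ran
    private long total = 0;     // stand-in for the merged sketch state

    public BufferedAdder(int bufferSize) {
        temp = new int[bufferSize];
    }

    public void add(int v) {
        temp[n++] = v;
        if (n == temp.length) {
            flush();            // merge only when the buffer is full
        }
    }

    public void flush() {
        if (n == 0) {
            return;             // nothing buffered, no merge pass
        }
        for (int i = 0; i < n; i++) {
            total += temp[i];
        }
        n = 0;
        merges++;
    }

    public long total() {
        flush();                // drain any remainder before reading
        return total;
    }
}
```

With a buffer of size k, n adds trigger roughly n/k merge passes instead of n, which is the amortization the patch appears to restore.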
[jira] [Commented] (HIVE-19305) Arrow format for LlapOutputFormatService (umbrella)
[ https://issues.apache.org/jira/browse/HIVE-19305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491138#comment-16491138 ] Hive QA commented on HIVE-19305: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12924836/HIVE-19305.1-branch-3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11213/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11213/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11213/ Messages: {noformat} This message was trimmed, see log for full details [javac "[loading ZipFileIndexFileObject[...]]" classpath-scanning output elided] {noformat}
[jira] [Commented] (HIVE-18117) Create TestCliDriver for HDFS EC
[ https://issues.apache.org/jira/browse/HIVE-18117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491140#comment-16491140 ] Andrew Sherman commented on HIVE-18117: OK, tests look clean now; please take a look [~stakiar] when you have time. > Create TestCliDriver for HDFS EC > > Key: HIVE-18117 > URL: https://issues.apache.org/jira/browse/HIVE-18117 > Project: Hive > Issue Type: Sub-task > Reporter: Sahil Takiar > Assignee: Andrew Sherman > Priority: Major > Attachments: HIVE-18117.1.patch, HIVE-18117.10.patch, > HIVE-18117.2.patch, HIVE-18117.3.patch, HIVE-18117.4.patch, > HIVE-18117.5.patch, HIVE-18117.6.patch, HIVE-18117.7.patch, > HIVE-18117.8.patch, HIVE-18117.9.patch > > > Should be able to do something similar to what we do for HDFS encryption. > TestErasureCodingHDFSCliDriver uses a test-only CommandProcessor > "ErasureProcessor" which allows .q files to contain Erasure Coding commands > similar to those provided by the hdfs ec command: > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html. > The Erasure Coding functionality is exposed through a new shim > "HdfsFileErasureCodingPolicy". > At this stage there are two .q files: > erasure_commnds.q (a simple test to show ERASURE commands can run on local fs > via TestCliDriver or on hdfs via TestErasureCodingHDFSCliDriver), and > erasure_simple.q (which does some trivial queries to demonstrate basic > functionality). > More tests will come in future commits.
[jira] [Updated] (HIVE-19464) Upgrade Parquet to 1.10.0
[ https://issues.apache.org/jira/browse/HIVE-19464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19464: Resolution: Fixed Fix Version/s: 4.0.0, 3.1.0 Status: Resolved (was: Patch Available) Pushed to master, branch-3. Cc [~vgarg] > Upgrade Parquet to 1.10.0 > > Key: HIVE-19464 > URL: https://issues.apache.org/jira/browse/HIVE-19464 > Project: Hive > Issue Type: Improvement > Components: File Formats > Affects Versions: 3.0.0 > Reporter: Jesus Camacho Rodriguez > Assignee: Jesus Camacho Rodriguez > Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19464.01.patch, HIVE-19464.02.patch, > HIVE-19464.03.patch, HIVE-19464.04.patch
[jira] [Updated] (HIVE-19667) Remove distribution management tag from pom.xml
[ https://issues.apache.org/jira/browse/HIVE-19667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19667: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) > Remove distribution management tag from pom.xml > > Key: HIVE-19667 > URL: https://issues.apache.org/jira/browse/HIVE-19667 > Project: Hive > Issue Type: Task > Reporter: Vineet Garg > Assignee: Vineet Garg > Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19667.1.patch > > > This tag overrides Apache's configuration in the Maven settings file and makes it > impossible to publish Maven artifacts. There is no way around it either.
[jira] [Commented] (HIVE-19084) Test case in Hive Query Language fails with a java.lang.AssertionError.
[ https://issues.apache.org/jira/browse/HIVE-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16491137#comment-16491137 ] Prasanth Jayachandran commented on HIVE-19084: Hive QA has not +1'ed the patch yet (there are some test failures in the last run). Can you re-upload the patch with a different version so that the precommit tests run again?