[jira] [Commented] (HIVE-18684) Race condition in RemoteSparkJobMonitor
[ https://issues.apache.org/jira/browse/HIVE-18684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506828#comment-16506828 ] Hive QA commented on HIVE-18684: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 19s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s{color} | {color:red} ql: The patch generated 15 new + 2 unchanged - 4 fixed = 17 total (was 6) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11646/dev-support/hive-personality.sh | | git revision | master / 6454585 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11646/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11646/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Race condition in RemoteSparkJobMonitor > --- > > Key: HIVE-18684 > URL: https://issues.apache.org/jira/browse/HIVE-18684 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18684.1.patch, HIVE-18684.2.patch, > HIVE-18684.3.patch > > > There is a race condition in {{RemoteSparkJobMonitor}}. Sometimes the info in > {{RemoteSparkJobMonitor#startMonitor.STARTED}} gets printed out, sometimes it > doesn't. 
This can be easily verified by running a qtest on > {{TestMiniSparkOnYarnCliDriver}} and counting the number of times {{Query > Hive on Spark job}} is printed vs. the number of times {{Finished > successfully in}} gets printed. > The issue is that {{RemoteSparkJobMonitor}} runs every second and checks > the state of {{JobHandle}}. Depending on the state, it prints out some > logging info. The content of the logs contains an implicit assumption that > logs in the {{STARTED}} state are printed before the logs in the > {{SUCCEEDED}} state. However, this isn't always the case. The state > transitions are driven by how long the remote Spark job takes to run, and if > it finishes within one second then the logs in the {{STARTED}} state are never > printed. > This can be confusing to users, and there is key debugging information that > is printed in the {{STARTED}} state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
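A minimal, self-contained sketch of the race described above (illustrative Java, not the actual RemoteSparkJobMonitor code): the monitor only logs what it observes at each poll, so a job that jumps straight from QUEUED to SUCCEEDED between two polls never triggers the STARTED-state logging.

```java
import java.util.ArrayList;
import java.util.List;

class PollingRaceSketch {
  enum State { QUEUED, STARTED, SUCCEEDED }

  // Simulates the monitor loop: each array element is the job state
  // observed by one poll of the (hypothetical) job handle.
  static List<String> monitor(State[] observedStates) {
    List<String> log = new ArrayList<>();
    for (State s : observedStates) {
      if (s == State.STARTED) {
        log.add("Query Hive on Spark job ..."); // key debug info is only logged here
      } else if (s == State.SUCCEEDED) {
        log.add("Finished successfully in ...");
        break;
      }
      // QUEUED: nothing to log, keep polling
    }
    return log;
  }

  public static void main(String[] args) {
    // Slow job: every state is observed by at least one poll, both lines appear.
    System.out.println(monitor(new State[] { State.QUEUED, State.STARTED, State.SUCCEEDED }));
    // Fast job: it finishes within one poll interval, STARTED is never observed,
    // and its log line is silently lost.
    System.out.println(monitor(new State[] { State.QUEUED, State.SUCCEEDED }));
  }
}
```

The point of the sketch is that the "Finished successfully" branch can fire without the "Query Hive on Spark job" branch ever having run, which matches the mismatch in line counts described above.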
[jira] [Commented] (HIVE-19769) Create dedicated objects for DB and Table names
[ https://issues.apache.org/jira/browse/HIVE-19769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506823#comment-16506823 ] Hive QA commented on HIVE-19769: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926958/HIVE-19769.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14516 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11645/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11645/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11645/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12926958 - PreCommit-HIVE-Build > Create dedicated objects for DB and Table names > --- > > Key: HIVE-19769 > URL: https://issues.apache.org/jira/browse/HIVE-19769 > Project: Hive > Issue Type: Sub-task > Components: storage-api >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19769.patch > > > Currently table names are always strings. Sometimes that string is just > tablename, sometimes it is dbname.tablename. Sometimes the code expects one > or the other, sometimes it handles either. This is burdensome for developers > and error prone. With the addition of catalog to the hierarchy, this becomes > even worse. > I propose to add two objects, DatabaseName and TableName. These will track > full names of each object. They will handle inserting default catalog and > database names when those are not provided. They will handle the conversions > to and from strings. 
> These will need to be added to storage-api because ValidTxnList will use it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
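The proposal above can be sketched roughly as a value object (names, defaults, and parsing rules here are illustrative, not the actual HIVE-19769 patch): it carries catalog, database, and table, fills in defaults when they are omitted, and owns the string conversions that callers currently do ad hoc.

```java
import java.util.Objects;

final class TableName {
  private final String cat;
  private final String db;
  private final String table;

  TableName(String cat, String db, String table) {
    this.cat = Objects.requireNonNull(cat);
    this.db = Objects.requireNonNull(db);
    this.table = Objects.requireNonNull(table);
  }

  /** Parses "table", "db.table", or "cat.db.table", filling in the given defaults. */
  static TableName fromString(String name, String defaultCat, String defaultDb) {
    String[] parts = name.split("\\.");
    switch (parts.length) {
      case 1: return new TableName(defaultCat, defaultDb, parts[0]);
      case 2: return new TableName(defaultCat, parts[0], parts[1]);
      case 3: return new TableName(parts[0], parts[1], parts[2]);
      default: throw new IllegalArgumentException("Not a table name: " + name);
    }
  }

  String getDb() { return db; }
  String getTable() { return table; }

  /** Legacy "db.table" form, for code that still expects the old-style string. */
  String getDbTable() { return db + "." + table; }

  @Override public String toString() { return cat + "." + db + "." + table; }

  @Override public boolean equals(Object o) {
    if (!(o instanceof TableName)) return false;
    TableName t = (TableName) o;
    return cat.equals(t.cat) && db.equals(t.db) && table.equals(t.table);
  }

  @Override public int hashCode() { return Objects.hash(cat, db, table); }
}
```

With something like this, code that today guesses whether a string is "tablename" or "dbname.tablename" would instead parse once at the boundary and pass the object around.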
[jira] [Updated] (HIVE-19753) Strict managed tables mode in Hive
[ https://issues.apache.org/jira/browse/HIVE-19753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19753: -- Attachment: HIVE-19753.2.patch > Strict managed tables mode in Hive > -- > > Key: HIVE-19753 > URL: https://issues.apache.org/jira/browse/HIVE-19753 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19753.1.patch, HIVE-19753.2.patch > > > Create a mode in Hive which enforces that all managed tables are > transactional (either full ACID or insert-only tables are allowed). Non-transactional > tables, as well as non-native tables, must be created as external tables when > this mode is enabled. > The idea would be that in strict managed tables mode all of the data written > to managed tables would have been written through Hive. > The mode would be enabled using the config setting hive.strict.managed.tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
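The rule this mode would enforce can be sketched as a small validation function (all names below are hypothetical and only for illustration, not the HIVE-19753 patch): with strict managed tables enabled, a managed native table must be transactional; everything else has to be external.

```java
final class StrictManagedTablesCheck {
  static final class TableSpec {
    final boolean external;      // CREATE EXTERNAL TABLE ...
    final boolean nativeTable;   // false for storage-handler (non-native) tables
    final boolean transactional; // full ACID or insert-only
    TableSpec(boolean external, boolean nativeTable, boolean transactional) {
      this.external = external;
      this.nativeTable = nativeTable;
      this.transactional = transactional;
    }
  }

  /** Returns null if the table is allowed, otherwise a human-readable reason. */
  static String validate(boolean strictManagedTables, TableSpec t) {
    if (!strictManagedTables || t.external) {
      return null; // mode disabled, or external tables are always allowed
    }
    if (!t.nativeTable) {
      return "non-native tables must be external in strict managed tables mode";
    }
    if (!t.transactional) {
      return "managed tables must be transactional in strict managed tables mode";
    }
    return null;
  }
}
```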
[jira] [Updated] (HIVE-19753) Strict managed tables mode in Hive
[ https://issues.apache.org/jira/browse/HIVE-19753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19753: -- Status: Patch Available (was: Open) > Strict managed tables mode in Hive > -- > > Key: HIVE-19753 > URL: https://issues.apache.org/jira/browse/HIVE-19753 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19753.1.patch, HIVE-19753.2.patch > > > Create a mode in Hive which enforces that all managed tables are > transactional (either full ACID or insert-only tables are allowed). Non-transactional > tables, as well as non-native tables, must be created as external tables when > this mode is enabled. > The idea would be that in strict managed tables mode all of the data written > to managed tables would have been written through Hive. > The mode would be enabled using the config setting hive.strict.managed.tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19769) Create dedicated objects for DB and Table names
[ https://issues.apache.org/jira/browse/HIVE-19769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506819#comment-16506819 ] Hive QA commented on HIVE-19769: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 22s{color} | {color:blue} storage-api in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 26s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 35s{color} | {color:blue} standalone-metastore in master has 216 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hcatalog-unit in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 49s{color} | {color:red} ql in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} storage-api: The patch generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s{color} | {color:red} ql: The patch generated 1 new + 70 unchanged - 1 fixed = 71 total (was 71) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s{color} | {color:red} standalone-metastore: The patch generated 1 new + 1398 unchanged - 7 fixed = 1399 total (was 1405) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 37s{color} | {color:red} ql generated 1 new + 2283 unchanged - 1 fixed = 2284 total (was 2284) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 31s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Dead store to writeIds in org.apache.hadoop.hive.ql.stats.StatsUpdaterThread.processOneTable(TableName) At StatsUpdaterThread.java:org.apache.hadoop.hive.ql.stats.StatsUpdaterThread.processOneTable(TableName) At StatsUpdaterThread.java:[line 221] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11645/dev-support/hive-personality.sh | | git revision | master / 6454585 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-11645/yetus/patch-mvninstall-itests_hcatalog-unit.txt | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-11645/yetus/patch-mvninstall-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11645/yetus/diff-checkstyle-storage-api.txt | |
[jira] [Comment Edited] (HIVE-19723) Arrow serde: "Unsupported data type: Timestamp(NANOSECOND, null)"
[ https://issues.apache.org/jira/browse/HIVE-19723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506809#comment-16506809 ] Eric Wohlstadter edited comment on HIVE-19723 at 6/9/18 4:24 AM: - [~teddy.choi] Serializer needs to create a {{TimeStampMicroTZVector}} instead of {{TimeStampMicroVector}}. See: {{org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowColumnVector(ValueVector vector)}} Can you create a new JIRA for that? was (Author: ewohlstadter): [~teddy.choi] Serializer needs to create a {{TimeStampMicroTZVector}} instead of {{TimeStampMicroVector}}. See: {{org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowColumnVector(ValueVector vector)}} > Arrow serde: "Unsupported data type: Timestamp(NANOSECOND, null)" > - > > Key: HIVE-19723 > URL: https://issues.apache.org/jira/browse/HIVE-19723 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-19723.1.patch, HIVE-19723.3.patch, > HIVE-19723.4.patch, HIVE-19732.2.patch > > > Spark's Arrow support only provides Timestamp at MICROSECOND granularity. > Spark 2.3.0 won't accept NANOSECOND. Switch it back to MICROSECOND. > The unit test org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow will just need > to change the assertion to test microsecond. And we'll need to add this to > documentation on supported datatypes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19723) Arrow serde: "Unsupported data type: Timestamp(NANOSECOND, null)"
[ https://issues.apache.org/jira/browse/HIVE-19723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506809#comment-16506809 ] Eric Wohlstadter commented on HIVE-19723: - [~teddy.choi] Serializer needs to create a {{TimeStampMicroTZVector}} instead of {{TimeStampMicroVector}}. See: {{org.apache.spark.sql.vectorized.ArrowColumnVector.ArrowColumnVector(ValueVector vector)}} > Arrow serde: "Unsupported data type: Timestamp(NANOSECOND, null)" > - > > Key: HIVE-19723 > URL: https://issues.apache.org/jira/browse/HIVE-19723 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-19723.1.patch, HIVE-19723.3.patch, > HIVE-19723.4.patch, HIVE-19732.2.patch > > > Spark's Arrow support only provides Timestamp at MICROSECOND granularity. > Spark 2.3.0 won't accept NANOSECOND. Switch it back to MICROSECOND. > The unit test org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow will just need > to change the assertion to test microsecond. And we'll need to add this to > documentation on supported datatypes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19825) HiveServer2 leader selection shall use different zookeeper znode
[ https://issues.apache.org/jira/browse/HIVE-19825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506807#comment-16506807 ] Hive QA commented on HIVE-19825: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926936/HIVE-19825.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14512 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_aggregate] (batchId=158) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11643/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11643/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11643/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12926936 - PreCommit-HIVE-Build > HiveServer2 leader selection shall use different zookeeper znode > > > Key: HIVE-19825 > URL: https://issues.apache.org/jira/browse/HIVE-19825 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19825.1.patch > > > Currently, HiveServer2 leader selection (used only by privilegesynchronizer > now) reuses the /hiveserver2 parent znode, which is already used for HiveServer2 > service discovery. This interferes with service discovery. I'd like to switch > to a different znode, /hiveserver2-leader. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19825) HiveServer2 leader selection shall use different zookeeper znode
[ https://issues.apache.org/jira/browse/HIVE-19825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506798#comment-16506798 ] Hive QA commented on HIVE-19825: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} service in master has 48 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} service in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 19s{color} | {color:red} service in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 19s{color} | {color:red} service in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s{color} | {color:red} service in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11643/dev-support/hive-personality.sh | | git revision | master / 6454585 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-11643/yetus/patch-mvninstall-service.txt | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-11643/yetus/patch-compile-service.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-11643/yetus/patch-compile-service.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-11643/yetus/patch-findbugs-service.txt | | modules | C: common service U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11643/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > HiveServer2 leader selection shall use different zookeeper znode > > > Key: HIVE-19825 > URL: https://issues.apache.org/jira/browse/HIVE-19825 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19825.1.patch > > > Currently, HiveServer2 leader selection (used only by privilegesynchronizer > now) reuses the /hiveserver2 parent znode, which is already used for HiveServer2 > service discovery. This interferes with service discovery. I'd like to switch > to a different znode, /hiveserver2-leader. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-19839) sssssssssssss
[ https://issues.apache.org/jira/browse/HIVE-19839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Wohlstadter resolved HIVE-19839. - Resolution: Invalid > s > - > > Key: HIVE-19839 > URL: https://issues.apache.org/jira/browse/HIVE-19839 > Project: Hive > Issue Type: Bug > Components: Hive, UDF >Affects Versions: 2.3.1 >Reporter: sadashiv >Priority: Major > Fix For: 0.10.1 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17852) remove support for list bucketing "stored as directories" in 3.0
[ https://issues.apache.org/jira/browse/HIVE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506797#comment-16506797 ] Hive QA commented on HIVE-17852: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926905/HIVE-17852.07.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11642/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11642/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11642/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-09 03:09:36.278 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-11642/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-09 03:09:36.280 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 6454585 HIVE-19776 : HiveServer2.startHiveServer2 retries of start has concurrency issues (Thejas Nair, reviewed by Daniel Dai) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 6454585 HIVE-19776 : HiveServer2.startHiveServer2 retries of start has concurrency issues (Thejas Nair, reviewed by Daniel Dai) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-09 03:09:37.815 + rm -rf ../yetus_PreCommit-HIVE-Build-11642 + mkdir ../yetus_PreCommit-HIVE-Build-11642 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-11642 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11642/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java:30 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java' with conflicts. Going to apply patch with: git apply -p0 /data/hiveptest/working/scratch/build.patch:3777: trailing whitespace. numFiles1 /data/hiveptest/working/scratch/build.patch:3837: trailing whitespace. numFiles1 /data/hiveptest/working/scratch/build.patch:3841: trailing whitespace. totalSize 5293 /data/hiveptest/working/scratch/build.patch:3880: trailing whitespace. numFiles1 /data/hiveptest/working/scratch/build.patch:3901: trailing whitespace. 
numFiles1 error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java:30 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java' with conflicts. U ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java warning: squelched 46 whitespace errors warning: 51 lines add whitespace errors. + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-11642 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12926905 - PreCommit-HIVE-Build > remove support for list bucketing "stored as directories" in 3.0 > > > Key: HIVE-17852 > URL: https://issues.apache.org/jira/browse/HIVE-17852 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Laszlo Bodor >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-17852.01.patch, HIVE-17852.02.patch, >
[jira] [Commented] (HIVE-16505) Support "unknown" boolean truth value
[ https://issues.apache.org/jira/browse/HIVE-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506796#comment-16506796 ] Hive QA commented on HIVE-16505: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926902/HIVE-16505.04.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14513 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11641/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11641/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11641/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12926902 - PreCommit-HIVE-Build > Support "unknown" boolean truth value > - > > Key: HIVE-16505 > URL: https://issues.apache.org/jira/browse/HIVE-16505 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Zoltan Haindrich >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-16505.01.patch, HIVE-16505.02.patch, > HIVE-16505.03.patch, HIVE-16505.04.patch > > > according to the standard, boolean truth value might be: > {{TRUE|FALSE|UNKNOWN}}. > similar queries to the following should be supported: > {code:java} > select 1 where null is unknown; > select 1 where (select cast(null as boolean)) is unknown; > {code} > "unknown" behaves similarly to null. {{(null=null) is null}} > > "All boolean values and SQL truth values are comparable and all are > assignable to a site of type boolean. 
The value True is greater than the value False, and any comparison involving the null value or an Unknown truth value will return an Unknown result. The values True and False may be assigned to any site having a boolean data type; assignment of Unknown, or the null value, is subject to the nullability characteristic of the target."
>
> *Truth table for the AND boolean operator*
> || AND || True || False || Unknown ||
> | True | True | False | Unknown |
> | False | False | False | False |
> | Unknown | Unknown | False | Unknown |
>
> *Truth table for the OR boolean operator*
> || OR || True || False || Unknown ||
> | True | True | True | True |
> | False | True | False | Unknown |
> | Unknown | True | Unknown | Unknown |
>
> *Truth table for the IS boolean operator*
> || IS || TRUE || FALSE || UNKNOWN ||
> | True | True | False | False |
> | False | False | True | False |
> | Unknown | False | False | True |
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
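The three truth tables above can be modeled directly in code. A plain Java sketch (a reference model of SQL three-valued logic, not Hive's implementation): UNKNOWN propagates through AND/OR the way NULL does, while IS compares truth values exactly and never returns UNKNOWN.

```java
enum TruthValue {
  TRUE, FALSE, UNKNOWN;

  // AND: FALSE dominates, then UNKNOWN, then TRUE.
  TruthValue and(TruthValue o) {
    if (this == FALSE || o == FALSE) return FALSE;
    if (this == UNKNOWN || o == UNKNOWN) return UNKNOWN;
    return TRUE;
  }

  // OR: TRUE dominates, then UNKNOWN, then FALSE.
  TruthValue or(TruthValue o) {
    if (this == TRUE || o == TRUE) return TRUE;
    if (this == UNKNOWN || o == UNKNOWN) return UNKNOWN;
    return FALSE;
  }

  // "x IS y": TRUE when the truth values match, FALSE otherwise; never UNKNOWN.
  TruthValue is(TruthValue o) {
    return this == o ? TRUE : FALSE;
  }
}
```

This is why {{(select cast(null as boolean)) is unknown}} yields TRUE: IS is the one operator that turns an UNKNOWN operand into a definite answer.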
[jira] [Commented] (HIVE-19837) Setting to have different default location for external tables
[ https://issues.apache.org/jira/browse/HIVE-19837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506791#comment-16506791 ] Jason Dere commented on HIVE-19837: --- RB at https://reviews.apache.org/r/67518/ cc [~ashutoshc] > Setting to have different default location for external tables > -- > > Key: HIVE-19837 > URL: https://issues.apache.org/jira/browse/HIVE-19837 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19837.1.patch > > > Allow external tables to have a different default location than managed tables -- This message was sent by Atlassian JIRA (v7.6.3#76005)
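A rough sketch of the behavior being proposed (illustrative only: the external-warehouse config key below is made up for this example; hive.metastore.warehouse.dir is the existing managed-warehouse setting): when no LOCATION clause is given, an external table would resolve its default path under a separate external root instead of the managed warehouse directory.

```java
import java.util.Map;

final class DefaultTableLocation {
  /**
   * Picks the default path for a new table: external tables use the external
   * warehouse root when one is configured; otherwise both table kinds fall
   * back to the managed warehouse root.
   */
  static String defaultLocation(Map<String, String> conf, String dbName,
      String tableName, boolean external) {
    String managedRoot = conf.get("hive.metastore.warehouse.dir");
    // Hypothetical key for illustration, not an actual Hive setting here:
    String externalRoot = conf.get("hive.metastore.warehouse.external.dir");
    String root = (external && externalRoot != null) ? externalRoot : managedRoot;
    return root + "/" + dbName + ".db/" + tableName;
  }
}
```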
[jira] [Updated] (HIVE-19837) Setting to have different default location for external tables
[ https://issues.apache.org/jira/browse/HIVE-19837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19837: -- Status: Patch Available (was: Open) > Setting to have different default location for external tables > -- > > Key: HIVE-19837 > URL: https://issues.apache.org/jira/browse/HIVE-19837 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19837.1.patch > > > Allow external tables to have a different default location than managed tables -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19837) Setting to have different default location for external tables
[ https://issues.apache.org/jira/browse/HIVE-19837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19837: -- Attachment: HIVE-19837.1.patch > Setting to have different default location for external tables > -- > > Key: HIVE-19837 > URL: https://issues.apache.org/jira/browse/HIVE-19837 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19837.1.patch > > > Allow external tables to have a different default location than managed tables -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18875) Enable SMB Join by default in Tez
[ https://issues.apache.org/jira/browse/HIVE-18875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18875: -- Attachment: HIVE-18875.14.patch > Enable SMB Join by default in Tez > - > > Key: HIVE-18875 > URL: https://issues.apache.org/jira/browse/HIVE-18875 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18875.1.patch, HIVE-18875.10.patch, > HIVE-18875.11.patch, HIVE-18875.12.patch, HIVE-18875.13.patch, > HIVE-18875.14.patch, HIVE-18875.2.patch, HIVE-18875.3.patch, > HIVE-18875.4.patch, HIVE-18875.5.patch, HIVE-18875.6.patch, > HIVE-18875.7.patch, HIVE-18875.8.patch, HIVE-18875.9.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16505) Support "unknown" boolean truth value
[ https://issues.apache.org/jira/browse/HIVE-16505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506788#comment-16506788 ] Hive QA commented on HIVE-16505: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11641/dev-support/hive-personality.sh | | git revision | master / 6454585 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11641/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Support "unknown" boolean truth value > - > > Key: HIVE-16505 > URL: https://issues.apache.org/jira/browse/HIVE-16505 > Project: Hive > Issue Type: Sub-task > Components: SQL >Reporter: Zoltan Haindrich >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-16505.01.patch, HIVE-16505.02.patch, > HIVE-16505.03.patch, HIVE-16505.04.patch > > > according to the standard, boolean truth value might be: > {{TRUE|FALSE|UNKNOWN}}. > similar queries to the following should be supported: > {code:java} > select 1 where null is unknown; > select 1 where (select cast(null as boolean) ) is unknown; > {code} > "unknown" behaves similarily to null. {{(null=null) is null}} > > "All boolean values and SQL truth values are comparable and all are > assignable to a site of type boolean. The value True is greater than the > value False, and any comparison involving the null value or an Unknown truth > value will return an Unknown result. The values True and False may be > assigned to any site having a boolean data type; assignment of Unknown, or > the null value, is subject to the nullability characteristic of the target." > > *Truth table for the AND boolean operator* > AND True False Unknown > True True False Unknown > False False False False > Unknown Unknown False Unknown > *Truth table for the OR boolean operator* > OR True False Unknown > True True True True > False True False Unknown > Unknown True Unknown Unknown > *Truth table for the IS boolean operator* > IS TRUE FALSE UNKNOWN > True True False False > False False True False > Unknown False False True > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19823) BytesBytesMultiHashMap estimation should account for loadFactor
[ https://issues.apache.org/jira/browse/HIVE-19823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506786#comment-16506786 ] Hive QA commented on HIVE-19823: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926900/HIVE-19823.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14513 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] (batchId=255) org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure (batchId=261) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11640/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11640/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11640/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12926900 - PreCommit-HIVE-Build > BytesBytesMultiHashMap estimation should account for loadFactor > --- > > Key: HIVE-19823 > URL: https://issues.apache.org/jira/browse/HIVE-19823 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19823.01.patch > > > it could happen that the capacity is known beforehand; and the estimated size > of the hashtable is accurate. but still; because after some time the element > count violates loadfactor ratio a rehash will occur. 
> By default this can happen with a {{1 - loadFactor = 25%}} probability.
> This rehashing takes around 2 seconds on my system for 6.5M entries:
> https://github.com/apache/hive/blob/cfd57348c1ac188e0ba131d5636a62ff7b7c27be/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/BytesBytesMultiHashMap.java#L176-L187
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
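The sizing mistake described in the issue can be illustrated with a short sketch (not Hive's actual code): if the requested capacity merely equals the expected entry count, the load-factor threshold is crossed before all entries are inserted, forcing exactly the rehash the report mentions. The helper below assumes a power-of-two table and a 0.75 load factor:

```java
// Sketch of load-factor-aware capacity estimation (assumed helper names,
// not Hive's BytesBytesMultiHashMap code).
public class CapacityEstimate {

    // Next power of two >= n, as hash tables typically require.
    static int nextPowerOfTwo(int n) {
        int p = Integer.highestOneBit(Math.max(1, n));
        return p == n ? p : p << 1;
    }

    // Naive sizing: capacity derived from expectedEntries alone. Once the
    // element count exceeds capacity * loadFactor, the table rehashes -- with
    // loadFactor 0.75 that happens whenever utilization would land in the
    // top 25% of the table.
    static int naiveCapacity(int expectedEntries) {
        return nextPowerOfTwo(expectedEntries);
    }

    // Load-factor-aware sizing: divide by loadFactor first, then round up,
    // so inserting expectedEntries never crosses the rehash threshold.
    static int safeCapacity(int expectedEntries, float loadFactor) {
        return nextPowerOfTwo((int) Math.ceil(expectedEntries / loadFactor));
    }

    public static void main(String[] args) {
        int entries = 6_500_000; // the issue's example size
        System.out.println(naiveCapacity(entries));       // 8388608; threshold 6291456 < 6.5M -> rehash
        System.out.println(safeCapacity(entries, 0.75f)); // 16777216; threshold 12582912 -> no rehash
    }
}
```

For the issue's 6.5M-entry example, the naive capacity's threshold (8388608 × 0.75 = 6291456) is below the entry count, so the expensive rehash fires even though the size estimate was accurate.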
[jira] [Commented] (HIVE-19823) BytesBytesMultiHashMap estimation should account for loadFactor
[ https://issues.apache.org/jira/browse/HIVE-19823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506763#comment-16506763 ] Hive QA commented on HIVE-19823: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 32s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} ql: The patch generated 0 new + 40 unchanged - 1 fixed = 40 total (was 41) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11640/dev-support/hive-personality.sh | | git revision | master / 6454585 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11640/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > BytesBytesMultiHashMap estimation should account for loadFactor > --- > > Key: HIVE-19823 > URL: https://issues.apache.org/jira/browse/HIVE-19823 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19823.01.patch > > > it could happen that the capacity is known beforehand; and the estimated size > of the hashtable is accurate. but still; because after some time the element > count violates loadfactor ratio a rehash will occur. 
> By default this can happen with a {{1 - loadFactor = 25%}} probability.
> This rehashing takes around 2 seconds on my system for 6.5M entries:
> https://github.com/apache/hive/blob/cfd57348c1ac188e0ba131d5636a62ff7b7c27be/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/BytesBytesMultiHashMap.java#L176-L187
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19824) Improve online datasize estimations for MapJoins
[ https://issues.apache.org/jira/browse/HIVE-19824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506753#comment-16506753 ] Hive QA commented on HIVE-19824: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926898/HIVE-19824.01wip01.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11639/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11639/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11639/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12926898/HIVE-19824.01wip01.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12926898 - PreCommit-HIVE-Build > Improve online datasize estimations for MapJoins > > > Key: HIVE-19824 > URL: https://issues.apache.org/jira/browse/HIVE-19824 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19824.01wip01.patch, HIVE-19824.01wip01.patch > > > Statistics.datasize() only accounts for "real" data size; but for example > handling 1M rows might introduce some datastructure overhead...if the "real" > data is small - even this overhead might become the real memory usage > for 6.5M rows of (int,int) the estimation is 52MB > in reality this eats up ~260MB from which 210MB is used to service the > hashmap functionality to that many rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19824) Improve online datasize estimations for MapJoins
[ https://issues.apache.org/jira/browse/HIVE-19824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506752#comment-16506752 ] Hive QA commented on HIVE-19824: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926898/HIVE-19824.01wip01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 14512 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_nullscan] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_max_hashtable] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin_mapjoin] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two] (batchId=183) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] (batchId=255) org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitions (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent (batchId=242) 
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes (batchId=242) org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11638/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11638/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11638/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 18 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12926898 - PreCommit-HIVE-Build > Improve online datasize estimations for MapJoins > > > Key: HIVE-19824 > URL: https://issues.apache.org/jira/browse/HIVE-19824 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19824.01wip01.patch, HIVE-19824.01wip01.patch > > > Statistics.datasize() only accounts for "real" data size; but for example > handling 1M rows might introduce some datastructure overhead...if the "real" > data is small - even this overhead might become the real memory usage > for 6.5M rows of (int,int) the estimation is 52MB > in reality this eats up ~260MB from which 210MB is used to service the > hashmap functionality to that many rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
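The 52 MB vs. ~260 MB gap the issue describes can be reproduced with back-of-the-envelope arithmetic. The per-row overhead constant below is an assumed round figure chosen only to match the numbers in the report (6.5M × 32 B ≈ the 210 MB of hashmap overhead), not a value taken from Hive:

```java
// Back-of-the-envelope sketch of "real" data size vs. online (in-memory) size
// for a hash map of 6.5M (int, int) rows. Constants are illustrative assumptions.
public class OnlineSizeEstimate {
    // Assumed per-row data structure overhead: references, hash codes, slot padding.
    static final long ASSUMED_OVERHEAD_PER_ROW = 32;

    // Plain data size: the kind of figure Statistics.datasize()-style accounting reports.
    static long rawDataSize(long rows, long bytesPerRow) {
        return rows * bytesPerRow;
    }

    // Online size: raw data plus per-row data structure overhead.
    static long onlineDataSize(long rows, long bytesPerRow) {
        return rawDataSize(rows, bytesPerRow) + rows * ASSUMED_OVERHEAD_PER_ROW;
    }

    public static void main(String[] args) {
        long rows = 6_500_000;
        long bytesPerRow = 8; // one (int, int) pair
        System.out.println(rawDataSize(rows, bytesPerRow) / 1_000_000 + " MB");    // 52 MB, the issue's estimate
        System.out.println(onlineDataSize(rows, bytesPerRow) / 1_000_000 + " MB"); // 260 MB, the observed usage
    }
}
```

The point of the sketch: when rows are tiny, the fixed per-row overhead dominates, so an estimator that only counts payload bytes underestimates memory by roughly 5x here.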
[jira] [Commented] (HIVE-19777) NPE in TezSessionState
[ https://issues.apache.org/jira/browse/HIVE-19777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506731#comment-16506731 ] Sergey Shelukhin commented on HIVE-19777: - I don't think the change in HIVE-18990 is correct... close is supposed to already take care of all the stuff getSession does. Also it makes no sense imo to wait for AM when user wants to cancel the session... killing it seems to me to be the right thing to do. At any rate, if getSession is called all the other state-related code in close is not needed. > NPE in TezSessionState > -- > > Key: HIVE-19777 > URL: https://issues.apache.org/jira/browse/HIVE-19777 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Jason Dere >Assignee: Igor Kryvenko >Priority: Major > > Encountered while running "insert into table values (..)" > Looks like it is due to the fact that TezSessionState.close() sets console to > null at the start of the method, and then calls getSession() which attempts > to log to console. 
> {noformat} > java.lang.NullPointerException: null > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.getSession(TezSessionState.java:711) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionState.close(TezSessionState.java:646) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.closeIfNotDefault(TezSessionPoolManager.java:353) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:467) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManagerFederation.getUnmanagedSession(WorkloadManagerFederation.java:66) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.WorkloadManagerFederation.getSession(WorkloadManagerFederation.java:38) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:184) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2497) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2149) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1826) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1569) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1563) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:218) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > ~[hive-cli-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > ~[hive-cli-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > ~[hive-cli-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) > ~[hive-cli-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) > ~[hive-cli-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) > ~[hive-cli-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_121] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_121] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_121] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121] > at org.apache.hadoop.util.RunJar.run(RunJar.java:308) > ~[hadoop-common-3.0.0.3.0.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.util.RunJar.main(RunJar.java:222) > ~[hadoop-common-3.0.0.3.0.0.0-SNAPSHOT.jar:?] > {noformat} -- This message was sent by Atlassian JIRA
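The failure mode described above boils down to a field being cleared before its last use: close() nulls the console and then calls getSession(), which still logs to it. A simplified stand-in (the names below are hypothetical, not the actual TezSessionState code) showing a null-guard style fix:

```java
// Minimal sketch of the NPE pattern from the stack trace: a cleanup method
// nulls a field, then indirectly dereferences it. Names are simplified
// stand-ins for TezSessionState's console/getSession, not the real Hive code.
public class SessionCloseSketch {
    private StringBuilder console = new StringBuilder(); // stand-in for the console logger

    // Returns whether the message was actually logged.
    private boolean logToConsole(String msg) {
        if (console == null) {
            return false; // the fix: guard the field instead of throwing NPE
        }
        console.append(msg).append('\n');
        return true;
    }

    // Buggy ordering preserved deliberately: the field is cleared at the start
    // of close(), as the issue describes, so the later log call finds it null.
    public boolean close() {
        console = null;
        return logToConsole("closing session"); // false (skipped), not an NPE
    }
}
```

An alternative fix with the same effect is to defer clearing the field until the end of close(), after the last method that uses it.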
[jira] [Commented] (HIVE-19824) Improve online datasize estimations for MapJoins
[ https://issues.apache.org/jira/browse/HIVE-19824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506724#comment-16506724 ] Hive QA commented on HIVE-19824: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 24s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} ql: The patch generated 0 new + 27 unchanged - 1 fixed = 27 total (was 28) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11638/dev-support/hive-personality.sh | | git revision | master / 6454585 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11638/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Improve online datasize estimations for MapJoins > > > Key: HIVE-19824 > URL: https://issues.apache.org/jira/browse/HIVE-19824 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19824.01wip01.patch, HIVE-19824.01wip01.patch > > > Statistics.datasize() only accounts for "real" data size; but for example > handling 1M rows might introduce some datastructure overhead...if the "real" > data is small - even this overhead might become the real memory usage > for 6.5M rows of (int,int) the estimation is 52MB > in reality this eats up ~260MB from which 210MB is used to service the > hashmap functionality to that many rows. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19830) Inconsistent behavior when multiple partitions point to the same location
[ https://issues.apache.org/jira/browse/HIVE-19830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506717#comment-16506717 ] Sergey Shelukhin commented on HIVE-19830: - The 2nd issue is by design. Having partitions like that is not supported for regular tables in Hive. It's assumed that the data is managed by Hive, so Hive deletes the directory when the partition is dropped... for the case where Hive should not manage the data, an external table should be used. The first one does look like it could be a bug for external tables, but again, for regular tables such a use case is not supported.
> Inconsistent behavior when multiple partitions point to the same location
> -
>
> Key: HIVE-19830
> URL: https://issues.apache.org/jira/browse/HIVE-19830
> Project: Hive
> Issue Type: Bug
> Components: Hive
> Affects Versions: 2.4.0
> Reporter: Gabor Kaszab
> Assignee: Adam Szita
> Priority: Major
>
> // Create a table with 2 partitions where both partitions share the same location, and insert a single row into one of them.
> create table test (i int) partitioned by (j int) stored as parquet;
> alter table test add partition (j=1) location 'hdfs://localhost:20500/test-warehouse/test/j=1';
> alter table test add partition (j=2) location 'hdfs://localhost:20500/test-warehouse/test/j=1';
> insert into table test partition (j=1) values (1);
> // select * shows this single row in both partitions, as expected.
> select * from test;
> 1 1
> 1 2
> // However, sum() doesn't add up the row for all the partitions. This is +Issue #1+.
> select sum(i), sum(j) from test;
> 1 2
> // On the file system there is a common dir for the 2 partitions, as expected.
> hdfs dfs -ls hdfs://localhost:20500/test-warehouse/test/
> Found 1 items
> drwxr-xr-x - gaborkaszab supergroup 0 2018-06-08 10:54 hdfs://localhost:20500/test-warehouse/test/j=1
> // Let's drop one of the partitions now!
> alter table test drop partition (j=2);
> // Running the same hdfs dfs -ls command shows that the j=1 directory is dropped. I think this is good behavior; we just have to document that it is the expected case.
> // select * from test; returns zero rows, still as expected.
> // Even though the dir is dropped, the j=1 partition is still visible with show partitions. This is +Issue #2+.
> show partitions test;
> j=1
> After dropping the directory with Hive, when Impala reloads its partitions it asks Hive for the list of existing partitions. Apparently, Hive sends down a list with the j=1 partition included, and Impala then takes it as an existing one and doesn't drop it from the Catalog's cache. Hive shouldn't send that partition down. This is +Issue #3+.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19378) "hive.lock.numretries" Is Misleading
[ https://issues.apache.org/jira/browse/HIVE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19378: - Attachment: HIVE-19378.1.patch
> "hive.lock.numretries" Is Misleading
>
> Key: HIVE-19378
> URL: https://issues.apache.org/jira/browse/HIVE-19378
> Project: Hive
> Issue Type: Improvement
> Components: HiveServer2
> Affects Versions: 3.0.0, 2.4.0
> Reporter: BELUGA BEHR
> Assignee: Alice Fan
> Priority: Minor
> Attachments: HIVE-19378.1.patch
>
> Configuration 'hive.lock.numretries' is confusing. It is not actually a 'retry' count; it is the total number of attempts made:
> {code:java|title=ZooKeeperHiveLockManager.java}
> do {
>   lastException = null;
>   tryNum++;
>   try {
>     if (tryNum > 1) {
>       Thread.sleep(sleepTime);
>       prepareRetry();
>     }
>     ret = lockPrimitive(key, mode, keepAlive, parentCreated, conflictingLocks);
>   ...
> } while (tryNum < numRetriesForLock);
> {code}
> From this code you can see that on the first loop {{tryNum}} is set to 1; in that case, if the num*retries* configuration is set to 1, there will be one attempt in total. With a *retry* value of 1, I would assume one initial attempt and one additional retry. Please change to:
> {code}
> while (tryNum <= numRetriesForLock);
> {code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
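The off-by-one in the quoted loop is easy to demonstrate once the lock call is stubbed out. This is a simplified stand-in for the loop structure, not the actual ZooKeeperHiveLockManager code:

```java
// Counts total lock attempts for both loop conditions, assuming the lock
// attempt (lockPrimitive in the real code) fails every time.
public class RetryCount {

    // Current behavior: tryNum < numRetries. Exactly numRetries attempts in
    // total, so numRetries = 1 means a single attempt and no retry at all.
    static int attemptsExclusive(int numRetries) {
        int tryNum = 0, attempts = 0;
        do {
            tryNum++;
            attempts++; // stands in for a failing lockPrimitive(...) call
        } while (tryNum < numRetries);
        return attempts;
    }

    // Proposed behavior: tryNum <= numRetries. One initial attempt plus
    // numRetries retries, matching what the name "numretries" suggests.
    static int attemptsInclusive(int numRetries) {
        int tryNum = 0, attempts = 0;
        do {
            tryNum++;
            attempts++;
        } while (tryNum <= numRetries);
        return attempts;
    }

    public static void main(String[] args) {
        System.out.println(attemptsExclusive(1)); // 1: "one retry" yields a single attempt
        System.out.println(attemptsInclusive(1)); // 2: initial attempt + one retry
    }
}
```

With the `<=` condition the attempt count becomes numRetries + 1, which is what a user setting "number of retries" would reasonably expect.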
[jira] [Updated] (HIVE-19378) "hive.lock.numretries" Is Misleading
[ https://issues.apache.org/jira/browse/HIVE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19378: - Attachment: (was: HIVE-19378.1.patch)
> "hive.lock.numretries" Is Misleading
>
> Key: HIVE-19378
> URL: https://issues.apache.org/jira/browse/HIVE-19378
> Project: Hive
> Issue Type: Improvement
> Components: HiveServer2
> Affects Versions: 3.0.0, 2.4.0
> Reporter: BELUGA BEHR
> Assignee: Alice Fan
> Priority: Minor
>
> Configuration 'hive.lock.numretries' is confusing. It is not actually a 'retry' count; it is the total number of attempts made:
> {code:java|title=ZooKeeperHiveLockManager.java}
> do {
>   lastException = null;
>   tryNum++;
>   try {
>     if (tryNum > 1) {
>       Thread.sleep(sleepTime);
>       prepareRetry();
>     }
>     ret = lockPrimitive(key, mode, keepAlive, parentCreated, conflictingLocks);
>   ...
> } while (tryNum < numRetriesForLock);
> {code}
> From this code you can see that on the first loop {{tryNum}} is set to 1; in that case, if the num*retries* configuration is set to 1, there will be one attempt in total. With a *retry* value of 1, I would assume one initial attempt and one additional retry. Please change to:
> {code}
> while (tryNum <= numRetriesForLock);
> {code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19237) Only use an operatorId once in a plan
[ https://issues.apache.org/jira/browse/HIVE-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506705#comment-16506705 ] Hive QA commented on HIVE-19237: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12927128/HIVE-19237.12.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14512 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11637/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11637/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11637/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12927128 - PreCommit-HIVE-Build > Only use an operatorId once in a plan > - > > Key: HIVE-19237 > URL: https://issues.apache.org/jira/browse/HIVE-19237 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19237.01.patch, HIVE-19237.02.patch, > HIVE-19237.03.patch, HIVE-19237.04.patch, HIVE-19237.05.patch, > HIVE-19237.05.patch, HIVE-19237.06.patch, HIVE-19237.07.patch, > HIVE-19237.08.patch, HIVE-19237.08.patch, HIVE-19237.09.patch, > HIVE-19237.10.patch, HIVE-19237.10.patch, HIVE-19237.11.patch, > HIVE-19237.11.patch, HIVE-19237.11.patch, HIVE-19237.12.patch > > > Column stats autogather plan part is added from a plan compiled by the driver > itself; however that driver starts to use operatorIds from 1 ; so it's > possible that 2 SEL_1 operators end up in the same plan... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19203) Thread-Safety Issue in HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-19203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19203: - Attachment: HIVE-19203.1.patch > Thread-Safety Issue in HiveMetaStore > > > Key: HIVE-19203 > URL: https://issues.apache.org/jira/browse/HIVE-19203 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Alice Fan >Priority: Minor > Attachments: HIVE-19203.1.patch > > > [https://github.com/apache/hive/blob/550d1e1196b7c801c572092db974a459aac6c249/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L345-L351] > {code:java} > private static int nextSerialNum = 0; > private static ThreadLocal<Integer> threadLocalId = new > ThreadLocal<Integer>() { > @Override > protected Integer initialValue() { > return nextSerialNum++; > } > };{code} > > {{nextSerialNum}} needs to be an atomic value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
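The race is in the unsynchronized `nextSerialNum++` inside `initialValue()`: two threads initializing their thread-local at the same time can read the same value. A thread-safe variant along the lines the report suggests — a sketch of the idea, not the committed patch — replaces the plain `int` with an `AtomicInteger`:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the fix suggested in HIVE-19203: derive the per-thread id
// from an AtomicInteger so concurrent initialValue() calls cannot race.
// Mirrors the shape of the HiveMetaStore field; not the actual patch.
public class ThreadLocalId {
    private static final AtomicInteger nextSerialNum = new AtomicInteger(0);

    static final ThreadLocal<Integer> threadLocalId =
        ThreadLocal.withInitial(nextSerialNum::getAndIncrement);

    public static int currentId() {
        // Initialized lazily, once per thread; stable thereafter.
        return threadLocalId.get();
    }
}
```

`getAndIncrement()` makes the read-modify-write a single atomic step, so no two threads can be handed the same serial number.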
[jira] [Updated] (HIVE-19203) Thread-Safety Issue in HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-19203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19203: - Attachment: (was: HIVE-19203.1.patch) > Thread-Safety Issue in HiveMetaStore > > > Key: HIVE-19203 > URL: https://issues.apache.org/jira/browse/HIVE-19203 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Alice Fan >Priority: Minor > > [https://github.com/apache/hive/blob/550d1e1196b7c801c572092db974a459aac6c249/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L345-L351] > {code:java} > private static int nextSerialNum = 0; > private static ThreadLocal<Integer> threadLocalId = new > ThreadLocal<Integer>() { > @Override > protected Integer initialValue() { > return nextSerialNum++; > } > };{code} > > {{nextSerialNum}} needs to be an atomic value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19203) Thread-Safety Issue in HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-19203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19203: - Attachment: (was: HIVE-19203.1.patch) > Thread-Safety Issue in HiveMetaStore > > > Key: HIVE-19203 > URL: https://issues.apache.org/jira/browse/HIVE-19203 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Alice Fan >Priority: Minor > Attachments: HIVE-19203.1.patch > > > [https://github.com/apache/hive/blob/550d1e1196b7c801c572092db974a459aac6c249/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L345-L351] > {code:java} > private static int nextSerialNum = 0; > private static ThreadLocal<Integer> threadLocalId = new > ThreadLocal<Integer>() { > @Override > protected Integer initialValue() { > return nextSerialNum++; > } > };{code} > > {{nextSerialNum}} needs to be an atomic value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19203) Thread-Safety Issue in HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-19203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19203: - Attachment: HIVE-19203.1.patch > Thread-Safety Issue in HiveMetaStore > > > Key: HIVE-19203 > URL: https://issues.apache.org/jira/browse/HIVE-19203 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Alice Fan >Priority: Minor > Attachments: HIVE-19203.1.patch > > > [https://github.com/apache/hive/blob/550d1e1196b7c801c572092db974a459aac6c249/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L345-L351] > {code:java} > private static int nextSerialNum = 0; > private static ThreadLocal<Integer> threadLocalId = new > ThreadLocal<Integer>() { > @Override > protected Integer initialValue() { > return nextSerialNum++; > } > };{code} > > {{nextSerialNum}} needs to be an atomic value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19237) Only use an operatorId once in a plan
[ https://issues.apache.org/jira/browse/HIVE-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506696#comment-16506696 ] Hive QA commented on HIVE-19237: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 15s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 30s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 43s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 39s{color} | {color:red} root: The patch generated 3 new + 214 unchanged - 2 fixed = 217 total (was 216) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s{color} | {color:red} ql: The patch generated 3 new + 214 unchanged - 2 fixed = 217 total (was 216) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense xml javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11637/dev-support/hive-personality.sh | | git revision | master / 6454585 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11637/yetus/diff-checkstyle-root.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11637/yetus/diff-checkstyle-ql.txt | | modules | C: . ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11637/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Only use an operatorId once in a plan > - > > Key: HIVE-19237 > URL: https://issues.apache.org/jira/browse/HIVE-19237 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19237.01.patch, HIVE-19237.02.patch, > HIVE-19237.03.patch, HIVE-19237.04.patch, HIVE-19237.05.patch, > HIVE-19237.05.patch, HIVE-19237.06.patch, HIVE-19237.07.patch, > HIVE-19237.08.patch, HIVE-19237.08.patch, HIVE-19237.09.patch, > HIVE-19237.10.patch, HIVE-19237.10.patch, HIVE-19237.11.patch, > HIVE-19237.11.patch, HIVE-19237.11.patch, HIVE-19237.12.patch > > > Column stats autogather plan part is added from a plan compiled by the
[jira] [Commented] (HIVE-19838) simplify & fix ColumnizedDeleteEventRegistry load loop
[ https://issues.apache.org/jira/browse/HIVE-19838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506667#comment-16506667 ] Sergey Shelukhin commented on HIVE-19838: - [~ekoifman] [~teddy.choi] can you take a look? thnx > simplify & fix ColumnizedDeleteEventRegistry load loop > -- > > Key: HIVE-19838 > URL: https://issues.apache.org/jira/browse/HIVE-19838 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19838.patch > > > Apparently sometimes the delete count in ACID stats doesn't match what merger > actually returns. > It could be due to some deltas having duplicate deletes from parallel queries > (I guess?) that are being squashed by the merger or some other reasons beyond > my mortal comprehension. > The loop assumes the merger will return the exact number of records, so it > fails with array index exception. Also, it could actually be done in a single > loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
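The failure mode described above — a preallocated array sized from ACID stats overflowing when the merger returns a different number of records — suggests sizing by what the reader actually yields. The sketch below illustrates that defensive single-loop pattern with an iterator standing in for the merger; names are illustrative, and this is not the real ColumnizedDeleteEventRegistry code:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch of the defensive load pattern implied by HIVE-19838:
// treat the stats-reported count as a capacity hint only, and let the
// collection grow to whatever the reader actually returns.
public class DeleteEventLoad {
    public static List<Long> load(Iterator<Long> records, int statedCount) {
        // statedCount may be wrong (duplicate deletes squashed by the
        // merger, per the report); a growable list cannot overflow the
        // way a fixed-size array indexed by statedCount can.
        List<Long> events = new ArrayList<>(Math.max(statedCount, 0));
        while (records.hasNext()) {
            events.add(records.next());   // single loop, no second pass
        }
        return events;
    }
}
```

If the stated count and the returned count must agree for correctness elsewhere, a mismatch can be logged or asserted after the loop rather than letting an `ArrayIndexOutOfBoundsException` surface mid-read.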
[jira] [Updated] (HIVE-19838) simplify & fix ColumnizedDeleteEventRegistry load loop
[ https://issues.apache.org/jira/browse/HIVE-19838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19838: Status: Patch Available (was: Open) > simplify & fix ColumnizedDeleteEventRegistry load loop > -- > > Key: HIVE-19838 > URL: https://issues.apache.org/jira/browse/HIVE-19838 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19838.patch > > > Apparently sometimes the delete count in ACID stats doesn't match what merger > actually returns. > It could be due to some deltas having duplicate deletes from parallel queries > (I guess?) that are being squashed by the merger or some other reasons beyond > my mortal comprehension. > The loop assumes the merger will return the exact number of records, so it > fails with array index exception. Also, it could actually be done in a single > loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19838) simplify & fix ColumnizedDeleteEventRegistry load loop
[ https://issues.apache.org/jira/browse/HIVE-19838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19838: Attachment: HIVE-19838.patch > simplify & fix ColumnizedDeleteEventRegistry load loop > -- > > Key: HIVE-19838 > URL: https://issues.apache.org/jira/browse/HIVE-19838 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19838.patch > > > Apparently sometimes the delete count in ACID stats doesn't match what merger > actually returns. > It could be due to some deltas having duplicate deletes from parallel queries > (I guess?) that are being squashed by the merger or some other reasons beyond > my mortal comprehension. > The loop assumes the merger will return the exact number of records, so it > fails with array index exception. Also, it could actually be done in a single > loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19838) simplify & fix ColumnizedDeleteEventRegistry load loop
[ https://issues.apache.org/jira/browse/HIVE-19838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19838: Description: Apparently sometimes the delete count in ACID stats doesn't match what merger actually returns. It could be due to some deltas having duplicate deletes from parallel queries (I guess?) that are being squashed by the merger or some other reasons beyond my mortal comprehension. The loop assumes the merger will return the exact number of records, so it fails with array index exception. Also, it could actually be done in a single loop. was: Apparently sometimes the delete count in ACID stats doesn't match what merger actually returns. It could be due to some deltas having duplicate deletes from parallel queries (I guess?) or some other reasons beyond my mortal comprehension. The loop assumes the merger will return the exact number of records, so it fails with array index exception. Also, it could actually be done in a single loop. > simplify & fix ColumnizedDeleteEventRegistry load loop > -- > > Key: HIVE-19838 > URL: https://issues.apache.org/jira/browse/HIVE-19838 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > Apparently sometimes the delete count in ACID stats doesn't match what merger > actually returns. > It could be due to some deltas having duplicate deletes from parallel queries > (I guess?) that are being squashed by the merger or some other reasons beyond > my mortal comprehension. > The loop assumes the merger will return the exact number of records, so it > fails with array index exception. Also, it could actually be done in a single > loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19838) simplify & fix ColumnizedDeleteEventRegistry load loop
[ https://issues.apache.org/jira/browse/HIVE-19838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-19838: --- > simplify & fix ColumnizedDeleteEventRegistry load loop > -- > > Key: HIVE-19838 > URL: https://issues.apache.org/jira/browse/HIVE-19838 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > Apparently sometimes the delete count in ACID stats doesn't match what merger > actually returns. > It could be due to some deltas having duplicate deletes from parallel queries > (I guess?) or some other reasons beyond my mortal comprehension. > The loop assumes the merger will return the exact number of records, so it > fails with array index exception. Also, it could actually be done in a single > loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19837) Setting to have different default location for external tables
[ https://issues.apache.org/jira/browse/HIVE-19837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reassigned HIVE-19837: - > Setting to have different default location for external tables > -- > > Key: HIVE-19837 > URL: https://issues.apache.org/jira/browse/HIVE-19837 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > Allow external tables to have a different default location than managed tables -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19783) Retrieve only locations in HiveMetaStore.dropPartitionsAndGetLocations
[ https://issues.apache.org/jira/browse/HIVE-19783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506655#comment-16506655 ] Hive QA commented on HIVE-19783: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926867/HIVE-19783.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11635/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11635/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11635/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12926867/HIVE-19783.2.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12926867 - PreCommit-HIVE-Build > Retrieve only locations in HiveMetaStore.dropPartitionsAndGetLocations > -- > > Key: HIVE-19783 > URL: https://issues.apache.org/jira/browse/HIVE-19783 > Project: Hive > Issue Type: Improvement >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19783.2.patch, HIVE-19783.patch > > > Optimize further the dropTable command. > Currently {{HiveMetaStore.dropPartitionsAndGetLocations}} retrieves the whole > partition object, but we need only the locations instead. > Create a RawStore method to retrieve only the locations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506654#comment-16506654 ] Hive QA commented on HIVE-19629: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926864/HIVE-19629.10.patch {color:green}SUCCESS:{color} +1 due to 26 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 14514 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_expressions] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_math_funcs] (batchId=24) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_6] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_expressions] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_math_funcs] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction2] (batchId=165) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_3] (batchId=105) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11634/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11634/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11634/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 12 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12926864 - PreCommit-HIVE-Build > Enable Decimal64 reader after orc version upgrade > - > > Key: HIVE-19629 > URL: https://issues.apache.org/jira/browse/HIVE-19629 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19629.1.patch, HIVE-19629.10.patch, > HIVE-19629.2.patch, HIVE-19629.3.patch, HIVE-19629.4.patch, > HIVE-19629.5.patch, HIVE-19629.6.patch, HIVE-19629.7.patch, > HIVE-19629.8.patch, HIVE-19629.9.patch > > > ORC 1.5.0 supports the new fast decimal 64 reader. A new VRB has to be created for > making use of decimal 64 column vectors. Also LLAP IO will need a new reader > to read from the long stream to decimal 64. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19776) HiveServer2.startHiveServer2 retries of start has concurrency issues
[ https://issues.apache.org/jira/browse/HIVE-19776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-19776: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to master and branch-3. Thanks for the review [~daijy] > HiveServer2.startHiveServer2 retries of start has concurrency issues > > > Key: HIVE-19776 > URL: https://issues.apache.org/jira/browse/HIVE-19776 > Project: Hive > Issue Type: Improvement >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19776.1.patch, HIVE-19776.2.patch, > HIVE-19776.3.patch > > > HS2 starts the thrift binary/http servers in background, while it proceeds to > do other setup (eg create zookeeper entries). If there is a ZK error and it > attempts to stop and start in the retry loop within > HiveServer2.startHiveServer2, the retry fails because the thrift server > doesn't get stopped if it was still getting initialized. > The thrift server initialization and stopping needs to be synchronized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
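The synchronization the fix calls for can be sketched with a minimal lifecycle object: start() and stop() share one lock, so a stop() issued by the retry loop either waits for the in-progress start to finish or observes a consistent not-started state. Class and method names below are illustrative, not HiveServer2's actual API:

```java
// Sketch of the start/stop serialization described in HIVE-19776;
// illustrative names, not the real HiveServer2 thrift server code.
public class ThriftServerLifecycle {
    private boolean started = false;

    public synchronized void start() {
        if (started) {
            return;                // already up; nothing to do
        }
        // ... bind sockets, spin up worker threads ...
        started = true;            // only set once startup fully completed
    }

    public synchronized void stop() {
        if (!started) {
            return;                // safe no-op if start never completed
        }
        // ... close sockets, stop workers ...
        started = false;
    }

    public synchronized boolean isStarted() {
        return started;
    }
}
```

Without the shared lock, a stop() racing a half-finished start() can return having torn down nothing, leaving the old server bound to the port when the retry attempts to start again.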
[jira] [Updated] (HIVE-19776) HiveServer2.startHiveServer2 retries of start has concurrency issues
[ https://issues.apache.org/jira/browse/HIVE-19776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-19776: - Fix Version/s: 3.1.0 > HiveServer2.startHiveServer2 retries of start has concurrency issues > > > Key: HIVE-19776 > URL: https://issues.apache.org/jira/browse/HIVE-19776 > Project: Hive > Issue Type: Improvement >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19776.1.patch, HIVE-19776.2.patch, > HIVE-19776.3.patch > > > HS2 starts the thrift binary/http servers in background, while it proceeds to > do other setup (eg create zookeeper entries). If there is a ZK error and it > attempts to stop and start in the retry loop within > HiveServer2.startHiveServer2, the retry fails because the thrift server > doesn't get stopped if it was still getting initialized. > The thrift server initialization and stopping needs to be synchronized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19776) HiveServer2.startHiveServer2 retries of start has concurrency issues
[ https://issues.apache.org/jira/browse/HIVE-19776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-19776: - Fix Version/s: 4.0.0 > HiveServer2.startHiveServer2 retries of start has concurrency issues > > > Key: HIVE-19776 > URL: https://issues.apache.org/jira/browse/HIVE-19776 > Project: Hive > Issue Type: Improvement >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-19776.1.patch, HIVE-19776.2.patch, > HIVE-19776.3.patch > > > HS2 starts the thrift binary/http servers in background, while it proceeds to > do other setup (eg create zookeeper entries). If there is a ZK error and it > attempts to stop and start in the retry loop within > HiveServer2.startHiveServer2, the retry fails because the thrift server > doesn't get stopped if it was still getting initialized. > The thrift server initialization and stopping needs to be synchronized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19813) SessionState.start don't have to be synchronized
[ https://issues.apache.org/jira/browse/HIVE-19813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19813: -- Attachment: HIVE-19813.3.patch > SessionState.start don't have to be synchronized > > > Key: HIVE-19813 > URL: https://issues.apache.org/jira/browse/HIVE-19813 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19813.1.patch, HIVE-19813.2.patch, > HIVE-19813.3.patch > > > This is introduced in HIVE-14690. However, only the check-and-set block needs to be > synchronized, not the whole method. The method will start a Tez AM, which is a > long operation. Making the method synchronized serializes session start and > thus slows down HS2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
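The narrowing described above — lock only the check-and-set, run the slow work outside the lock — can be sketched as follows. This is a simplified illustration of the locking shape, not the actual SessionState patch, and the names are hypothetical:

```java
// Sketch of the narrowed synchronization described in HIVE-19813:
// a short critical section claims the start, then the long-running
// work (a Tez AM launch in the real code) proceeds unlocked.
public class SessionStarter {
    private static final Object lock = new Object();
    private static boolean claimed = false;

    public static void start(Runnable slowStartup) {
        synchronized (lock) {
            if (claimed) {
                return;        // another thread already owns the start
            }
            claimed = true;    // critical section: check-and-set only
        }
        // Long-running startup runs outside the lock, so concurrent
        // sessions are no longer serialized behind one another.
        slowStartup.run();
    }
}
```

The trade-off versus synchronizing the whole method is that a second caller may return before the first caller's slow startup has finished; whether that is acceptable depends on what the callers do next, which is exactly the kind of detail the actual patch has to handle.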
[jira] [Commented] (HIVE-19809) Remove Deprecated Code From Utilities Class
[ https://issues.apache.org/jira/browse/HIVE-19809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506644#comment-16506644 ] Aihua Xu commented on HIVE-19809: - The change looks good to me. +1. > Remove Deprecated Code From Utilities Class > --- > > Key: HIVE-19809 > URL: https://issues.apache.org/jira/browse/HIVE-19809 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0, 4.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-19809.1.patch > > > {quote} > This can go away once hive moves to support only JDK 7 and can use > Files.createTempDirectory > {quote} > Remove the {{createTempDir}} method from the {{Utilities}} class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19813) SessionState.start don't have to be synchronized
[ https://issues.apache.org/jira/browse/HIVE-19813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506643#comment-16506643 ] Daniel Dai commented on HIVE-19813: --- Retest. > SessionState.start don't have to be synchronized > > > Key: HIVE-19813 > URL: https://issues.apache.org/jira/browse/HIVE-19813 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19813.1.patch, HIVE-19813.2.patch, > HIVE-19813.3.patch > > > This is introduced in HIVE-14690. However, only the check-and-set block needs to be > synchronized, not the whole method. The method will start a Tez AM, which is a > long operation. Making the method synchronized serializes session start and > thus slows down HS2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19805) TableScanDesc Use Commons Library
[ https://issues.apache.org/jira/browse/HIVE-19805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506632#comment-16506632 ] Aihua Xu commented on HIVE-19805: - [~belugabehr] That's nice. Do you know how commons-collections4 dependency gets included? > TableScanDesc Use Commons Library > - > > Key: HIVE-19805 > URL: https://issues.apache.org/jira/browse/HIVE-19805 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Affects Versions: 4.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-19805.1.patch > > > Use commons library and remove some code -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506619#comment-16506619 ] Hive QA commented on HIVE-19629: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} llap-server in master has 86 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 33s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} llap-server in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s{color} | {color:red} llap-server: The patch generated 26 new + 265 unchanged - 10 fixed = 291 total (was 275) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 51s{color} | {color:red} ql: The patch generated 81 new + 1877 unchanged - 6 fixed = 1958 total (was 1883) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 34s{color} | {color:red} ql generated 3 new + 2284 unchanged - 0 fixed = 2287 total (was 2284) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatchCtx.setRowDataTypePhysicalVariations(DataTypePhysicalVariation[]) may expose internal representation by storing an externally mutable object into VectorizedRowBatchCtx.rowDataTypePhysicalVariations At VectorizedRowBatchCtx.java:by storing an externally mutable object into VectorizedRowBatchCtx.rowDataTypePhysicalVariations At VectorizedRowBatchCtx.java:[line 168] | | | Switch statement found in org.apache.hadoop.hive.ql.io.orc.WriterImpl.setColumn(int, ColumnVector, ObjectInspector, Object) where default case is missing At WriterImpl.java:ColumnVector, ObjectInspector, Object) where default case is missing At WriterImpl.java:[lines 138-227] | | | Self assignment of physicalContext in org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.resolve(PhysicalContext) At Vectorizer.java:org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.resolve(PhysicalContext) At Vectorizer.java:[line 2250] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11634/dev-support/hive-personality.sh | | git revision | master / 913baef | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | mvninstall |
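The first FindBugs warning above ("may expose internal representation by storing an externally mutable object") is the classic EI_EXPOSE_REP2 pattern: a setter keeps a reference to a caller-supplied array. A minimal illustrative sketch of the usual fix, a defensive copy, is below; the class and method names are invented for illustration and are not the actual `VectorizedRowBatchCtx` code.

```java
// Illustrative sketch (not the actual Hive code): FindBugs flags a setter
// that stores a caller-supplied array directly, because the caller can
// still mutate it. Copying on the way in removes that aliasing.
import java.util.Arrays;

public class DefensiveCopyExample {
    private int[] variations;

    // Stores the array reference directly -- the pattern FindBugs flags.
    public void setVariationsUnsafe(int[] v) {
        this.variations = v;
    }

    // Stores a copy, so external mutation cannot leak into internal state.
    public void setVariationsSafe(int[] v) {
        this.variations = (v == null) ? null : Arrays.copyOf(v, v.length);
    }

    public int get(int i) {
        return variations[i];
    }

    public static void main(String[] args) {
        DefensiveCopyExample ctx = new DefensiveCopyExample();
        int[] input = {1, 2, 3};
        ctx.setVariationsSafe(input);
        input[0] = 99;                  // mutate the caller's array
        System.out.println(ctx.get(0)); // still 1: internal state unaffected
    }
}
```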
[jira] [Updated] (HIVE-19237) Only use an operatorId once in a plan
[ https://issues.apache.org/jira/browse/HIVE-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19237: Attachment: HIVE-19237.12.patch > Only use an operatorId once in a plan > - > > Key: HIVE-19237 > URL: https://issues.apache.org/jira/browse/HIVE-19237 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19237.01.patch, HIVE-19237.02.patch, > HIVE-19237.03.patch, HIVE-19237.04.patch, HIVE-19237.05.patch, > HIVE-19237.05.patch, HIVE-19237.06.patch, HIVE-19237.07.patch, > HIVE-19237.08.patch, HIVE-19237.08.patch, HIVE-19237.09.patch, > HIVE-19237.10.patch, HIVE-19237.10.patch, HIVE-19237.11.patch, > HIVE-19237.11.patch, HIVE-19237.11.patch, HIVE-19237.12.patch > > > Column stats autogather plan part is added from a plan compiled by the driver > itself; however that driver starts to use operatorIds from 1 ; so it's > possible that 2 SEL_1 operators end up in the same plan... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
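The duplicate-SEL_1 problem described above arises because two compilations each start numbering operators from 1. One common shape for the fix is a single shared counter that every plan fragment draws from; the sketch below is a hedged illustration of that idea, not Hive's actual `Operator` API.

```java
// Hedged sketch: derive operator IDs from one shared counter so two
// independently compiled plan fragments can never both produce "SEL_1".
// The method and class names here are invented for illustration.
import java.util.concurrent.atomic.AtomicInteger;

public class OperatorIdExample {
    private static final AtomicInteger SEQ = new AtomicInteger(0);

    static String newOperatorId(String prefix) {
        return prefix + "_" + SEQ.incrementAndGet();
    }

    public static void main(String[] args) {
        // Even if a second compilation starts later, it draws from the same
        // sequence, so IDs stay unique across the merged plan.
        System.out.println(newOperatorId("SEL"));
        System.out.println(newOperatorId("SEL"));
    }
}
```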
[jira] [Commented] (HIVE-19776) HiveServer2.startHiveServer2 retries of start has concurrency issues
[ https://issues.apache.org/jira/browse/HIVE-19776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506617#comment-16506617 ] Thejas M Nair commented on HIVE-19776: -- Test failure is addressed in HIVE-19816 > HiveServer2.startHiveServer2 retries of start has concurrency issues > > > Key: HIVE-19776 > URL: https://issues.apache.org/jira/browse/HIVE-19776 > Project: Hive > Issue Type: Improvement >Reporter: Thejas M Nair >Assignee: Thejas M Nair >Priority: Major > Labels: pull-request-available > Attachments: HIVE-19776.1.patch, HIVE-19776.2.patch, > HIVE-19776.3.patch > > > HS2 starts the thrift binary/http servers in background, while it proceeds to > do other setup (eg create zookeeper entries). If there is a ZK error and it > attempts to stop and start in the retry loop within > HiveServer2.startHiveServer2, the retry fails because the thrift server > doesn't get stopped if it was still getting initialized. > The thrift server initialization and stopping needs to be synchronized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
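The race described in HIVE-19776 is that `stop()` can run while the background thrift server is still initializing, so the stop is a no-op and the retry fails. A minimal sketch of the synchronization the ticket calls for is below; it is illustrative only and not HiveServer2's actual code.

```java
// Illustrative sketch: start() and stop() share one lock and a started
// flag, so a retry loop that calls stop() while start() is mid-initialization
// waits for initialization to finish instead of leaving the server running.
public class GuardedServer {
    private final Object lock = new Object();
    private boolean started = false;

    public void start() {
        synchronized (lock) {
            if (started) return;
            // ... bind thrift port, initialize handlers ...
            started = true;
        }
    }

    public void stop() {
        synchronized (lock) {
            if (!started) return;
            // ... close transport, release resources ...
            started = false;
        }
    }

    public boolean isStarted() {
        synchronized (lock) { return started; }
    }

    public static void main(String[] args) {
        GuardedServer s = new GuardedServer();
        s.start();
        s.stop();   // always observes a fully started or fully stopped server
        System.out.println(s.isStarted());
    }
}
```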
[jira] [Commented] (HIVE-19403) Demote 'Pattern' Logging
[ https://issues.apache.org/jira/browse/HIVE-19403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506589#comment-16506589 ] BELUGA BEHR commented on HIVE-19403: If they're helpful to the developers, then DEBUG logging is fine. If they are helpful to no one, please remove. > Demote 'Pattern' Logging > > > Key: HIVE-19403 > URL: https://issues.apache.org/jira/browse/HIVE-19403 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: gonglinglei >Priority: Trivial > Labels: noob > Attachments: HIVE-19403.1.patch > > > In the {{DDLTask}} class, there is some logging that is not helpful to a > cluster admin and should be demoted to _debug_ level logging. In fact, in > one place in the code, it already is. > {code} > LOG.info("pattern: {}", showDatabasesDesc.getPattern()); > LOG.debug("pattern: {}", pattern); > LOG.info("pattern: {}", showFuncs.getPattern()); > LOG.info("pattern: {}", showTblStatus.getPattern()); > {code} > Here is an example... as an admin, I can already see what the pattern is, I > do not need this extra logging. It provides no additional context. > {code:java|title=Example} > 2018-05-03 03:08:26,354 INFO org.apache.hadoop.hive.ql.Driver: > [HiveServer2-Background-Pool: Thread-101980]: Executing > command(queryId=hive_20180503030808_e53c26ef-2280-4eca-929b-668503105e2e): > SHOW TABLE EXTENDED FROM my_db LIKE '*' > 2018-05-03 03:08:26,355 INFO hive.ql.exec.DDLTask: > [HiveServer2-Background-Pool: Thread-101980]: pattern: * > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
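The demotion itself is a one-line change per call site: `LOG.info(...)` becomes `LOG.debug(...)`. The self-contained sketch below uses the JDK's `java.util.logging` (where `FINE` plays the role of slf4j's `DEBUG`) to show why demotion silences the message under a default INFO-level configuration; it is an analogy, not the DDLTask code.

```java
// Stdlib analogy for demoting a log statement: at INFO level, a FINE
// (debug-equivalent) message is suppressed, so admins no longer see the
// redundant "pattern: *" lines unless they opt into debug logging.
import java.util.logging.Level;
import java.util.logging.Logger;

public class DemoteLoggingExample {
    private static final Logger LOG = Logger.getLogger("ddl");

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);
        // Before: emitted on every SHOW ... LIKE command.
        LOG.log(Level.INFO, "pattern: {0}", "*");
        // After demotion: suppressed unless debug logging is enabled.
        LOG.log(Level.FINE, "pattern: {0}", "*");
        System.out.println(LOG.isLoggable(Level.FINE));
    }
}
```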
[jira] [Commented] (HIVE-19794) Disable removing order by from subquery in GenericUDTFGetSplits
[ https://issues.apache.org/jira/browse/HIVE-19794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506585#comment-16506585 ] Hive QA commented on HIVE-19794: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926988/HIVE-19794.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14513 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11633/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11633/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11633/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12926988 - PreCommit-HIVE-Build > Disable removing order by from subquery in GenericUDTFGetSplits > --- > > Key: HIVE-19794 > URL: https://issues.apache.org/jira/browse/HIVE-19794 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19794.1.patch, HIVE-19794.2.patch, > HIVE-19794.3.patch > > > spark-llap always wraps query under a subquery, until that is removed from > spark-llap > hive compiler is going to remove inner order by in GenericUDTFGetSplits. > disable that optimization until then. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
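The shape of the fix described above, gating the order-by-removal rewrite behind a flag that `GenericUDTFGetSplits` can switch off, can be sketched as follows. All names and the toy rewrite are illustrative assumptions, not Hive's actual optimizer code.

```java
// Hedged sketch: a toggle controls whether an inner ORDER BY is stripped
// from a subquery. GetSplits-style callers would disable the toggle so the
// ordering survives for the wrapped query.
public class OrderByGuardExample {
    static boolean removeOrderByInSubquery = true;

    static String optimize(String subquery) {
        if (removeOrderByInSubquery) {
            // Toy stand-in for the real rewrite: drop a trailing ORDER BY.
            return subquery.replaceAll("(?i)\\s+order by .*$", "");
        }
        return subquery; // optimization disabled: keep the inner ORDER BY
    }

    public static void main(String[] args) {
        String q = "select * from t order by c";
        System.out.println(optimize(q));
        removeOrderByInSubquery = false; // what a GetSplits caller would do
        System.out.println(optimize(q));
    }
}
```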
[jira] [Commented] (HIVE-19403) Demote 'Pattern' Logging
[ https://issues.apache.org/jira/browse/HIVE-19403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506564#comment-16506564 ] Aihua Xu commented on HIVE-19403: - I feel we can remove such log entries. How do you think? > Demote 'Pattern' Logging > > > Key: HIVE-19403 > URL: https://issues.apache.org/jira/browse/HIVE-19403 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: gonglinglei >Priority: Trivial > Labels: noob > Attachments: HIVE-19403.1.patch > > > In the {{DDLTask}} class, there is some logging that is not helpful to a > cluster admin and should be demoted to _debug_ level logging. In fact, in > one place in the code, it already is. > {code} > LOG.info("pattern: {}", showDatabasesDesc.getPattern()); > LOG.debug("pattern: {}", pattern); > LOG.info("pattern: {}", showFuncs.getPattern()); > LOG.info("pattern: {}", showTblStatus.getPattern()); > {code} > Here is an example... as an admin, I can already see what the pattern is, I > do not need this extra logging. It provides no additional context. > {code:java|title=Example} > 2018-05-03 03:08:26,354 INFO org.apache.hadoop.hive.ql.Driver: > [HiveServer2-Background-Pool: Thread-101980]: Executing > command(queryId=hive_20180503030808_e53c26ef-2280-4eca-929b-668503105e2e): > SHOW TABLE EXTENDED FROM my_db LIKE '*' > 2018-05-03 03:08:26,355 INFO hive.ql.exec.DDLTask: > [HiveServer2-Background-Pool: Thread-101980]: pattern: * > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19744) In Beeline if -u is specified the default connection should not be tried at all
[ https://issues.apache.org/jira/browse/HIVE-19744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-19744: --- Assignee: Zoltan Haindrich > In Beeline if -u is specified the default connection should not be tried at > all > --- > > Key: HIVE-19744 > URL: https://issues.apache.org/jira/browse/HIVE-19744 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19744.01.patch > > > I wanted to explicitly connect to a hiveserver by specifying {{-u}} but that > didn't work because it was not running/etc... > The strange thing is that somehow the default connection is activated...and > tried > The possible "hazard" here is that if someone specifies {{-u $MY_DEV_HS2 -f > recreate_db.sql}} to run some sql script...beeline may connect somewhere else > and run the commands there - which might have serious consequences (in > the above case having default as production might be interesting) > {code} > beeline -u jdbc:hive2://localhost:10502/;transportMode=binary -n hrt_qa > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/usr/hdp/3.0.0.0-1406/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/usr/hdp/3.0.0.0-1406/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Connecting to jdbc:hive2://localhost:10502/ > 18/05/31 07:51:20 [main]: WARN jdbc.HiveConnection: Failed to connect to > localhost:10502 > Unknown HS2 problem when communicating with Thrift server. 
> Error: Could not open client transport with JDBC Uri: > jdbc:hive2://localhost:10502/: Invalid status 72 (state=08S01,code=0) > Connecting to > jdbc:hive2://ctr-e138-1518143905142-336795-01-16.hwx.site:2181,ctr-e138-1518143905142-336795-01-08.hwx.site:2181,ctr-e138-1518143905142-336795-01-14.hwx.site:2181,ctr-e138-1518143905142-336795-01-09.hwx.site:2181,ctr-e138-1518143905142-336795-01-15.hwx.site:2181/default;httpPath=cliservice;principal=hive/_h...@example.com;serviceDiscoveryMode=zooKeeper;ssl=true;transportMode=http;zooKeeperNamespace=hiveserver2 > 18/05/31 07:51:21 [main]: INFO jdbc.HiveConnection: Connected to > ctr-e138-1518143905142-336795-01-03.hwx.site:10001 > 18/05/31 07:51:21 [main]: ERROR jdbc.HiveConnection: Error opening session > org.apache.thrift.transport.TTransportException: > javax.net.ssl.SSLHandshakeException: > sun.security.validator.ValidatorException: PKIX path validation failed: > java.security.cert.CertPathValidatorException: signature check failed > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19744) In Beeline if -u is specified the default connection should not be tried at all
[ https://issues.apache.org/jira/browse/HIVE-19744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19744: Status: Patch Available (was: Open) > In Beeline if -u is specified the default connection should not be tried at > all > --- > > Key: HIVE-19744 > URL: https://issues.apache.org/jira/browse/HIVE-19744 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19744.01.patch > > > I wanted to explicitly connect to a hiveserver by specifying {{-u}} but that > didn't work because it was not running/etc... > The strange thing is that somehow the default connection is activated...and > tried > The possible "hazard" here is that if someone specifies {{-u $MY_DEV_HS2 -f > recreate_db.sql}} to run some sql script...beeline may connect somewhere else > and run the commands there - which might have serious consequences (in > the above case having default as production might be interesting) > {code} > beeline -u jdbc:hive2://localhost:10502/;transportMode=binary -n hrt_qa > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/usr/hdp/3.0.0.0-1406/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/usr/hdp/3.0.0.0-1406/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Connecting to jdbc:hive2://localhost:10502/ > 18/05/31 07:51:20 [main]: WARN jdbc.HiveConnection: Failed to connect to > localhost:10502 > Unknown HS2 problem when communicating with Thrift server. 
> Error: Could not open client transport with JDBC Uri: > jdbc:hive2://localhost:10502/: Invalid status 72 (state=08S01,code=0) > Connecting to > jdbc:hive2://ctr-e138-1518143905142-336795-01-16.hwx.site:2181,ctr-e138-1518143905142-336795-01-08.hwx.site:2181,ctr-e138-1518143905142-336795-01-14.hwx.site:2181,ctr-e138-1518143905142-336795-01-09.hwx.site:2181,ctr-e138-1518143905142-336795-01-15.hwx.site:2181/default;httpPath=cliservice;principal=hive/_h...@example.com;serviceDiscoveryMode=zooKeeper;ssl=true;transportMode=http;zooKeeperNamespace=hiveserver2 > 18/05/31 07:51:21 [main]: INFO jdbc.HiveConnection: Connected to > ctr-e138-1518143905142-336795-01-03.hwx.site:10001 > 18/05/31 07:51:21 [main]: ERROR jdbc.HiveConnection: Error opening session > org.apache.thrift.transport.TTransportException: > javax.net.ssl.SSLHandshakeException: > sun.security.validator.ValidatorException: PKIX path validation failed: > java.security.cert.CertPathValidatorException: signature check failed > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19744) In Beeline if -u is specified the default connection should not be tried at all
[ https://issues.apache.org/jira/browse/HIVE-19744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19744: Attachment: HIVE-19744.01.patch > In Beeline if -u is specified the default connection should not be tried at > all > --- > > Key: HIVE-19744 > URL: https://issues.apache.org/jira/browse/HIVE-19744 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19744.01.patch > > > I wanted to explicitly connect to a hiveserver by specifying {{-u}} but that > didn't work because it was not running/etc... > The strange thing is that somehow the default connection is activated...and > tried > The possible "hazard" here is that if someone specifies {{-u $MY_DEV_HS2 -f > recreate_db.sql}} to run some sql script...beeline may connect somewhere else > and run the commands there - which might have serious consequences (in > the above case having default as production might be interesting) > {code} > beeline -u jdbc:hive2://localhost:10502/;transportMode=binary -n hrt_qa > SLF4J: Class path contains multiple SLF4J bindings. > SLF4J: Found binding in > [jar:file:/usr/hdp/3.0.0.0-1406/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: Found binding in > [jar:file:/usr/hdp/3.0.0.0-1406/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an > explanation. > SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] > Connecting to jdbc:hive2://localhost:10502/ > 18/05/31 07:51:20 [main]: WARN jdbc.HiveConnection: Failed to connect to > localhost:10502 > Unknown HS2 problem when communicating with Thrift server. 
> Error: Could not open client transport with JDBC Uri: > jdbc:hive2://localhost:10502/: Invalid status 72 (state=08S01,code=0) > Connecting to > jdbc:hive2://ctr-e138-1518143905142-336795-01-16.hwx.site:2181,ctr-e138-1518143905142-336795-01-08.hwx.site:2181,ctr-e138-1518143905142-336795-01-14.hwx.site:2181,ctr-e138-1518143905142-336795-01-09.hwx.site:2181,ctr-e138-1518143905142-336795-01-15.hwx.site:2181/default;httpPath=cliservice;principal=hive/_h...@example.com;serviceDiscoveryMode=zooKeeper;ssl=true;transportMode=http;zooKeeperNamespace=hiveserver2 > 18/05/31 07:51:21 [main]: INFO jdbc.HiveConnection: Connected to > ctr-e138-1518143905142-336795-01-03.hwx.site:10001 > 18/05/31 07:51:21 [main]: ERROR jdbc.HiveConnection: Error opening session > org.apache.thrift.transport.TTransportException: > javax.net.ssl.SSLHandshakeException: > sun.security.validator.ValidatorException: PKIX path validation failed: > java.security.cert.CertPathValidatorException: signature check failed > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19794) Disable removing order by from subquery in GenericUDTFGetSplits
[ https://issues.apache.org/jira/browse/HIVE-19794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506553#comment-16506553 ] Hive QA commented on HIVE-19794: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 25s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} itests/hive-unit: The patch generated 14 new + 0 unchanged - 0 fixed = 14 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 32s{color} | {color:red} ql: The patch generated 5 new + 23 unchanged - 0 fixed = 28 total (was 23) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11633/dev-support/hive-personality.sh | | git revision | master / 913baef | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11633/yetus/diff-checkstyle-itests_hive-unit.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11633/yetus/diff-checkstyle-ql.txt | | modules | C: common itests/hive-unit ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11633/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Disable removing order by from subquery in GenericUDTFGetSplits > --- > > Key: HIVE-19794 > URL: https://issues.apache.org/jira/browse/HIVE-19794 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0, 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19794.1.patch, HIVE-19794.2.patch, > HIVE-19794.3.patch > > > spark-llap always wraps query under a subquery, until that is
[jira] [Updated] (HIVE-19771) allowNullColumnForMissingStats should not be false when column stats are estimated
[ https://issues.apache.org/jira/browse/HIVE-19771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19771: --- Attachment: HIVE-19771.01.patch > allowNullColumnForMissingStats should not be false when column stats are > estimated > -- > > Key: HIVE-19771 > URL: https://issues.apache.org/jira/browse/HIVE-19771 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19771.01.patch, HIVE-19771.patch > > > Otherwise we may throw an Exception. > {noformat} > 2018-05-26T00:30:22,335 DEBUG [HiveServer2-Background-Pool: Thread-631]: > stats.StatsUtils (:()) - Estimated average row size: 372 > 2018-05-26T00:30:22,352 DEBUG [HiveServer2-Background-Pool: Thread-631]: > calcite.RelOptHiveTable (:()) - Stats for column a in table basetable_rebuild > stored in cache > 2018-05-26T00:30:22,352 DEBUG [HiveServer2-Background-Pool: Thread-631]: > calcite.RelOptHiveTable (:()) - colName: a colType: int countDistincts: 4 > numNulls: 1 avgColLen: 4.0 numTrues: 0 numFalses: 0 Range: [ min: > -9223372036854775808 max: 9223372036854775807 ] isPrimaryKey: false > isEstimated: true > 2018-05-26T00:30:22,352 DEBUG [HiveServer2-Background-Pool: Thread-631]: > calcite.RelOptHiveTable (:()) - Stats for column b in table basetable_rebuild > stored in cache > 2018-05-26T00:30:22,352 DEBUG [HiveServer2-Background-Pool: Thread-631]: > calcite.RelOptHiveTable (:()) - colName: b colType: varchar(256) > countDistincts: 4 numNulls: 1 avgColLen: 256.0 numTrues: 0 numFalses: 0 > isPrimaryKey: false isEstimated: true > 2018-05-26T00:30:22,352 ERROR [HiveServer2-Background-Pool: Thread-631]: > calcite.RelOptHiveTable (:()) - No Stats for default@basetable_rebuild, > Columns: a, b > java.lang.RuntimeException: No Stats for default@basetable_rebuild, Columns: > a, b > at > 
org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable.updateColStats(RelOptHiveTable.java:586) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable.getColStat(RelOptHiveTable.java:606) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.optimizer.calcite.RelOptHiveTable.getColStat(RelOptHiveTable.java:592) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveTableScan.getColStat(HiveTableScan.java:155) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdDistinctRowCount.getDistinctRowCount(HiveRelMdDistinctRowCount.java:78) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdDistinctRowCount.getDistinctRowCount(HiveRelMdDistinctRowCount.java:65) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at > GeneratedMetadataHandler_DistinctRowCount.getDistinctRowCount_$(Unknown > Source) ~[?:?] > at > GeneratedMetadataHandler_DistinctRowCount.getDistinctRowCount(Unknown Source) > ~[?:?] > at > org.apache.calcite.rel.metadata.RelMetadataQuery.getDistinctRowCount(RelMetadataQuery.java:781) > ~[calcite-core-1.16.0.jar:1.16.0] > at > org.apache.calcite.rel.metadata.RelMdRowCount.getRowCount(RelMdRowCount.java:207) > ~[calcite-core-1.16.0.jar:1.16.0] > at GeneratedMetadataHandler_RowCount.getRowCount_$(Unknown Source) > ~[?:?] > at GeneratedMetadataHandler_RowCount.getRowCount(Unknown Source) > ~[?:?] 
> at > org.apache.calcite.rel.metadata.RelMetadataQuery.getRowCount(RelMetadataQuery.java:235) > ~[calcite-core-1.16.0.jar:1.16.0] > at > org.apache.calcite.rel.externalize.RelWriterImpl.explain_(RelWriterImpl.java:100) > ~[calcite-core-1.16.0.jar:1.16.0] > at > org.apache.calcite.rel.externalize.RelWriterImpl.done(RelWriterImpl.java:156) > ~[calcite-core-1.16.0.jar:1.16.0] > at > org.apache.calcite.rel.AbstractRelNode.explain(AbstractRelNode.java:312) > ~[calcite-core-1.16.0.jar:1.16.0] > at org.apache.calcite.plan.RelOptUtil.toString(RelOptUtil.java:1991) > ~[calcite-core-1.16.0.jar:1.16.0] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1898) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1613) > ~[hive-exec-3.0.0.jar:3.0.0-SNAPSHOT] > at org.apache.calcite.tools.Frameworks$1.apply(Frameworks.java:118) > ~[calcite-core-1.16.0.jar:1.16.0] > at >
[jira] [Updated] (HIVE-19739) Bootstrap REPL LOAD to use checkpoints to validate and skip the loaded data/metadata.
[ https://issues.apache.org/jira/browse/HIVE-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19739: Description: Currently. bootstrap REPL LOAD have added checkpoint identifiers in DB/table/partition object properties once the data/metadata related to the object is successfully loaded. If the Db exist and is not empty, then currently we are throwing exception. But need to support it for the retry scenario after a failure. If there is a retry of bootstrap load using the same dump, then instead of throwing error, we should check if any of the tables/partitions are completely loaded using the checkpoint identifiers. If yes, then skip it or else drop/create them again. If the bootstrap load is performed using different dump, then it should throw exception. Allow bootstrap on empty Db only if ckpt property is not set. Also, if bootstrap load is completed on the target Db, then shouldn't allow bootstrap retry at all. was: Currently. bootstrap REPL LOAD have added checkpoint identifiers in DB/table/partition object properties once the data/metadata related to the object is successfully loaded. If the Db exist and is not empty, then currently we are throwing exception. But need to support it for the retry scenario after a failure. If there is a retry of bootstrap load using the same dump, then instead of throwing error, we should check if any of the tables/partitions are completely loaded using the checkpoint identifiers. If yes, then skip it or else drop/create them again. If the bootstrap load is performed using different dump, then it should throw exception. > Bootstrap REPL LOAD to use checkpoints to validate and skip the loaded > data/metadata. 
> - > > Key: HIVE-19739 > URL: https://issues.apache.org/jira/browse/HIVE-19739 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 4.0.0 > > Attachments: HIVE-19739.01.patch > > > Currently. bootstrap REPL LOAD have added checkpoint identifiers in > DB/table/partition object properties once the data/metadata related to the > object is successfully loaded. > If the Db exist and is not empty, then currently we are throwing exception. > But need to support it for the retry scenario after a failure. > If there is a retry of bootstrap load using the same dump, then instead of > throwing error, we should check if any of the tables/partitions are > completely loaded using the checkpoint identifiers. If yes, then skip it or > else drop/create them again. > If the bootstrap load is performed using different dump, then it should throw > exception. > Allow bootstrap on empty Db only if ckpt property is not set. Also, if > bootstrap load is completed on the target Db, then shouldn't allow bootstrap > retry at all. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
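The retry semantics described above (skip objects already loaded from the same dump, fail on a different dump) can be sketched as a small checkpoint check. This is an illustrative model only; the real REPL LOAD records checkpoint identifiers in DB/table/partition properties rather than an in-memory map.

```java
// Illustrative model of the checkpoint logic: each loaded table records the
// dump id it was loaded from; a retry with the same dump skips it, while a
// different dump id is an error.
import java.util.HashMap;
import java.util.Map;

public class ReplCheckpointExample {
    // table name -> dump id recorded when the table finished loading
    private final Map<String, String> ckpt = new HashMap<>();

    /** Returns true if the table should be (re)loaded, false to skip it. */
    boolean shouldLoad(String table, String dumpId) {
        String done = ckpt.get(table);
        if (done == null) return true;          // never completed: load it
        if (done.equals(dumpId)) return false;  // same dump: already loaded
        throw new IllegalStateException(
            "bootstrap retry with a different dump: " + dumpId);
    }

    void markLoaded(String table, String dumpId) {
        ckpt.put(table, dumpId);
    }

    public static void main(String[] args) {
        ReplCheckpointExample r = new ReplCheckpointExample();
        System.out.println(r.shouldLoad("t1", "dump-1")); // first attempt loads
        r.markLoaded("t1", "dump-1");
        System.out.println(r.shouldLoad("t1", "dump-1")); // retry skips it
    }
}
```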
[jira] [Updated] (HIVE-19569) alter table db1.t1 rename db2.t2 generates MetaStoreEventListener.onDropTable()
[ https://issues.apache.org/jira/browse/HIVE-19569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19569: Status: Patch Available (was: Reopened) > alter table db1.t1 rename db2.t2 generates > MetaStoreEventListener.onDropTable() > --- > > Key: HIVE-19569 > URL: https://issues.apache.org/jira/browse/HIVE-19569 > Project: Hive > Issue Type: Bug > Components: Metastore, Standalone Metastore, Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Attachments: HIVE-19569.01.patch > > > When renaming a table within the same DB, this operation causes > {{MetaStoreEventListener.onAlterTable()}} to fire but when changing DB name > for a table it causes {{MetaStoreEventListener.onDropTable()}} + > {{MetaStoreEventListener.onCreateTable()}}. > The files from original table are moved to new table location. > This creates confusing semantics since any logic in {{onDropTable()}} doesn't > know about the larger context, i.e. that there will be a matching > {{onCreateTable()}}. > In particular, this causes a problem for Acid tables since files moved from > old table use WriteIDs that are not meaningful with the context of new table. > Current implementation is due to replication. This should ideally be changed > to raise a "not supported" error for tables that are marked for replication. > cc [~sankarh] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19725) Add ability to dump non-native tables in replication metadata dump
[ https://issues.apache.org/jira/browse/HIVE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19725: Fix Version/s: (was: 3.0.1) > Add ability to dump non-native tables in replication metadata dump > -- > > Key: HIVE-19725 > URL: https://issues.apache.org/jira/browse/HIVE-19725 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.0.0, 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: Repl, pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19725.01.patch, HIVE-19725.02.patch, > HIVE-19725.03.patch > > > If hive.repl.dump.metadata.only is set to true, allow dumping non-native > tables as well. > A data dump for non-native tables should never be allowed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
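The two-line rule in HIVE-19725 amounts to a pair of predicates. This is a minimal sketch under the stated rule only; the method names and boolean flags are illustrative, not Hive's actual API.

```java
// Minimal sketch of the dump rule above. "nativeTable" stands for a table whose
// data Hive manages directly; "metadataOnly" models hive.repl.dump.metadata.only.
public class ReplDumpRuleSketch {
    /** Metadata of a non-native table may be dumped only in metadata-only mode. */
    static boolean shouldDumpMetadata(boolean nativeTable, boolean metadataOnly) {
        return nativeTable || metadataOnly;
    }

    /** Data is never dumped for non-native tables, and never in metadata-only mode. */
    static boolean shouldDumpData(boolean nativeTable, boolean metadataOnly) {
        return nativeTable && !metadataOnly;
    }

    public static void main(String[] args) {
        System.out.println(shouldDumpMetadata(false, true)); // true: the new ability
        System.out.println(shouldDumpData(false, true));     // false: never for non-native
    }
}
```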
[jira] [Commented] (HIVE-19237) Only use an operatorId once in a plan
[ https://issues.apache.org/jira/browse/HIVE-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506531#comment-16506531 ] Hive QA commented on HIVE-19237: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12927026/HIVE-19237.11.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11632/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11632/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11632/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-08 20:21:36.317 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-11632/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-08 20:21:36.320 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 913baef HIVE-19053: RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors (Aihua Xu, reviewed by Sahil Takiar) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 913baef HIVE-19053: RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors (Aihua Xu, reviewed by Sahil Takiar) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-08 20:21:37.254 + rm -rf ../yetus_PreCommit-HIVE-Build-11632 + mkdir ../yetus_PreCommit-HIVE-Build-11632 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-11632 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11632/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/test/results/clientpositive/llap/explainanalyze_2.q.out:63 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/explainanalyze_2.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/llap/explainuser_1.q.out:5315 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/explainuser_1.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/llap/explainuser_2.q.out:460 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/explainuser_2.q.out' with conflicts. 
error: patch failed: ql/src/test/results/clientpositive/llap/union_fast_stats.q.out:175 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/union_fast_stats.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/spark/spark_explainuser_1.q.out:767 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/spark/spark_explainuser_1.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/tez/explainanalyze_5.q.out:108 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/tez/explainanalyze_5.q.out' with conflicts. Going to apply patch with: git apply -p0 /data/hiveptest/working/scratch/build.patch:6581: trailing whitespace. totalSize 3243 /data/hiveptest/working/scratch/build.patch:6621: trailing whitespace. totalSize 3243 /data/hiveptest/working/scratch/build.patch:6685: trailing whitespace. totalSize 4616 error: patch failed: ql/src/test/results/clientpositive/llap/explainanalyze_2.q.out:63 Falling back to three-way merge... Applied patch to
[jira] [Commented] (HIVE-19782) Flash out TestObjectStore.testDirectSQLDropParitionsCleanup
[ https://issues.apache.org/jira/browse/HIVE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506530#comment-16506530 ] Hive QA commented on HIVE-19782: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926859/HIVE-19782.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11631/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11631/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11631/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12926859/HIVE-19782.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12926859 - PreCommit-HIVE-Build > Flash out TestObjectStore.testDirectSQLDropParitionsCleanup > --- > > Key: HIVE-19782 > URL: https://issues.apache.org/jira/browse/HIVE-19782 > Project: Hive > Issue Type: Test > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19782.patch > > > {{TestObjectStore.testDirectSQLDropParitionsCleanup}} checks that the tables > are empty after the drop. We should add some rows to every partition related > table, to see that they are really cleaned up -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19782) Flash out TestObjectStore.testDirectSQLDropParitionsCleanup
[ https://issues.apache.org/jira/browse/HIVE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506529#comment-16506529 ] Hive QA commented on HIVE-19782: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926859/HIVE-19782.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14512 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.client.TestRuntimeStats.testCleanup[Embedded] (batchId=211) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11630/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11630/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11630/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12926859 - PreCommit-HIVE-Build > Flash out TestObjectStore.testDirectSQLDropParitionsCleanup > --- > > Key: HIVE-19782 > URL: https://issues.apache.org/jira/browse/HIVE-19782 > Project: Hive > Issue Type: Test > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19782.patch > > > {{TestObjectStore.testDirectSQLDropParitionsCleanup}} checks that the tables > are empty after the drop. We should add some rows to every partition related > table, to see that they are really cleaned up -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19789) reenable orc_llap test
[ https://issues.apache.org/jira/browse/HIVE-19789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506518#comment-16506518 ] Sergey Shelukhin commented on HIVE-19789: - I dunno, it was off for a while, smth might have been broken > reenable orc_llap test > -- > > Key: HIVE-19789 > URL: https://issues.apache.org/jira/browse/HIVE-19789 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Matt McCline >Priority: Major > Attachments: HIVE-19789.01.patch > > > Test has been disabled, looks like by mistake (or due to some issue with the > patch there that was never addressed), in HIVE-11394. > It needs to be reenabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19789) reenable orc_llap test
[ https://issues.apache.org/jira/browse/HIVE-19789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506508#comment-16506508 ] Matt McCline commented on HIVE-19789: - Why wouldn't it? > reenable orc_llap test > -- > > Key: HIVE-19789 > URL: https://issues.apache.org/jira/browse/HIVE-19789 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Matt McCline >Priority: Major > Attachments: HIVE-19789.01.patch > > > Test has been disabled, looks like by mistake (or due to some issue with the > patch there that was never addressed), in HIVE-11394. > It needs to be reenabled. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-12192: --- Attachment: HIVE-12192.12.patch > Hive should carry out timestamp computations in UTC > --- > > Key: HIVE-12192 > URL: https://issues.apache.org/jira/browse/HIVE-12192 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Ryan Blue >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Labels: timestamp > Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, > HIVE-12192.03.patch, HIVE-12192.04.patch, HIVE-12192.05.patch, > HIVE-12192.06.patch, HIVE-12192.07.patch, HIVE-12192.08.patch, > HIVE-12192.09.patch, HIVE-12192.10.patch, HIVE-12192.11.patch, > HIVE-12192.12.patch, HIVE-12192.patch > > > Hive currently uses the "local" time of a java.sql.Timestamp to represent the > SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use > {{Timestamp#getYear()}} and similar methods to implement SQL functions like > {{year}}. > When the SQL session's time zone is a DST zone, such as America/Los_Angeles > that alternates between PST and PDT, there are times that cannot be > represented because the effective zone skips them. > {code} > hive> select TIMESTAMP '2015-03-08 02:10:00.101'; > 2015-03-08 03:10:00.101 > {code} > Using UTC instead of the SQL session time zone as the underlying zone for a > java.sql.Timestamp avoids this bug, while still returning correct values for > {{getYear}} etc. Using UTC as the convenience representation (timestamp > without time zone has no real zone) would make timestamp calculations more > consistent and avoid similar problems in the future. > Notably, this would break the {{unix_timestamp}} UDF that specifies the > result is with respect to ["the default timezone and default > locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]. 
> That function would need to be updated to use the > {{System.getProperty("user.timezone")}} zone. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
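The DST gap quoted in HIVE-12192 can be reproduced outside Hive with plain `java.sql.Timestamp`. This sketch assumes the JDK's bundled tz data for America/Los_Angeles, where local times between 02:00 and 03:00 on 2015-03-08 do not exist; lenient calendar resolution pushes the nonexistent wall-clock time forward an hour, while a UTC default zone has no gap.

```java
import java.sql.Timestamp;
import java.util.TimeZone;

// Demonstrates the behavior described in the issue: a TIMESTAMP literal that
// falls in a DST gap is silently shifted when the default zone observes DST.
public class TimestampDstGap {
    public static void main(String[] args) {
        TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));
        Timestamp ts = Timestamp.valueOf("2015-03-08 02:10:00.101");
        System.out.println(ts); // 2015-03-08 03:10:00.101 -- the skipped hour

        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
        Timestamp utc = Timestamp.valueOf("2015-03-08 02:10:00.101");
        System.out.println(utc); // 2015-03-08 02:10:00.101 -- value preserved
    }
}
```

This is exactly the motivation for carrying out timestamp computations in UTC: every wall-clock value exists there, so field accessors like `getYear()` stay correct without DST artifacts.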
[jira] [Commented] (HIVE-19833) reduce LLAP IO min allocation to match ORC variable CB size
[ https://issues.apache.org/jira/browse/HIVE-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506489#comment-16506489 ] Gopal V commented on HIVE-19833: LGTM - +1 Since this is a default config change, I'll add the 4kb config to the next nightly run. > reduce LLAP IO min allocation to match ORC variable CB size > --- > > Key: HIVE-19833 > URL: https://issues.apache.org/jira/browse/HIVE-19833 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19833.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19833) reduce LLAP IO min allocation to match ORC variable CB size
[ https://issues.apache.org/jira/browse/HIVE-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19833: Status: Patch Available (was: Open) [~gopalv] [~prasanth_j] can you take a look? thnx > reduce LLAP IO min allocation to match ORC variable CB size > --- > > Key: HIVE-19833 > URL: https://issues.apache.org/jira/browse/HIVE-19833 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19833.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19833) reduce LLAP IO min allocation to match ORC variable CB size
[ https://issues.apache.org/jira/browse/HIVE-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19833: Attachment: HIVE-19833.patch > reduce LLAP IO min allocation to match ORC variable CB size > --- > > Key: HIVE-19833 > URL: https://issues.apache.org/jira/browse/HIVE-19833 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19833.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19833) reduce LLAP IO min allocation to match ORC variable CB size
[ https://issues.apache.org/jira/browse/HIVE-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-19833: --- Assignee: Sergey Shelukhin > reduce LLAP IO min allocation to match ORC variable CB size > --- > > Key: HIVE-19833 > URL: https://issues.apache.org/jira/browse/HIVE-19833 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19833.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19782) Flash out TestObjectStore.testDirectSQLDropParitionsCleanup
[ https://issues.apache.org/jira/browse/HIVE-19782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506484#comment-16506484 ] Hive QA commented on HIVE-19782: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 37s{color} | {color:blue} standalone-metastore in master has 216 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 50s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11630/dev-support/hive-personality.sh | | git revision | master / 913baef | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11630/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Flash out TestObjectStore.testDirectSQLDropParitionsCleanup > --- > > Key: HIVE-19782 > URL: https://issues.apache.org/jira/browse/HIVE-19782 > Project: Hive > Issue Type: Test > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19782.patch > > > {{TestObjectStore.testDirectSQLDropParitionsCleanup}} checks that the tables > are empty after the drop. We should add some rows to every partition related > table, to see that they are really cleaned up -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (HIVE-11105) NegativeArraySizeException from org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase
[ https://issues.apache.org/jira/browse/HIVE-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HIVE-11105: Comment: was deleted (was: [~jesmith3] Agree, so I think we can safely say it's been fixed now in Hive 3.0.0 [1] which was released last month. [1] https://github.com/apache/hive/blob/rel/release-3.0.0/pom.xml#L149) > NegativeArraySizeException from > org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase > - > > Key: HIVE-11105 > URL: https://issues.apache.org/jira/browse/HIVE-11105 > Project: Hive > Issue Type: Bug >Reporter: Priyesh Raj >Priority: Major > Fix For: 3.0.0 > > > I am getting the exception while running a query on very large data set. The > issue is coming in Hive, however my understanding is it's a hadoop > setCapacity function problem. The variable definition is integer and it is > not able to handle such a large count. > Please look into it. > {code} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227) > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594) > at 
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1099) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1138) > ... 13 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:336) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1064) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1082) > ... 14 more > Caused by: java.lang.NegativeArraySizeException > at > org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:144) > at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:123) > at org.apache.hadoop.io.BytesWritable.set(BytesWritable.java:171) > at > org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:213) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:456) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:316) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-11105) NegativeArraySizeException from org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase
[ https://issues.apache.org/jira/browse/HIVE-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson resolved HIVE-11105. - Resolution: Fixed Resolving this as the Hadoop version was updated from 2.7.2 [1] in Hive 2.3.3 to 3.1.0 [2] in Hive 3.0.0 (HADOOP-11901 was fixed in Hadoop 2.8.0). [1] https://github.com/apache/hive/blob/rel/release-2.3.3/pom.xml#L141 [2] https://github.com/apache/hive/blob/rel/release-3.0.0/pom.xml#L149 > NegativeArraySizeException from > org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase > - > > Key: HIVE-11105 > URL: https://issues.apache.org/jira/browse/HIVE-11105 > Project: Hive > Issue Type: Bug >Reporter: Priyesh Raj >Priority: Major > Fix For: 3.0.0 > > > I am getting the exception while running a query on very large data set. The > issue is coming in Hive, however my understanding is it's a hadoop > setCapacity function problem. The variable definition is integer and it is > not able to handle such a large count. > Please look into it. 
> {code} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227) > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1099) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1138) > ... 13 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:336) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1064) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1082) > ... 
14 more > Caused by: java.lang.NegativeArraySizeException > at > org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:144) > at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:123) > at org.apache.hadoop.io.BytesWritable.set(BytesWritable.java:171) > at > org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:213) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:456) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:316) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
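The root cause resolved above (via HADOOP-11901) is a 32-bit overflow in the growth computation used by `BytesWritable.setSize`, which in affected Hadoop versions grew the backing array to `size * 3 / 2`. The sketch below reproduces just that arithmetic; the method name `grownCapacity` is illustrative, not Hadoop's.

```java
// Reproduces the int overflow behind the NegativeArraySizeException above.
// For sizes >= 715_827_883 bytes (~0.7 GB), size * 3 exceeds Integer.MAX_VALUE,
// wraps negative, and new byte[negative] throws NegativeArraySizeException.
public class BytesWritableOverflowDemo {
    static int grownCapacity(int size) {
        return size * 3 / 2; // the pre-HADOOP-11901 growth formula
    }

    public static void main(String[] args) {
        System.out.println(grownCapacity(100));         // 150: normal growth
        System.out.println(grownCapacity(800_000_000)); // negative: allocation fails
    }
}
```

This is why upgrading Hive's Hadoop dependency past 2.8.0 fixed the report without any change in Hive itself.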
[jira] [Updated] (HIVE-11105) NegativeArraySizeException from org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase
[ https://issues.apache.org/jira/browse/HIVE-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Olson updated HIVE-11105: Fix Version/s: 3.0.0 > NegativeArraySizeException from > org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase > - > > Key: HIVE-11105 > URL: https://issues.apache.org/jira/browse/HIVE-11105 > Project: Hive > Issue Type: Bug >Reporter: Priyesh Raj >Priority: Major > Fix For: 3.0.0 > > > I am getting the exception while running a query on very large data set. The > issue is coming in Hive, however my understanding is it's a hadoop > setCapacity function problem. The variable definition is integer and it is > not able to handle such a large count. > Please look into it. > {code} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227) > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > 
org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1099) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1138) > ... 13 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:336) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1064) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1082) > ... 14 more > Caused by: java.lang.NegativeArraySizeException > at > org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:144) > at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:123) > at org.apache.hadoop.io.BytesWritable.set(BytesWritable.java:171) > at > org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:213) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:456) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:316) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-11105) NegativeArraySizeException from org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase
[ https://issues.apache.org/jira/browse/HIVE-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506481#comment-16506481 ] Andrew Olson commented on HIVE-11105: - [~jesmith3] Agree, so I think we can safely say it's been fixed now in Hive 3.0.0 [1] which was released last month. [1] https://github.com/apache/hive/blob/rel/release-3.0.0/pom.xml#L149 > NegativeArraySizeException from > org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase > - > > Key: HIVE-11105 > URL: https://issues.apache.org/jira/browse/HIVE-11105 > Project: Hive > Issue Type: Bug >Reporter: Priyesh Raj >Priority: Major > > I am getting the exception while running a query on very large data set. The > issue is coming in Hive, however my understanding is it's a hadoop > setCapacity function problem. The variable definition is integer and it is > not able to handle such a large count. > Please look into it. > {code} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1141) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:577) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588) > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:227) > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594) > at 
org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1099) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1138) > ... 13 more > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NegativeArraySizeException > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:336) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:793) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1064) > at > org.apache.hadoop.hive.ql.exec.GroupByOperator.flush(GroupByOperator.java:1082) > ... 14 more > Caused by: java.lang.NegativeArraySizeException > at > org.apache.hadoop.io.BytesWritable.setCapacity(BytesWritable.java:144) > at org.apache.hadoop.io.BytesWritable.setSize(BytesWritable.java:123) > at org.apache.hadoop.io.BytesWritable.set(BytesWritable.java:171) > at > org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:213) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:456) > at > org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:316) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
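The quoted report points at the root cause that was later fixed upstream (HADOOP-11901): the buffer-growth arithmetic in BytesWritable is done in 32-bit int, so once a serialized buffer grows past roughly 715 MB the intermediate product overflows and new byte[...] receives a negative size. The sketch below is illustrative, not the exact Hadoop source; the size * 3 / 2 growth formula mirrors the old idiom the stack trace runs through.

```java
public class CapacityOverflowDemo {
    // Old-style growth: computed entirely in int, as in pre-fix BytesWritable.setSize.
    static int growInt(int size) {
        return size * 3 / 2; // size * 3 wraps past Integer.MAX_VALUE for large buffers
    }

    // Fixed-style growth: widen to long first, then clamp to a valid array size.
    static int growSafe(int size) {
        return (int) Math.min((long) size * 3 / 2, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        int size = 800_000_000;             // ~763 MB serialized value buffer
        System.out.println(growInt(size));  // negative -> NegativeArraySizeException in new byte[...]
        System.out.println(growSafe(size)); // 1200000000, a valid array size
    }
}
```

Any Hadoop release that widens this arithmetic to long (as Hive 3.0.0's dependency does) avoids the crash.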
[jira] [Commented] (HIVE-19810) StorageHandler fail to ship jars in Tez intermittently
[ https://issues.apache.org/jira/browse/HIVE-19810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506477#comment-16506477 ] Daniel Dai commented on HIVE-19810: --- ptest is testing the wrong patch. Retriggering. > StorageHandler fail to ship jars in Tez intermittently > -- > > Key: HIVE-19810 > URL: https://issues.apache.org/jira/browse/HIVE-19810 > Project: Hive > Issue Type: Bug > Components: Tez >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19810.1.patch, HIVE-19810.2.patch, testcase.patch > > > Hive relies on StorageHandler to ship jars to backend automatically in > several cases: JdbcStorageHandler, HBaseStorageHandler, > AccumuloStorageHandler. This does not work reliably: in particular, the first > dag in the session will have those jars, but the second will not unless the container > is reused. In the latter case, the containers allocated to the first dag will be > reused in the second dag, so the container will have the additional resources. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19810) StorageHandler fail to ship jars in Tez intermittently
[ https://issues.apache.org/jira/browse/HIVE-19810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19810: -- Attachment: HIVE-19810.2.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-11105) NegativeArraySizeException from org.apache.hadoop.io.BytesWritable.setCapacity during serialization phase
[ https://issues.apache.org/jira/browse/HIVE-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506476#comment-16506476 ] Joseph Smith commented on HIVE-11105: - Looks like a duplicate of HADOOP-11901 and MAPREDUCE-21 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19340) Disable timeout of transactions opened by replication task at target cluster
[ https://issues.apache.org/jira/browse/HIVE-19340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506464#comment-16506464 ] Hive QA commented on HIVE-19340: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926829/HIVE-19340.06-branch-3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11628/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11628/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11628/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-08 19:11:15.649 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-11628/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z branch-3 ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-08 19:11:15.652 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 913baef HIVE-19053: RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors (Aihua Xu, reviewed by Sahil Takiar) + git clean -f -d + git checkout branch-3 Switched to branch 'branch-3' Your branch is behind 'origin/branch-3' by 3 commits, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/branch-3 HEAD is now at dc85ea8 HIVE-19723 : Arrow serde: "Unsupported data type: Timestamp(NANOSECOND, null)" (Teddy Choi via Matt McCline) + git merge --ff-only origin/branch-3 Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-08 19:11:16.615 + rm -rf ../yetus_PreCommit-HIVE-Build-11628 + mkdir ../yetus_PreCommit-HIVE-Build-11628 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-11628 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11628/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/test/org/apache/hadoop/hive/metastore/txn/TestTxnHandler.java: does not exist in index error: a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnDbUtil.java: does not exist in index error: a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java: does not exist in index error: a/standalone-metastore/src/main/sql/derby/hive-schema-3.1.0.derby.sql: does not exist in index error: a/standalone-metastore/src/main/sql/derby/upgrade-3.0.0-to-3.1.0.derby.sql: does not exist in index error: a/standalone-metastore/src/main/sql/mssql/hive-schema-3.1.0.mssql.sql: does not exist in index 
error: a/standalone-metastore/src/main/sql/mssql/upgrade-3.0.0-to-3.1.0.mssql.sql: does not exist in index error: a/standalone-metastore/src/main/sql/mysql/hive-schema-3.1.0.mysql.sql: does not exist in index error: a/standalone-metastore/src/main/sql/mysql/upgrade-3.0.0-to-3.1.0.mysql.sql: does not exist in index error: a/standalone-metastore/src/main/sql/oracle/hive-schema-3.1.0.oracle.sql: does not exist in index error: a/standalone-metastore/src/main/sql/oracle/upgrade-3.0.0-to-3.1.0.oracle.sql: does not exist in index error: a/standalone-metastore/src/main/sql/postgres/hive-schema-3.1.0.postgres.sql: does not exist in index error: a/standalone-metastore/src/main/sql/postgres/upgrade-3.0.0-to-3.1.0.postgres.sql: does not exist in index error: patch failed: standalone-metastore/src/main/sql/derby/upgrade-3.0.0-to-3.1.0.derby.sql:24 Falling back to three-way merge... Applied patch to 'standalone-metastore/src/main/sql/derby/upgrade-3.0.0-to-3.1.0.derby.sql' with conflicts. error: patch failed: standalone-metastore/src/main/sql/mssql/upgrade-3.0.0-to-3.1.0.mssql.sql:25 Falling back to three-way merge... Applied patch to
[jira] [Commented] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506458#comment-16506458 ] Hive QA commented on HIVE-12192: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12927004/HIVE-12192.11.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11626/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11626/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11626/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-08 19:10:00.755 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-11626/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-08 19:10:00.758 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive a0c465d..913baef master -> origin/master f6c8c12..dc85ea8 branch-3 -> origin/branch-3 + git reset --hard HEAD HEAD is now at a0c465d HIVE-19817: Hive streaming API + dynamic partitioning + json/regex writer does not work (Prasanth Jayachandran reviewed by Ashutosh Chauhan) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 3 commits, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 913baef HIVE-19053: RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors (Aihua Xu, reviewed by Sahil Takiar) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-08 19:10:02.359 + rm -rf ../yetus_PreCommit-HIVE-Build-11626 + mkdir ../yetus_PreCommit-HIVE-Build-11626 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-11626 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11626/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch fatal: git diff header lacks filename information when removing 0 leading pathname components (line 1050) error: patch failed: ql/src/test/org/apache/hadoop/hive/ql/io/arrow/TestArrowColumnarBatchSerDe.java:26 Falling back to three-way merge... Applied patch to 'ql/src/test/org/apache/hadoop/hive/ql/io/arrow/TestArrowColumnarBatchSerDe.java' with conflicts. error: patch failed: ql/src/test/results/clientpositive/confirm_initial_tbl_stats.q.out:272 Falling back to three-way merge... 
Applied patch to 'ql/src/test/results/clientpositive/confirm_initial_tbl_stats.q.out' cleanly. Going to apply patch with: git apply -p1 /data/hiveptest/working/scratch/build.patch:328: trailing whitespace. /data/hiveptest/working/scratch/build.patch:40406: trailing whitespace. min -28830 /data/hiveptest/working/scratch/build.patch:40407: trailing whitespace. max -28769 /data/hiveptest/working/scratch/build.patch:41014: trailing whitespace. totalSize 295599 /data/hiveptest/working/scratch/build.patch:41035: trailing whitespace. totalSize 1614 error: patch failed: ql/src/test/org/apache/hadoop/hive/ql/io/arrow/TestArrowColumnarBatchSerDe.java:26 Falling back to three-way merge... Applied patch to 'ql/src/test/org/apache/hadoop/hive/ql/io/arrow/TestArrowColumnarBatchSerDe.java' with conflicts. error: patch failed: ql/src/test/results/clientpositive/confirm_initial_tbl_stats.q.out:272 Falling back to three-way
[jira] [Commented] (HIVE-19602) Refactor inplace progress code in Hive-on-spark progress monitor to use ProgressMonitor instance
[ https://issues.apache.org/jira/browse/HIVE-19602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506457#comment-16506457 ] Hive QA commented on HIVE-19602: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12927068/HIVE-19602.5.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14511 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11625/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11625/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11625/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12927068 - PreCommit-HIVE-Build > Refactor inplace progress code in Hive-on-spark progress monitor to use > ProgressMonitor instance > > > Key: HIVE-19602 > URL: https://issues.apache.org/jira/browse/HIVE-19602 > Project: Hive > Issue Type: Bug >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19602.3.patch, HIVE-19602.4.patch, > HIVE-19602.5.patch > > > We can refactor the HOS inplace progress monitor code > (SparkJobMonitor#printStatusInPlace) to use InplaceUpdate#render. > We can create an instance of ProgressMonitor and use it to show the progress. > This would be similar to : > [https://github.com/apache/hive/blob/0b6bea89f74b607299ad944b37e4b62c711aaa69/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java#L181] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
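The refactor this ticket describes separates progress state from progress printing: the monitor exposes per-stage counters through a ProgressMonitor-style interface, and a single renderer (as in the linked Tez RenderStrategy) turns any such monitor into the in-place display. A minimal sketch with illustrative names, not the actual Hive interfaces:

```java
import java.util.Arrays;
import java.util.List;

public class ProgressRenderSketch {
    // Illustrative stand-in for Hive's ProgressMonitor abstraction.
    interface ProgressMonitor {
        List<String> headers();
        List<List<String>> rows();   // one row per Spark stage
        String footerSummary();      // e.g. "STAGES: 01/02"
    }

    // One renderer can then serve both the Tez and Hive-on-Spark monitors.
    static String render(ProgressMonitor m) {
        StringBuilder out = new StringBuilder(String.join("  ", m.headers())).append('\n');
        for (List<String> row : m.rows()) {
            out.append(String.join("  ", row)).append('\n');
        }
        return out.append(m.footerSummary()).toString();
    }

    public static void main(String[] args) {
        ProgressMonitor spark = new ProgressMonitor() {
            public List<String> headers() { return Arrays.asList("STAGE", "DONE", "TOTAL"); }
            public List<List<String>> rows() {
                return Arrays.asList(Arrays.asList("Stage-0", "10", "10"),
                                     Arrays.asList("Stage-1", "4", "10"));
            }
            public String footerSummary() { return "STAGES: 01/02"; }
        };
        System.out.println(render(spark));
    }
}
```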
[jira] [Updated] (HIVE-19053) RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors
[ https://issues.apache.org/jira/browse/HIVE-19053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-19053: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~stakiar] for reviewing the code. > RemoteSparkJobStatus#getSparkJobInfo treats all exceptions as timeout errors > > > Key: HIVE-19053 > URL: https://issues.apache.org/jira/browse/HIVE-19053 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Aihua Xu >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-19053.1.patch, HIVE-19053.2.patch > > > {code} > Future getJobInfo = sparkClient.run( > new GetJobInfoJob(jobHandle.getClientJobId(), sparkJobId)); > try { > return getJobInfo.get(sparkClientTimeoutInSeconds, TimeUnit.SECONDS); > } catch (Exception e) { > LOG.warn("Failed to get job info.", e); > throw new HiveException(e, ErrorMsg.SPARK_GET_JOB_INFO_TIMEOUT, > Long.toString(sparkClientTimeoutInSeconds)); > } > {code} > It should only throw {{ErrorMsg.SPARK_GET_JOB_INFO_TIMEOUT}} if a > {{TimeoutException}} is thrown. Other exceptions should be handled > independently. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
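The fix direction the issue describes can be sketched as catching TimeoutException separately from other failures, so only a genuine timeout maps to the timeout error message. This is an illustrative sketch, not the committed Hive patch; the method name and error strings are stand-ins for the real ErrorMsg handling.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class JobInfoFetcher {
    // Only TimeoutException maps to the "timed out" error; execution failures
    // and interrupts surface with their own cause instead.
    static String getJobInfo(Future<String> pending, long timeoutSeconds) {
        try {
            return pending.get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            return "ERROR: timed out after " + timeoutSeconds + "s";
        } catch (InterruptedException | ExecutionException e) {
            return "ERROR: job info fetch failed: " + e;
        }
    }

    public static void main(String[] args) {
        // A completed future returns normally; an unfinished one with a
        // zero-second budget takes the timeout branch immediately.
        System.out.println(getJobInfo(CompletableFuture.completedFuture("RUNNING 3/10"), 5));
        System.out.println(getJobInfo(new CompletableFuture<>(), 0));
    }
}
```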
[jira] [Updated] (HIVE-19834) Clear Context Map of Paths to ContentSummary
[ https://issues.apache.org/jira/browse/HIVE-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-19834: --- Status: Patch Available (was: Open) > Clear Context Map of Paths to ContentSummary > > > Key: HIVE-19834 > URL: https://issues.apache.org/jira/browse/HIVE-19834 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 2.3.2, 3.0.0, 4.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HIVE-19834.1.patch > > > The {{Context}} class has a {{clear}} method which is called. During the > method, various files are deleted and in-memory maps are cleared. I would > like to propose that we clear out an additional in-memory map structure that > may contain a lot of data so that it can be GC'ed asap. This map contains > mapping of "File Path"->"Content Summary". For a query with a large file > set, this can be quite large. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
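The proposal amounts to one extra clear() call on the cached path-to-summary map during context teardown. A hedged sketch of the idea follows; the class and field names are illustrative, and the real Hive Context caches ContentSummary objects, represented here by plain sizes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ContextSketch {
    // Stand-in for Hive's Context: caches a per-path content summary during planning.
    private final Map<String, Long> pathToContentSummary = new ConcurrentHashMap<>();

    void cacheSummary(String path, long totalLength) {
        pathToContentSummary.put(path, totalLength);
    }

    // Teardown: alongside deleting scratch files and clearing the other maps,
    // also drop the summary cache so its entries become GC-eligible promptly.
    void clear() {
        pathToContentSummary.clear();
    }

    int cachedEntries() {
        return pathToContentSummary.size();
    }

    public static void main(String[] args) {
        ContextSketch ctx = new ContextSketch();
        ctx.cacheSummary("/warehouse/t1/part-00000", 128L << 20);
        ctx.cacheSummary("/warehouse/t1/part-00001", 64L << 20);
        ctx.clear();
        System.out.println(ctx.cachedEntries()); // 0 after teardown
    }
}
```

For a query touching a large file set, each cached entry would otherwise stay reachable until the whole Context is collected.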
[jira] [Assigned] (HIVE-19834) Clear Context Map of Paths to ContentSummary
[ https://issues.apache.org/jira/browse/HIVE-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR reassigned HIVE-19834: -- Assignee: BELUGA BEHR -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19834) Clear Context Map of Paths to ContentSummary
[ https://issues.apache.org/jira/browse/HIVE-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-19834: --- Attachment: HIVE-19834.1.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19723) Arrow serde: "Unsupported data type: Timestamp(NANOSECOND, null)"
[ https://issues.apache.org/jira/browse/HIVE-19723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19723: Resolution: Fixed Fix Version/s: (was: 4.0.0) Status: Resolved (was: Patch Available) Pushed to master and branch-3. Thanks, Teddy! > Arrow serde: "Unsupported data type: Timestamp(NANOSECOND, null)" > - > > Key: HIVE-19723 > URL: https://issues.apache.org/jira/browse/HIVE-19723 > Project: Hive > Issue Type: Bug >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-19723.1.patch, HIVE-19723.3.patch, > HIVE-19723.4.patch, HIVE-19732.2.patch > > > Spark's Arrow support only provides Timestamp at MICROSECOND granularity. > Spark 2.3.0 won't accept NANOSECOND. Switch it back to MICROSECOND. > The unit test org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow will just need > to change the assertion to test microsecond. And we'll need to add this to > documentation on supported datatypes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19411) Full-ACID table stats may not be valid
[ https://issues.apache.org/jira/browse/HIVE-19411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom updated HIVE-19411: -- Fix Version/s: (was: 0.10.1) 3.1.0 > Full-ACID table stats may not be valid > -- > > Key: HIVE-19411 > URL: https://issues.apache.org/jira/browse/HIVE-19411 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > > One case, per Sergey, is that updating a row can end up adding +2 rows instead of > +0, > since the update is translated to a delete and an insert and the physical writer > may just add up the # of operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19602) Refactor inplace progress code in Hive-on-spark progress monitor to use ProgressMonitor instance
[ https://issues.apache.org/jira/browse/HIVE-19602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506422#comment-16506422 ] Hive QA commented on HIVE-19602: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 28s{color} | {color:blue} ql in master has 2284 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} ql: The patch generated 0 new + 3 unchanged - 2 fixed = 3 total (was 5) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 42s{color} | {color:green} ql generated 0 new + 2283 unchanged - 1 fixed = 2283 total (was 2284) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11625/dev-support/hive-personality.sh | | git revision | master / a0c465d | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11625/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19808) GenericUDTFGetSplits should support ACID reads in the temp. table read path
[ https://issues.apache.org/jira/browse/HIVE-19808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19808: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Pushed to master and branch-3. Thanks, Eric! > GenericUDTFGetSplits should support ACID reads in the temp. table read path > --- > > Key: HIVE-19808 > URL: https://issues.apache.org/jira/browse/HIVE-19808 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19808.1.patch, HIVE-19808.2.patch > > > 1. Map-only reads work on ACID tables. > 2. Temp. table reads (for multi-vertex queries) work on non-ACID tables. > 3. But temp. table reads don't work on ACID tables. > {code} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create > temp table: java.lang.IllegalStateException: calling recordValidTxn() more > than once in the same txnid:420 > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.createPlanFragment(GenericUDTFGetSplits.java:303) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:202) > at > org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116) > at > org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:985) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:931) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:918) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) > at > org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:985) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:931) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:492) > at > 
org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:484) > at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:145) > ... 16 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19829) Incremental replication load should create tasks in execution phase rather than semantic phase
[ https://issues.apache.org/jira/browse/HIVE-19829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-19829: -- Labels: pull-request-available (was: ) > Incremental replication load should create tasks in execution phase rather > than semantic phase > -- > > Key: HIVE-19829 > URL: https://issues.apache.org/jira/browse/HIVE-19829 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19829.01.patch > > > Split the incremental load into multiple iterations. In each iteration, create a > number of tasks equal to the configured value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19829) Incremental replication load should create tasks in execution phase rather than semantic phase
[ https://issues.apache.org/jira/browse/HIVE-19829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506413#comment-16506413 ] mahesh kumar behera commented on HIVE-19829: [~sankarh] Please review the patch > Incremental replication load should create tasks in execution phase rather > than semantic phase > -- > > Key: HIVE-19829 > URL: https://issues.apache.org/jira/browse/HIVE-19829 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19829.01.patch > > > Split the incremental load into multiple iterations. In each iteration, create a > number of tasks equal to the configured value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19829) Incremental replication load should create tasks in execution phase rather than semantic phase
[ https://issues.apache.org/jira/browse/HIVE-19829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506412#comment-16506412 ] ASF GitHub Bot commented on HIVE-19829: --- GitHub user maheshk114 opened a pull request: https://github.com/apache/hive/pull/370 HIVE-19829 : Incremental replication load should create tasks in execution phase rather than semantic phase … You can merge this pull request into a Git repository by running: $ git pull https://github.com/maheshk114/hive BUG-85371 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/370.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #370 commit 9f1667c77c6e3488208876a5cdf4f11e5718db87 Author: Mahesh Kumar Behera Date: 2018-06-07T11:24:59Z HIVE-19829 : Incremental replication load should create tasks in execution phase rather than semantic phase > Incremental replication load should create tasks in execution phase rather > than semantic phase > -- > > Key: HIVE-19829 > URL: https://issues.apache.org/jira/browse/HIVE-19829 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19829.01.patch > > > Split the incremental load into multiple iterations. In each iteration, create a > number of tasks equal to the configured value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
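The batching scheme described in HIVE-19829 — split the incremental load into iterations, creating at most a configured number of tasks per iteration — can be sketched as follows; the class name and generic event type are illustrative, not the actual repl-load code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a list of replication events into iterations,
// creating at most 'maxTasks' tasks per iteration (the configured value).
class IncrementalLoadBatcher {
    static <T> List<List<T>> batch(List<T> events, int maxTasks) {
        if (maxTasks <= 0) {
            throw new IllegalArgumentException("maxTasks must be positive");
        }
        List<List<T>> iterations = new ArrayList<>();
        for (int i = 0; i < events.size(); i += maxTasks) {
            // each inner list corresponds to one execution-phase iteration
            iterations.add(new ArrayList<>(
                events.subList(i, Math.min(i + maxTasks, events.size()))));
        }
        return iterations;
    }
}
```

Deferring task creation to the execution phase, iteration by iteration, bounds the number of task objects alive at once instead of materializing the whole plan during semantic analysis.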
[jira] [Updated] (HIVE-19812) Disable external table replication by default via a configuration property
[ https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19812: --- Attachment: HIVE-19812.02.patch > Disable external table replication by default via a configuration property > -- > > Key: HIVE-19812 > URL: https://issues.apache.org/jira/browse/HIVE-19812 > Project: Hive > Issue Type: Task > Components: repl >Affects Versions: 3.1.0, 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch > > > Use a Hive config property to allow external table replication. Set this > property to false by default to prevent external table replication. > For metadata-only replication, Hive repl always exports metadata for external tables. > > REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false, > "Indicates if repl dump should include information about external tables. It > should be \n" > + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if > 'hive.repl.dump.metadata.only' \n" > + " is set to true then this config parameter has no effect as external table > meta data is flushed \n" > + " always by default.") > This should be done only for replication dump and not for export -- This message was sent by Atlassian JIRA (v7.6.3#76005)
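The interaction between the two flags quoted above reduces to a small predicate: external table metadata is dumped when hive.repl.dump.metadata.only is true (metadata is always flushed in that mode, so the new flag has no effect), or when hive.repl.dump.include.external.tables is enabled. A sketch of that documented semantics — the class and method names are hypothetical, not Hive's actual code:

```java
// Sketch of the documented semantics of hive.repl.dump.include.external.tables:
// when hive.repl.dump.metadata.only is true, external table metadata is always
// flushed and the new flag has no effect; otherwise the new flag decides.
class ReplDumpPolicy {
    static boolean shouldDumpExternalTable(boolean metadataOnly,
                                           boolean includeExternalTables) {
        return metadataOnly || includeExternalTables;
    }
}
```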
[jira] [Updated] (HIVE-19750) Initialize NEXT_WRITE_ID. NWI_NEXT on converting an existing table to full acid
[ https://issues.apache.org/jira/browse/HIVE-19750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19750: Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to branch-3. > Initialize NEXT_WRITE_ID. NWI_NEXT on converting an existing table to full > acid > --- > > Key: HIVE-19750 > URL: https://issues.apache.org/jira/browse/HIVE-19750 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 3.1.0 > > Attachments: HIVE-19750.01-branch-3.patch, HIVE-19750.01.patch, > HIVE-19750.02.patch, HIVE-19750.03.patch > > > Need to set this to a reasonably high value for the table. > This will reserve a range of write IDs that will be treated by the system as > committed. > This is needed so that we can assign unique ROW__IDs to each row in files > that already exist in the table. For example, if the value is initialized to > the number of files currently in the table, we can think of each file as > written by a separate transaction and thus are free to assign bucketProperty > (BucketCodec) of ROW_ID in whichever way is convenient. > It's guaranteed that all rows get unique ROW_IDs this way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
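The uniqueness argument in HIVE-19750 can be illustrated with a small sketch: if NWI_NEXT reserves one write ID per pre-existing file, numbering rows within each file yields globally unique (writeId, rowId) pairs. The names below are illustrative, not Hive's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: treat each pre-existing file as written by its own
// "transaction" by giving it a distinct write ID from the reserved range,
// then number rows within the file. The resulting (writeId, rowId) pairs
// are unique across the whole table.
class SyntheticRowIds {
    static List<long[]> assign(int[] rowsPerFile) {
        List<long[]> rowIds = new ArrayList<>();
        for (int file = 0; file < rowsPerFile.length; file++) {
            long writeId = file + 1; // write IDs 1..N reserved as committed
            for (int row = 0; row < rowsPerFile[file]; row++) {
                rowIds.add(new long[]{writeId, row});
            }
        }
        return rowIds;
    }
}
```

Because no two files share a write ID and row numbers restart per file, no pair can collide — which is the guarantee the description relies on.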
[jira] [Commented] (HIVE-19750) Initialize NEXT_WRITE_ID. NWI_NEXT on converting an existing table to full acid
[ https://issues.apache.org/jira/browse/HIVE-19750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506338#comment-16506338 ] Hive QA commented on HIVE-19750: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926783/HIVE-19750.01-branch-3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11623/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11623/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11623/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-08 18:00:20.895 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-11623/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z branch-3 ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-08 18:00:20.897 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 13960aa..a0c465d master -> origin/master cf492a8..f6c8c12 branch-3 -> origin/branch-3 acfd209..1c6d946 branch-3.0 -> origin/branch-3.0 + git reset --hard HEAD HEAD is now at 13960aa HIVE-18079 : Statistics: Allow HyperLogLog to be merged to the lowest-common-denominator bit-size (Gopal V via Prasanth J) + git clean -f -d + git checkout branch-3 Switched to branch 'branch-3' Your branch is behind 'origin/branch-3' by 8 commits, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/branch-3 HEAD is now at f6c8c12 HIVE-19817: Hive streaming API + dynamic partitioning + json/regex writer does not work (Prasanth Jayachandran reviewed by Ashutosh Chauhan) + git merge --ff-only origin/branch-3 Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-08 18:00:23.190 + rm -rf ../yetus_PreCommit-HIVE-Build-11623 + mkdir ../yetus_PreCommit-HIVE-Build-11623 + git gc + cp -R . 
../yetus_PreCommit-HIVE-Build-11623 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11623/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch Going to apply patch with: git apply -p0 + [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: executing: [/tmp/protoc7367033663309093566.exe, --version] libprotoc 2.5.0 protoc-jar: executing: [/tmp/protoc7367033663309093566.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g [ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (process-resource-bundles) on project hive-shims: Execution process-resource-bundles of goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed. 
ConcurrentModificationException -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug
[jira] [Commented] (HIVE-19800) Create separate submodules for pre and post upgrade and add rename file logic
[ https://issues.apache.org/jira/browse/HIVE-19800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506339#comment-16506339 ] Hive QA commented on HIVE-19800: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926784/HIVE-19800.03.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11624/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11624/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11624/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12926784/HIVE-19800.03.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12926784 - PreCommit-HIVE-Build > Create separate submodules for pre and post upgrade and add rename file logic > - > > Key: HIVE-19800 > URL: https://issues.apache.org/jira/browse/HIVE-19800 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-19800.01.patch, HIVE-19800.02.patch, > HIVE-19800.03.patch > > > this is a followup to HIVE-19751 which includes HIVE-19751 since it hasn't > landed yet > this includes file rename logic and HIVE-19750 since it hasn't landed yet > either > > cc [~jdere] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19815) Repl dump should not propagate the checkpoint and repl source properties
[ https://issues.apache.org/jira/browse/HIVE-19815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506337#comment-16506337 ] Hive QA commented on HIVE-19815: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12926732/HIVE-19815.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14511 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11622/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11622/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11622/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12926732 - PreCommit-HIVE-Build > Repl dump should not propagate the checkpoint and repl source properties > > > Key: HIVE-19815 > URL: https://issues.apache.org/jira/browse/HIVE-19815 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.1.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19815.01.patch > > > For replication scenarios of A-> B -> C the repl dump on B should not include > the checkpoint property when dumping out table information. > Alter tables/partitions during incremental should not propagate this as well. > Also should not propagate the db level parameters set by replication > internally. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19817) Hive streaming API + dynamic partitioning + json/regex writer does not work
[ https://issues.apache.org/jira/browse/HIVE-19817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19817: - Resolution: Fixed Fix Version/s: 4.0.0 3.0.1 3.1.0 Status: Resolved (was: Patch Available) Committed to branch-3, branch-3.0 and master. Thanks for the review! > Hive streaming API + dynamic partitioning + json/regex writer does not work > --- > > Key: HIVE-19817 > URL: https://issues.apache.org/jira/browse/HIVE-19817 > Project: Hive > Issue Type: Bug > Components: Streaming, Transactions >Affects Versions: 3.1.0, 3.0.1, 4.0.0 >Reporter: Matt Burgess >Assignee: Prasanth Jayachandran >Priority: Critical > Fix For: 3.1.0, 3.0.1, 4.0.0 > > Attachments: HIVE-19817.1.patch > > > The new streaming API for dynamic partitioning only works with the delimited record > writer. The Json and Regex writers do not work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19824) Improve online datasize estimations for MapJoins
[ https://issues.apache.org/jira/browse/HIVE-19824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16506331#comment-16506331 ] Sergey Shelukhin commented on HIVE-19824: - For the vectorized case I thought it would almost always be replaced by specialized vectorized hash tables that have specific types... > Improve online datasize estimations for MapJoins > > > Key: HIVE-19824 > URL: https://issues.apache.org/jira/browse/HIVE-19824 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19824.01wip01.patch, HIVE-19824.01wip01.patch > > > Statistics.datasize() only accounts for "real" data size; but for example > handling 1M rows might introduce some datastructure overhead...if the "real" > data is small, even this overhead might become the real memory usage > for 6.5M rows of (int,int) the estimation is 52MB > in reality this eats up ~260MB, of which 210MB is used to service the > hashmap functionality for that many rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
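The figures in the description imply a per-row overhead worth making explicit. A quick arithmetic sketch using the reported numbers as-is (6.5M rows, 52MB raw, ~260MB resident, ~210MB for the hashmap); the class and method names are purely illustrative:

```java
// Back-of-the-envelope from the numbers reported in HIVE-19824:
// 6.5M (int,int) rows, 52MB of "real" data vs ~260MB resident,
// of which ~210MB services the hashmap. Per-row figures fall out directly.
class MapJoinSizing {
    static final long ROWS = 6_500_000L;
    static final long RAW_MB = 52, TOTAL_MB = 260, MAP_MB = 210;

    static long rawBytesPerRow()      { return RAW_MB   * 1_000_000 / ROWS; } // two 4-byte ints
    static long totalBytesPerRow()    { return TOTAL_MB * 1_000_000 / ROWS; }
    static long overheadBytesPerRow() { return MAP_MB   * 1_000_000 / ROWS; }
}
```

Roughly 8 bytes of payload against ~40 bytes resident per row — the data structure costs about 4x the raw payload, which is why Statistics.datasize() alone underestimates the online size of a MapJoin hash table.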