[jira] [Commented] (HIVE-18436) Upgrade to Spark 2.3.0
[ https://issues.apache.org/jira/browse/HIVE-18436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370683#comment-16370683 ] Hive QA commented on HIVE-18436: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911269/HIVE-18436.2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 13796 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] 
(batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] (batchId=250) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.TestMarkPartition.testMarkingPartitionSet (batchId=214) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9279/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9279/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9279/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase 
Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12911269 - PreCommit-HIVE-Build > Upgrade to Spark 2.3.0 > -- > > Key: HIVE-18436 > URL: https://issues.apache.org/jira/browse/HIVE-18436 > Project: Hive > Issue Type: Task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18436.1.patch, HIVE-18436.2.patch > > > Branching has been completed. Release candidates should be published soon. > Might be a while before the actual release, but at least we get to identify > any issues early. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18710) extend inheritPerms to ACID in Hive 2.X
[ https://issues.apache.org/jira/browse/HIVE-18710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370744#comment-16370744 ] Hive QA commented on HIVE-18710: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911275/HIVE-18710.01-branch-2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 10663 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=227) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explaindenpendencydiffengs] (batchId=38) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=142) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=139) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[table_nonprintable] (batchId=140) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_acid_non_acid] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr] (batchId=144) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_parquet_types] (batchId=155) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[merge_negative_5] (batchId=88) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[explaindenpendencydiffengs] (batchId=115) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=117) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_ptf] (batchId=125) org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=176) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9280/testReport Console output: 
https://builds.apache.org/job/PreCommit-HIVE-Build/9280/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9280/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12911275 - PreCommit-HIVE-Build > extend inheritPerms to ACID in Hive 2.X > --- > > Key: HIVE-18710 > URL: https://issues.apache.org/jira/browse/HIVE-18710 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18710-branch-2.patch, HIVE-18710.01-branch-2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18756: Attachment: HIVE-18756.01.patch > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18756: Status: Patch Available (was: Open) > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18259) Automatic cleanup of invalidation cache for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18259: --- Attachment: (was: HIVE-18259.02.patch) > Automatic cleanup of invalidation cache for materialized views > -- > > Key: HIVE-18259 > URL: https://issues.apache.org/jira/browse/HIVE-18259 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18259.01.patch, HIVE-18259.02.patch, > HIVE-18259.patch > > > HIVE-14498 introduces the invalidation cache for materialized views, which > keeps track of the transactions executed on a given table to infer whether > materialized view contents are outdated or not. > Currently, the cache keeps information of transactions in memory to guarantee > quick response time, i.e., quick resolution about the view freshness, at > query rewriting time. This information can grow large, so we would like to > run a thread that cleans useless transactions from the cache, i.e., > transactions that do not invalidate any materialized view in the system, at an > interval defined by a property. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
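The cleanup mechanism described above reduces to a periodic task that drops cached transaction records that no materialized view depends on. A minimal sketch of that pattern, with purely illustrative names (none of these classes or fields are Hive's actual API, and the interval would come from the configuration property the description mentions):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InvalidationCacheSketch {

    // Keep only transactions on tables that some materialized view reads;
    // transactions on other tables can never invalidate a view, so caching
    // them is useless.
    static Map<String, List<Long>> prune(Map<String, List<Long>> txnsByTable,
                                         Set<String> tablesUsedByViews) {
        Map<String, List<Long>> kept = new HashMap<>();
        for (Map.Entry<String, List<Long>> e : txnsByTable.entrySet()) {
            if (tablesUsedByViews.contains(e.getKey())) {
                kept.put(e.getKey(), e.getValue());
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, List<Long>> cache = new HashMap<>();
        cache.put("sales", Arrays.asList(10L, 11L));  // read by a view
        cache.put("tmp_staging", Arrays.asList(12L)); // read by no view
        Set<String> usedByViews = Collections.singleton("sales");

        // A background thread runs the cleanup at a configured interval.
        long intervalSec = 60L; // would come from a Hive config property
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
            () -> cache.keySet().retainAll(prune(cache, usedByViews).keySet()),
            intervalSec, intervalSec, TimeUnit.SECONDS);

        System.out.println(prune(cache, usedByViews).keySet()); // [sales]
        scheduler.shutdown();
    }
}
```

The real cache tracks richer per-transaction metadata rather than bare transaction ids; the point here is only the shape of the periodic prune.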
[jira] [Assigned] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-18756: --- > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18757) LLAP IO for text fails for empty files
[ https://issues.apache.org/jira/browse/HIVE-18757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18757: --- > LLAP IO for text fails for empty files > -- > > Key: HIVE-18757 > URL: https://issues.apache.org/jira/browse/HIVE-18757 > Project: Hive > Issue Type: Bug >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18757) LLAP IO for text fails for empty files
[ https://issues.apache.org/jira/browse/HIVE-18757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370913#comment-16370913 ] Hive QA commented on HIVE-18757: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} llap-server: The patch generated 1 new + 76 unchanged - 1 fixed = 77 total (was 77) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 3df6bc2 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9284/yetus/diff-checkstyle-llap-server.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9284/yetus/patch-asflicense-problems.txt | | modules | C: llap-server U: llap-server | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9284/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > LLAP IO for text fails for empty files > -- > > Key: HIVE-18757 > URL: https://issues.apache.org/jira/browse/HIVE-18757 > Project: Hive > Issue Type: Bug >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18757.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18744: Attachment: HIVE-18744.03.patch > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch > > > By assigning the input Timestamp object by reference, the key gets corrupted when the > input object is reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
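The bug pattern behind this issue is a missing defensive copy: java.sql.Timestamp is mutable, so storing the reader's reused object as a hash key lets later batches rewrite keys that were already collected. A minimal standalone illustration (hypothetical class, not Hive's actual VectorHashKeyWrapperBatch):

```java
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.List;

public class TimestampKeyDemo {

    // Buggy pattern: store the caller's (reused) Timestamp by reference.
    static List<Timestamp> collectByReference(long[] times) {
        Timestamp reused = new Timestamp(0L);
        List<Timestamp> keys = new ArrayList<>();
        for (long t : times) {
            reused.setTime(t); // a vectorized reader reuses one object
            keys.add(reused);  // every stored key aliases that same object
        }
        return keys;
    }

    // Fixed pattern: copy the value into a fresh object before storing it.
    static List<Timestamp> collectByValue(long[] times) {
        Timestamp reused = new Timestamp(0L);
        List<Timestamp> keys = new ArrayList<>();
        for (long t : times) {
            reused.setTime(t);
            keys.add(new Timestamp(reused.getTime())); // defensive copy
        }
        return keys;
    }

    public static void main(String[] args) {
        long[] times = {1000L, 2000L, 3000L};
        // By reference, all keys collapse to the last value written.
        System.out.println(collectByReference(times).get(0).getTime()); // 3000
        // By value, the first key keeps its own value.
        System.out.println(collectByValue(times).get(0).getTime());     // 1000
    }
}
```

The same corruption happens whenever a mutable object from a reused buffer is retained across batches without copying.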
[jira] [Updated] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18659: -- Attachment: HIVE-18659.07.patch > add acid version marker to acid files/directories > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18659.01.patch, HIVE-18659.04.patch, > HIVE-18659.05.patch, HIVE-18659.06.patch, HIVE-18659.07.patch > > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18259) Automatic cleanup of invalidation cache for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18259: --- Attachment: HIVE-18259.02.patch > Automatic cleanup of invalidation cache for materialized views > -- > > Key: HIVE-18259 > URL: https://issues.apache.org/jira/browse/HIVE-18259 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18259.01.patch, HIVE-18259.02.patch, > HIVE-18259.patch > > > HIVE-14498 introduces the invalidation cache for materialized views, which > keeps track of the transactions executed on a given table to infer whether > materialized view contents are outdated or not. > Currently, the cache keeps information of transactions in memory to guarantee > quick response time, i.e., quick resolution about the view freshness, at > query rewriting time. This information can grow large, so we would like to > run a thread that cleans useless transactions from the cache, i.e., > transactions that do not invalidate any materialized view in the system, at an > interval defined by a property. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-18754: --- Description: We have support for "WITH" clause in "REPL LOAD" command, but we don't have that for "REPL STATUS" command. With the cloud replication model, HiveServer2 is only running in the source on-prem cluster. "REPL LOAD"'s with clause is currently used to pass the remote cloud cluster's metastore URI, using "hive.metastore.uri" parameter. Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the next incremental replication should start from. Since "REPL STATUS" is also going to run on source cluster, we need to add support for the "WITH" clause for it. We should also change the privilege required for "REPL STATUS" command to what is required by "REPL LOAD" command as now arbitrary configs can be set for "REPL STATUS" using the WITH clause. was: We have support for "WITH" clause in "REPL LOAD" command, but we don't have that for "REPL STATUS" command. With the cloud replication model for DLM 1.1, HiveServer2 is only running in the source on-prem cluster. "REPL LOAD"'s with clause is currently used to pass the remote cloud cluster's metastore URI, using "hive.metastore.uri" parameter. Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the next incremental replication should start from. Since "REPL STATUS" is also going to run on source cluster, we need to add support for the "WITH" clause for it. We should also change the privilege required for "REPL STATUS" command to what is required by "REPL LOAD" command as now arbitrary configs can be set for "REPL STATUS" using the WITH clause. 
> REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model, HiveServer2 is only running in the source > on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud > cluster's metastore URI, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18436) Upgrade to Spark 2.3.0
[ https://issues.apache.org/jira/browse/HIVE-18436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370644#comment-16370644 ] Hive QA commented on HIVE-18436: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 7s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | 
{color:green} 2m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 44m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / e51f7c9 | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9279/yetus/patch-asflicense-problems.txt | | modules | C: . itests ql spark-client U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9279/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Upgrade to Spark 2.3.0 > -- > > Key: HIVE-18436 > URL: https://issues.apache.org/jira/browse/HIVE-18436 > Project: Hive > Issue Type: Task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18436.1.patch, HIVE-18436.2.patch > > > Branching has been completed. Release candidates should be published soon. 
> Might be a while before the actual release, but at least we get to identify > any issues early. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18744: Attachment: (was: HIVE-18744.03.patch) > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch > > > By assigning the input Timestamp object by reference, the key gets corrupted when the > input object is reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370736#comment-16370736 ] Deepak Jaiswal commented on HIVE-18744: --- +1 pending test analysis or rerun if related failures. > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch > > > By assigning the input Timestamp object by reference, the key gets corrupted when the > input object is reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18034) Improving logging when HoS executors spend lots of time in GC
[ https://issues.apache.org/jira/browse/HIVE-18034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18034: Attachment: HIVE-18034.3.patch > Improving logging when HoS executors spend lots of time in GC > - > > Key: HIVE-18034 > URL: https://issues.apache.org/jira/browse/HIVE-18034 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18034.1.patch, HIVE-18034.2.patch, > HIVE-18034.3.patch > > > There are times when Spark will spend lots of time doing GC. The Spark > History UI shows a bunch of red flags when too much time is spent in GC. It > would be nice if those warnings were propagated to Hive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
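The warning discussed here boils down to comparing an executor's reported GC time against its run time and flagging a high ratio. A hedged sketch (hypothetical names, not Hive's or Spark's actual API; the 10% threshold is an assumption borrowed from how the Spark UI highlights GC-heavy tasks, not a value from this patch):

```java
public class GcTimeCheck {

    // Assumed threshold: warn when more than 10% of an executor's run time
    // was spent in GC, roughly matching when the Spark UI flags a task red.
    static final double GC_WARN_RATIO = 0.10;

    // Both times come from task/executor metrics, in milliseconds.
    static boolean shouldWarn(long gcTimeMs, long runTimeMs) {
        if (runTimeMs <= 0) {
            return false; // no useful signal without a positive run time
        }
        return (double) gcTimeMs / runTimeMs > GC_WARN_RATIO;
    }

    public static void main(String[] args) {
        System.out.println(shouldWarn(1500L, 10000L)); // 15% in GC -> warn
        System.out.println(shouldWarn(200L, 10000L));  // 2% in GC -> ok
    }
}
```

In Hive-on-Spark the inputs would come from the Spark job metrics already collected per job, and the warning would be written to HiveServer2's operation log rather than printed.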
[jira] [Commented] (HIVE-18436) Upgrade to Spark 2.3.0
[ https://issues.apache.org/jira/browse/HIVE-18436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370763#comment-16370763 ] Sahil Takiar commented on HIVE-18436: - Looks like all tests are passing now. {{ppd_join5}} is a known flaky test, and {{query39}} passes locally. > Upgrade to Spark 2.3.0 > -- > > Key: HIVE-18436 > URL: https://issues.apache.org/jira/browse/HIVE-18436 > Project: Hive > Issue Type: Task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18436.1.patch, HIVE-18436.2.patch > > > Branching has been completed. Release candidates should be published soon. > Might be a while before the actual release, but at least we get to identify > any issues early. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18685) Add catalogs to metastore
[ https://issues.apache.org/jira/browse/HIVE-18685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-18685: -- Component/s: SQL Security Parser > Add catalogs to metastore > - > > Key: HIVE-18685 > URL: https://issues.apache.org/jira/browse/HIVE-18685 > Project: Hive > Issue Type: New Feature > Components: Metastore, Parser, Security, SQL >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HMS Catalog Design Doc.pdf > > > SQL supports two levels of namespaces, called in the spec catalogs and > schemas (with schema being equivalent to Hive's database). I propose to add > the upper level of catalog. The attached design doc covers the use cases, > requirements, and brief discussion of how it will be implemented in a > backwards compatible way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18744: Status: In Progress (was: Patch Available) > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch > > > By assigning the input Timestamp object by reference, the key gets corrupted when the > input object is reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18658) WM: allow not specifying scheduling policy when creating a pool
[ https://issues.apache.org/jira/browse/HIVE-18658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18658: Fix Version/s: 3.0.0 > WM: allow not specifying scheduling policy when creating a pool > --- > > Key: HIVE-18658 > URL: https://issues.apache.org/jira/browse/HIVE-18658 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18658.01.patch, HIVE-18658.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18658) WM: allow not specifying scheduling policy when creating a pool
[ https://issues.apache.org/jira/browse/HIVE-18658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18658: Resolution: Fixed Status: Resolved (was: Patch Available) Committed to master. Thanks for the review! > WM: allow not specifying scheduling policy when creating a pool > --- > > Key: HIVE-18658 > URL: https://issues.apache.org/jira/browse/HIVE-18658 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18658.01.patch, HIVE-18658.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18737) add an option to disable LLAP IO ACID for non-original files
[ https://issues.apache.org/jira/browse/HIVE-18737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18737: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to master. Thanks for the review! > add an option to disable LLAP IO ACID for non-original files > > > Key: HIVE-18737 > URL: https://issues.apache.org/jira/browse/HIVE-18737 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18737.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-18754: -- Labels: pull-request-available (was: ) > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch, HIVE-18754.02.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model, HiveServer2 is only running in the source > on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud > cluster's metastore URI, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on the source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370931#comment-16370931 ] ASF GitHub Bot commented on HIVE-18754: --- GitHub user maheshk114 opened a pull request: https://github.com/apache/hive/pull/309 HIVE-18754 : REPL STATUS should support 'with' clause Added support for WITH clause in REPL STATUS command. You can merge this pull request into a Git repository by running: $ git pull https://github.com/maheshk114/hive BUG-96847 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/309.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #309 commit 4899e33200ec116816d9e4a6c97c5e8c17a4ea5d Author: Mahesh Kumar Behera Date: 2018-02-20T09:40:07Z HIVE-18754 : REPL STATUS should support 'with' clause > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch, HIVE-18754.02.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model, HiveServer2 is only running in the source > on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud > cluster's metastore URI, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on the source cluster, we need to add support for the "WITH" clause > for it. 
> We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18746) add_months should validate the date first
[ https://issues.apache.org/jira/browse/HIVE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370730#comment-16370730 ] Szehon Ho commented on HIVE-18746: -- Patch looks good but the related test seems to be failing, can you take a look? > add_months should validate the date first > - > > Key: HIVE-18746 > URL: https://issues.apache.org/jira/browse/HIVE-18746 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Subhasis Gorai >Assignee: Kryvenko Igor >Priority: Minor > Attachments: HIVE-18746.patch > > > hive (sbg_hvc_ods)> select add_months('2017-02-28', 1); > OK > _c0 > 2017-03-31 > Time taken: 0.107 seconds, Fetched: 1 row(s) > hive (sbg_hvc_ods)> select add_months('2017-02-29', 1); > OK > _c0 > 2017-04-01 > Time taken: 0.084 seconds, Fetched: 1 row(s) > hive (sbg_hvc_ods)> > > '2017-02-29' is an invalid date. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
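The fix this issue asks for amounts to strict date parsing before adding months. A minimal sketch of such validation using java.time with a strict resolver — illustrative only, and not the code of Hive's actual add_months UDF:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class StrictDateCheck {
    // STRICT resolution (with the 'uuuu' year field) rejects impossible
    // dates such as 2017-02-29 instead of rolling them forward.
    static final DateTimeFormatter STRICT =
        DateTimeFormatter.ofPattern("uuuu-MM-dd").withResolverStyle(ResolverStyle.STRICT);

    static boolean isValid(String s) {
        try {
            LocalDate.parse(s, STRICT);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("2017-02-28")); // true
        System.out.println(isValid("2017-02-29")); // false -- 2017 is not a leap year
    }
}
```

With a check like this, add_months('2017-02-29', 1) would fail fast rather than silently returning 2017-04-01.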
[jira] [Commented] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370815#comment-16370815 ] Sergey Shelukhin commented on HIVE-18756: - +1 pending tests, can you file a follow-up JIRA to fix it? > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18702) INSERT OVERWRITE TABLE doesn't clean the table directory before overwriting
[ https://issues.apache.org/jira/browse/HIVE-18702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370838#comment-16370838 ] Ashutosh Chauhan commented on HIVE-18702: - +1 [~osayankin] Can you rebase your patch and reupload so that tests can run? > INSERT OVERWRITE TABLE doesn't clean the table directory before overwriting > --- > > Key: HIVE-18702 > URL: https://issues.apache.org/jira/browse/HIVE-18702 > Project: Hive > Issue Type: Bug >Affects Versions: 2.3.2 >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.0.0, 2.3.3 > > Attachments: HIVE-18702.1.patch > > > Enable Hive on TEZ. (MR works fine). > *STEP 1. Create test data* > {code} > nano /home/test/users.txt > {code} > Add to file: > {code} > Peter,34 > John,25 > Mary,28 > {code} > {code} > hadoop fs -mkdir /bug > hadoop fs -copyFromLocal /home/test/users.txt /bug > hadoop fs -ls /bug > {code} > *EXPECTED RESULT:* > {code} > Found 2 items > > -rwxr-xr-x 3 root root 25 2015-10-15 16:11 /bug/users.txt > {code} > *STEP 2. Upload data to hive* > {code} > create external table bug(name string, age int) ROW FORMAT DELIMITED FIELDS > TERMINATED BY ',' LINES TERMINATED BY '\n' LOCATION '/bug'; > select * from bug; > {code} > *EXPECTED RESULT:* > {code} > OK > Peter 34 > John25 > Mary28 > {code} > {code} > create external table bug1(name string, age int) ROW FORMAT DELIMITED FIELDS > TERMINATED BY ',' LINES TERMINATED BY '\n' LOCATION '/bug1'; > insert overwrite table bug select * from bug1; > select * from bug; > {code} > *EXPECTED RESULT:* > {code} > OK > Time taken: 0.097 seconds > {code} > *ACTUAL RESULT:* > {code} > hive> select * from bug; > OK > Peter 34 > John 25 > Mary 28 > Time taken: 0.198 seconds, Fetched: 3 row(s) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18259) Automatic cleanup of invalidation cache for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18259: --- Attachment: HIVE-18259.01.patch > Automatic cleanup of invalidation cache for materialized views > -- > > Key: HIVE-18259 > URL: https://issues.apache.org/jira/browse/HIVE-18259 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18259.01.patch, HIVE-18259.patch > > > HIVE-14498 introduces the invalidation cache for materialized views, which > keeps track of the transactions executed on a given table to infer whether > materialized view contents are outdated or not. > Currently, the cache keeps information of transactions in memory to guarantee > quick response time, i.e., quick resolution about the view freshness, at > query rewriting time. This information can grow large, thus we would like to > run a thread that cleans useless transactions from the cache, i.e., > transactions that do not invalidate any materialized view in the system, at an > interval defined by a property. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
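The periodic cleanup thread described in this issue can be sketched with a scheduled executor. Everything below is hypothetical: the class name, the boolean payload standing in for "does this transaction invalidate some materialized view", and the interval constant standing in for the configured property:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InvalidationCacheCleaner {
    // txn id -> whether that txn invalidates at least one materialized view
    static final Map<Long, Boolean> TXN_CACHE = new ConcurrentHashMap<>();

    // Drop entries that invalidate nothing; they can never affect freshness.
    static void purgeUselessTxns(Map<Long, Boolean> cache) {
        cache.values().removeIf(invalidatesSomeView -> !invalidatesSomeView);
    }

    public static void main(String[] args) throws InterruptedException {
        TXN_CACHE.put(1L, false); // touches no materialized view -> useless
        TXN_CACHE.put(2L, true);  // invalidates a view -> must be kept
        long intervalMs = 100L;   // stands in for the config property
        ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
        cleaner.scheduleAtFixedRate(() -> purgeUselessTxns(TXN_CACHE),
                intervalMs, intervalMs, TimeUnit.MILLISECONDS);
        Thread.sleep(300); // let the cleaner fire at least once
        cleaner.shutdownNow();
        System.out.println(TXN_CACHE.keySet()); // only txn 2 remains
    }
}
```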
[jira] [Commented] (HIVE-18034) Improving logging with HoS executors spend lots of time in GC
[ https://issues.apache.org/jira/browse/HIVE-18034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370903#comment-16370903 ] Hive QA commented on HIVE-18034: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911295/HIVE-18034.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 13796 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=205) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteDecimalXY (batchId=192) org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteTimestamp (batchId=192) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9283/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9283/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9283/ Messages: {noformat} 
Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12911295 - PreCommit-HIVE-Build > Improving logging with HoS executors spend lots of time in GC > - > > Key: HIVE-18034 > URL: https://issues.apache.org/jira/browse/HIVE-18034 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18034.1.patch, HIVE-18034.2.patch, > HIVE-18034.3.patch > > > There are times when Spark will spend lots of time
[jira] [Commented] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370768#comment-16370768 ] Hive QA commented on HIVE-18744: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 
39s{color} | {color:red} ql: The patch generated 22 new + 51 unchanged - 14 fixed = 73 total (was 65) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / e51f7c9 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9281/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9281/yetus/patch-asflicense-problems.txt | | modules | C: storage-api ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9281/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch > > > By assigning the input Timestamp object the key gets corrupted when the input > object is reused. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18758) Vectorization: Fix VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-18758: --- > Vectorization: Fix VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18758 > URL: https://issues.apache.org/jira/browse/HIVE-18758 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > > Fix and turn back on vectorization for issue found in > https://issues.apache.org/jira/browse/HIVE-18756 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18744: Status: Patch Available (was: In Progress) > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch, HIVE-18744.04.patch > > > By assigning the input Timestamp object the key gets corrupted when the input > object is reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18744: Attachment: HIVE-18744.04.patch > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch, HIVE-18744.04.patch > > > By assigning the input Timestamp object the key gets corrupted when the input > object is reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18757) LLAP IO for text fails for empty files
[ https://issues.apache.org/jira/browse/HIVE-18757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370791#comment-16370791 ] Sergey Shelukhin commented on HIVE-18757: - [~prasanth_j] tiny patch, mostly cosmetic changes, one line of the actual fix. Can you take a look? > LLAP IO for text fails for empty files > -- > > Key: HIVE-18757 > URL: https://issues.apache.org/jira/browse/HIVE-18757 > Project: Hive > Issue Type: Bug >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18757.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18757) LLAP IO for text fails for empty files
[ https://issues.apache.org/jira/browse/HIVE-18757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18757: Status: Patch Available (was: Open) > LLAP IO for text fails for empty files > -- > > Key: HIVE-18757 > URL: https://issues.apache.org/jira/browse/HIVE-18757 > Project: Hive > Issue Type: Bug >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18757.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18757) LLAP IO for text fails for empty files
[ https://issues.apache.org/jira/browse/HIVE-18757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18757: Attachment: HIVE-18757.patch > LLAP IO for text fails for empty files > -- > > Key: HIVE-18757 > URL: https://issues.apache.org/jira/browse/HIVE-18757 > Project: Hive > Issue Type: Bug >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18757.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370826#comment-16370826 ] Eugene Koifman commented on HIVE-18659: --- {{createdDeltaDirs.add(deltaDest)}} is how it was before this patch. I'm not sure what the original intent was. Attaching patch 7 to see check style/tests - the above links got recycled > add acid version marker to acid files/directories > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18659.01.patch, HIVE-18659.04.patch, > HIVE-18659.05.patch, HIVE-18659.06.patch, HIVE-18659.07.patch > > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18259) Automatic cleanup of invalidation cache for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18259: --- Attachment: HIVE-18259.02.patch > Automatic cleanup of invalidation cache for materialized views > -- > > Key: HIVE-18259 > URL: https://issues.apache.org/jira/browse/HIVE-18259 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-18259.01.patch, HIVE-18259.02.patch, > HIVE-18259.02.patch, HIVE-18259.patch > > > HIVE-14498 introduces the invalidation cache for materialized views, which > keeps track of the transactions executed on a given table to infer whether > materialized view contents are outdated or not. > Currently, the cache keeps information of transactions in memory to guarantee > quick response time, i.e., quick resolution about the view freshness, at > query rewriting time. This information can grow large, thus we would like to > run a thread that cleans useless transactions from the cache, i.e., > transactions that do not invalidate any materialized view in the system, at an > interval defined by a property. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18034) Improving logging with HoS executors spend lots of time in GC
[ https://issues.apache.org/jira/browse/HIVE-18034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370865#comment-16370865 ] Hive QA commented on HIVE-18034: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 47s{color} | {color:red} ql in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 32 new + 26 unchanged - 16 fixed = 58 total (was 42) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} spark-client: The patch generated 3 new + 30 unchanged - 3 fixed = 33 total (was 33) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 50 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 3df6bc2 | | Default Java | 1.8.0_111 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-9283/yetus/patch-mvninstall-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9283/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9283/yetus/diff-checkstyle-spark-client.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9283/yetus/patch-asflicense-problems.txt | | modules | C: ql spark-client U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9283/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Improving logging with HoS executors spend lots of time in GC > - > > Key: HIVE-18034 > URL: https://issues.apache.org/jira/browse/HIVE-18034 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18034.1.patch, HIVE-18034.2.patch, > HIVE-18034.3.patch > > > There are times when Spark will spend lots of time doing GC. The Spark > History UI shows a bunch of red flags when too much time is spent in GC. It > would be nice if those warnings are propagated to Hive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
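One way to surface such a warning on the Hive side would be to compare cumulative GC time against elapsed time, which is roughly what the Spark UI's red highlighting does. The class name and the 10% threshold below are illustrative assumptions, not what the HIVE-18034 patch actually implements:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeCheck {
    // Flag a task as GC-heavy when collection time exceeds an (assumed)
    // 10% of elapsed time; callers would log a warning when this trips.
    static boolean excessiveGc(long gcMillis, long elapsedMillis) {
        return elapsedMillis > 0 && (double) gcMillis / elapsedMillis > 0.10;
    }

    public static void main(String[] args) {
        long gcMillis = 0;
        for (GarbageCollectorMXBean bean : ManagementFactory.getGarbageCollectorMXBeans()) {
            gcMillis += Math.max(0, bean.getCollectionTime()); // -1 if unsupported
        }
        System.out.println("total GC ms in this JVM so far: " + gcMillis);
        System.out.println(excessiveGc(1500, 10000)); // true
        System.out.println(excessiveGc(500, 10000));  // false
    }
}
```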
[jira] [Updated] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-18754: --- Attachment: HIVE-18754.02.patch > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch, HIVE-18754.02.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model, HiveServer2 is only running in the source > on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud > cluster's metastore URI, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on the source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370801#comment-16370801 ] Hive QA commented on HIVE-18744: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911291/HIVE-18744.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 39 failed/errored test(s), 13797 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_15] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_15] (batchId=65) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_15] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_15] (batchId=145) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_15] (batchId=134) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) 
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9281/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9281/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9281/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 39 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12911291 - PreCommit-HIVE-Build > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 >
[jira] [Commented] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370817#comment-16370817 ] Hive QA commented on HIVE-18744: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 
37s{color} | {color:red} ql: The patch generated 22 new + 51 unchanged - 14 fixed = 73 total (was 65) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / e51f7c9 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9282/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9282/yetus/patch-asflicense-problems.txt | | modules | C: storage-api ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9282/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch > > > By assigning the input Timestamp object the key gets corrupted when the input > object is reused.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
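The root cause described in HIVE-18744 above — storing a reference to a caller-owned, reused Timestamp instead of copying its value — can be illustrated with a minimal sketch. The class and method names below are illustrative only, not Hive's actual VectorHashKeyWrapperBatch API:

```java
import java.sql.Timestamp;

public class TimestampKeyDemo {
    // A hash key that aliases the caller's Timestamp: mutating the
    // original later silently corrupts the stored key.
    static Timestamp assignByReference(Timestamp input) {
        return input; // bug: no copy
    }

    // Assigning "by value": copy millis and nanos into a fresh object,
    // so later reuse of the input object cannot affect the key.
    static Timestamp assignByValue(Timestamp input) {
        Timestamp copy = new Timestamp(input.getTime());
        copy.setNanos(input.getNanos());
        return copy;
    }

    public static void main(String[] args) {
        Timestamp reused = new Timestamp(1000L);
        Timestamp aliased = assignByReference(reused);
        Timestamp copied = assignByValue(reused);

        // A vectorized reader typically reuses the same object for the next row.
        reused.setTime(2000L);

        System.out.println(aliased.getTime()); // 2000 — key corrupted
        System.out.println(copied.getTime());  // 1000 — key preserved
    }
}
```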
[jira] [Commented] (HIVE-18034) Improving logging when HoS executors spend lots of time in GC
[ https://issues.apache.org/jira/browse/HIVE-18034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370748#comment-16370748 ] Sahil Takiar commented on HIVE-18034: - Updated patch with a cleaner implementation. Required some refactoring of the {{SparkStatistics}} classes that I think will be useful for future patches too. > Improving logging when HoS executors spend lots of time in GC > - > > Key: HIVE-18034 > URL: https://issues.apache.org/jira/browse/HIVE-18034 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18034.1.patch, HIVE-18034.2.patch, > HIVE-18034.3.patch > > > There are times when Spark will spend lots of time doing GC. The Spark > History UI shows a bunch of red flags when too much time is spent in GC. It > would be nice if those warnings were propagated to Hive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
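As a rough illustration of the kind of check such GC logging could perform — this is a sketch, not Hive's actual implementation, and the 10% threshold is an assumption modeled on Spark UI's red-flag heuristic — an executor-side GC-time ratio can be computed from the JVM's own MXBeans:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcRatioCheck {
    // Approximate fraction of JVM uptime spent in GC so far, summed
    // over all garbage collectors (young and old generation).
    static double gcTimeRatio() {
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if undefined for this collector
            if (t > 0) {
                gcMillis += t;
            }
        }
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        return uptime > 0 ? (double) gcMillis / uptime : 0.0;
    }

    public static void main(String[] args) {
        double ratio = gcTimeRatio();
        // Spark's UI flags tasks when GC takes roughly >10% of task time;
        // a Hive-side warning could use a similar (configurable) threshold.
        if (ratio > 0.1) {
            System.out.println("WARN: spent " + (ratio * 100) + "% of uptime in GC");
        }
    }
}
```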
[jira] [Assigned] (HIVE-18755) Modifications to the metastore for catalogs
[ https://issues.apache.org/jira/browse/HIVE-18755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates reassigned HIVE-18755: - > Modifications to the metastore for catalogs > --- > > Key: HIVE-18755 > URL: https://issues.apache.org/jira/browse/HIVE-18755 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > > Step 1 of adding catalogs is to add support in the metastore. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370849#comment-16370849 ] Hive QA commented on HIVE-18744: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911291/HIVE-18744.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 43 failed/errored test(s), 13797 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_15] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_15] (batchId=65) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_15] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=179) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_4] (batchId=180) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_15] (batchId=145) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_15] (batchId=134) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=205) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.TestBeeLineWithArgs.testEscapeCRLFOffInDSVOutput (batchId=231) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=231) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteVarchar (batchId=192) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) 
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9282/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9282/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9282/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing
[jira] [Commented] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370957#comment-16370957 ] Hive QA commented on HIVE-18756: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 43s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 3df6bc2 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9285/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9285/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated.
> Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371008#comment-16371008 ] Matt McCline commented on HIVE-18756: - Fix some Q file outputs with patch #2. > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18756: Attachment: HIVE-18756.02.patch > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch, HIVE-18756.02.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371039#comment-16371039 ] Hive QA commented on HIVE-18659: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911311/HIVE-18659.07.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 43 failed/errored test(s), 13780 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestBeeLineDriver.org.apache.hadoop.hive.cli.TestBeeLineDriver (batchId=248) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=66) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=53) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=148) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1] (batchId=147) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=147) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a] (batchId=149) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=205) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) 
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9286/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9286/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9286/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing
[jira] [Commented] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371012#comment-16371012 ] Matt McCline commented on HIVE-18756: - Committed to master. [~sershe] thank you for your review. > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch, HIVE-18756.02.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18756: Resolution: Fixed Status: Resolved (was: Patch Available) > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch, HIVE-18756.02.patch > > > For a large query. Disabling vectorization for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by value
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371046#comment-16371046 ] Hive QA commented on HIVE-18744: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-9287/patches/PreCommit-HIVE-Build-9287.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9287/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Vectorization: VectorHashKeyWrapperBatch doesn't assign Timestamp values by > value > - > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch, HIVE-18744.02.patch, > HIVE-18744.03.patch, HIVE-18744.04.patch > > > By assigning the input Timestamp object the key gets corrupted when the input > object is reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18659) add acid version marker to acid files/directories
[ https://issues.apache.org/jira/browse/HIVE-18659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371003#comment-16371003 ] Hive QA commented on HIVE-18659: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 
0m 44s{color} | {color:red} ql: The patch generated 10 new + 1033 unchanged - 15 fixed = 1043 total (was 1048) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 3df6bc2 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9286/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9286/yetus/patch-asflicense-problems.txt | | modules | C: hcatalog/streaming ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9286/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated.
> add acid version marker to acid files/directories > - > > Key: HIVE-18659 > URL: https://issues.apache.org/jira/browse/HIVE-18659 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18659.01.patch, HIVE-18659.04.patch, > HIVE-18659.05.patch, HIVE-18659.06.patch, HIVE-18659.07.patch > > > add acid version marker to acid files so that we know which version of acid > wrote the file -- This message was sent by Atlassian JIRA (v7.6.3#76005)
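The idea in HIVE-18659 above — recording which ACID version wrote a file so readers can adapt — can be sketched as dropping a small side file into each delta/base directory. The marker file name and version number below are illustrative assumptions, not Hive's actual implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class AcidVersionMarker {
    static final String MARKER_FILE = "_acid_version"; // assumed file name
    static final int CURRENT_VERSION = 2;              // assumed version number

    // Drop a version marker into a delta/base directory after writing it.
    static void writeMarker(Path deltaDir) throws IOException {
        Files.write(deltaDir.resolve(MARKER_FILE),
                    String.valueOf(CURRENT_VERSION).getBytes("UTF-8"));
    }

    // Readers that find no marker must assume a pre-versioning writer.
    static int readVersion(Path deltaDir) throws IOException {
        Path marker = deltaDir.resolve(MARKER_FILE);
        if (!Files.exists(marker)) {
            return 0; // legacy data, written before version markers existed
        }
        return Integer.parseInt(new String(Files.readAllBytes(marker), "UTF-8").trim());
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("delta_0000001_0000001");
        System.out.println(readVersion(dir)); // 0: no marker yet
        writeMarker(dir);
        System.out.println(readVersion(dir)); // 2
    }
}
```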
[jira] [Commented] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371013#comment-16371013 ] Thejas M Nair commented on HIVE-18754: -- +1 > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch, HIVE-18754.02.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model , HiveServer2 is only running in the source > on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud > clusters metastore uri, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
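Mirroring the WITH clause that REPL LOAD already supports, the statement HIVE-18754 proposes would look roughly like the output of the helper below. This is a sketch of the intended command shape; the quoting rules are a simplified assumption, not Hive's parser contract, and the metastore URI is a placeholder:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ReplStatusCommand {
    // Build a REPL STATUS statement with an optional WITH clause carrying
    // per-command configs such as the remote cluster's metastore URI.
    static String build(String dbName, Map<String, String> withConf) {
        StringBuilder sb = new StringBuilder("REPL STATUS ").append(dbName);
        if (withConf != null && !withConf.isEmpty()) {
            sb.append(" WITH (");
            boolean first = true;
            for (Map.Entry<String, String> e : withConf.entrySet()) {
                if (!first) sb.append(", ");
                sb.append('\'').append(e.getKey()).append("'='").append(e.getValue()).append('\'');
                first = false;
            }
            sb.append(')');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        // Point the command at the remote cloud cluster's metastore (placeholder host).
        conf.put("hive.metastore.uris", "thrift://remote-metastore:9083");
        System.out.println(build("sales_db", conf));
        // REPL STATUS sales_db WITH ('hive.metastore.uris'='thrift://remote-metastore:9083')
    }
}
```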
[jira] [Comment Edited] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371013#comment-16371013 ] Thejas M Nair edited comment on HIVE-18754 at 2/21/18 6:37 AM: --- +1 pending tests was (Author: thejas): +1 > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch, HIVE-18754.02.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model , HiveServer2 is only running in the source > on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud > clusters metastore uri, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18695) fix TestAccumuloCliDriver.testCliDriver[accumulo_queries]
[ https://issues.apache.org/jira/browse/HIVE-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371041#comment-16371041 ] Zoltan Haindrich commented on HIVE-18695: - [~erwaman] could you please check these result set differences caused by HIVE-15680? > fix TestAccumuloCliDriver.testCliDriver[accumulo_queries] > - > > Key: HIVE-18695 > URL: https://issues.apache.org/jira/browse/HIVE-18695 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Priority: Major > > seems to be broken by HIVE-15680 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18756) Vectorization: VectorUDAFVarFinal produces Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370989#comment-16370989 ] Hive QA commented on HIVE-18756: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911308/HIVE-18756.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 13796 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_types_vectorization] (batchId=147) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] 
(batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_part_project] (batchId=157) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=126) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded] (batchId=205) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=205) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9285/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9285/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9285/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12911308 - PreCommit-HIVE-Build > Vectorization: VectorUDAFVarFinal produces Wrong Results > > > Key: HIVE-18756 > URL: https://issues.apache.org/jira/browse/HIVE-18756 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18756.01.patch > > > For a large query. Disabling
[jira] [Commented] (HIVE-18694) Fix TestHiveCli test
[ https://issues.apache.org/jira/browse/HIVE-18694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16371049#comment-16371049 ] Zoltan Haindrich commented on HIVE-18694: - since cli is deprecated we might as well just @Ignore the test > Fix TestHiveCli test > > > Key: HIVE-18694 > URL: https://issues.apache.org/jira/browse/HIVE-18694 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Priority: Major > > seems to be broken by HIVE-18493 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18757) LLAP IO for text fails for empty files
[ https://issues.apache.org/jira/browse/HIVE-18757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370950#comment-16370950 ] Hive QA commented on HIVE-18757: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911300/HIVE-18757.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 13796 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded] (batchId=205) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.common.TestHiveClientCache.testCloseAllClients (batchId=199) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConnectionSchemaAPIs (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniHS2.testHttpHeaderSize (batchId=238) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9284/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9284/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9284/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 33 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12911300 - PreCommit-HIVE-Build > LLAP IO for text fails for empty files > -- > > Key: HIVE-18757 > URL: https://issues.apache.org/jira/browse/HIVE-18757 > Project: Hive > Issue Type: Bug >Reporter: Aswathy Chellammal Sreekumar >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-18757.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18747) Cleaner for TXNS_TO_WRITE_ID table entries.
[ https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-18747: --- > Cleaner for TXNS_TO_WRITE_ID table entries. > --- > > Key: HIVE-18747 > URL: https://issues.apache.org/jira/browse/HIVE-18747 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Minor > Labels: ACID, replication > Fix For: 3.0.0 > > > The per table write ID implementation (HIVE-18192) maintains a map between txn ID > and table write ID in the TXN_TO_WRITE_ID meta table. > The entries in this table are used to generate a ValidWriteIdList for the given > ValidTxnList to ensure snapshot isolation. > When a table or database is dropped, these entries are cleaned up. But it > is also necessary to clean up entries for active tables for better performance. > We need another table, MIN_HISTORY_LEVEL, to maintain the least txn > referred to by any active ValidTxnList snapshot as an open/aborted txn. If no > references to a txn are found in this table, then it is eligible for cleanup. > After clean-up, we need to maintain just one entry per table to mark the LWM (low > water mark). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
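The cleanup rule described in HIVE-18747 can be sketched as follows. This is an illustrative Python rendering of the proposed MIN_HISTORY_LEVEL check, not actual metastore code; the entry and column names are hypothetical.

```python
# Hypothetical sketch of the TXN_TO_WRITE_ID cleanup rule: an entry is
# eligible once no active ValidTxnList snapshot may still see its txn
# as open/aborted. MIN_HISTORY_LEVEL tracks the minimum txn id each
# active snapshot still references.

def eligible_for_cleanup(txn_to_write_id, min_history_level):
    """Return TXN_TO_WRITE_ID entries no longer referenced by any
    active snapshot recorded in MIN_HISTORY_LEVEL."""
    # Low water mark: the smallest txn id any open snapshot may still
    # treat as open/aborted. If MIN_HISTORY_LEVEL is empty, every
    # allocated entry is safe to clean.
    lwm = min(min_history_level, default=float("inf"))
    return [e for e in txn_to_write_id if e["txnid"] < lwm]

entries = [{"txnid": 5, "table": "t1", "writeid": 1},
           {"txnid": 9, "table": "t1", "writeid": 2}]
# One active snapshot still holds txn 9 as open, so only the
# txnid=5 entry survives the eligibility check.
print(eligible_for_cleanup(entries, [9]))
```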
[jira] [Updated] (HIVE-18749) Need to replace transactionId with per table writeId in RecordIdentifier.Field.transactionId
[ https://issues.apache.org/jira/browse/HIVE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18749: Summary: Need to replace transactionId with per table writeId in RecordIdentifier.Field.transactionId (was: Need to replace transactionId to per table writeId in RecordIdentifier.Field.transactionId) > Need to replace transactionId with per table writeId in > RecordIdentifier.Field.transactionId > > > Key: HIVE-18749 > URL: https://issues.apache.org/jira/browse/HIVE-18749 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Minor > Fix For: 3.0.0 > > > The per table write ID implementation (HIVE-18192) has replaced the global > transaction ID with a write ID as the primary key for a row, marked by > RecordIdentifier.Field.transactionId. > We need to replace the same with writeId and update all test results files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-18728) Secure webHCat with SSL
[ https://issues.apache.org/jira/browse/HIVE-18728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369866#comment-16369866 ] Lefty Leverenz edited comment on HIVE-18728 at 2/20/18 9:50 AM: I left a comment on RB pointing to the documentation in this jira's description. (The config descriptions don't appear anywhere in the code – is that normal for WebHCat configs?) The doc is fine, I'd just add "the" a couple of times in the example text. When the patch is committed, this documentation should become new sections in the WebHCat configuration wiki and the configs should be listed in the Configuration Variables section: * [WebHCat Configure |https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure] * [WebHCat Configure – Configuration Variables |https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure#WebHCatConfigure-ConfigurationVariables] was (Author: le...@hortonworks.com): I left a comment on RB pointing to the documentation in this jira's description. (The config descriptions don't appear anywhere in the code – is that normal for WebHCat configs?) The doc is fine, I'd just add "the" a couple of times in the example text. 
When the patch is committed, this documentation should become new sections in the WebHCat configuration wiki and the configs should be listed in the Configuration Variables section: * [WebHCat Configure |https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure] * [WebHCat Configure – ConfigurationVariables |https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure#WebHCatConfigure-ConfigurationVariables] > Secure webHCat with SSL > --- > > Key: HIVE-18728 > URL: https://issues.apache.org/jira/browse/HIVE-18728 > Project: Hive > Issue Type: New Feature > Components: Security >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18728.1.patch, HIVE-18728.2.patch > > > Doc for the issue: > *Configure WebHCat server to use SSL encryption* > You can configure WebHCat REST-API to use SSL (Secure Sockets Layer) > encryption. The following WebHCat properties are added to enable SSL. > {{templeton.use.ssl}} > Default value: {{false}} > Description: Set this to true for using SSL encryption for WebHCat server > {{templeton.keystore.path}} > Default value: {{}} > Description: SSL certificate keystore location for WebHCat server > {{templeton.keystore.password}} > Default value: {{}} > Description: SSL certificate keystore password for WebHCat server > {{templeton.ssl.protocol.blacklist}} > Default value: {{SSLv2,SSLv3}} > Description: SSL Versions to disable for WebHCat server > {{templeton.host}} > Default value: {{0.0.0.0}} > Description: The host address the WebHCat server will listen on. 
> *Modifying the {{webhcat-site.xml}} file* > Configure the following properties in the {{webhcat-site.xml}} file to enable > SSL encryption on each node where WebHCat is installed: > {code}
> <configuration>
>   <property>
>     <name>templeton.use.ssl</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>templeton.keystore.path</name>
>     <value>/path/to/ssl_keystore</value>
>   </property>
>   <property>
>     <name>templeton.keystore.password</name>
>     <value>password</value>
>   </property>
> </configuration>
> {code} > *Example:* To check the status of a WebHCat server configured for SSL encryption, > use the following command > {code}
> curl -k 'https://<user>:<password>@<host>:50111/templeton/v1/status'
> {code} > replace {{<user>}} and {{<password>}} with a valid user/password. Replace > {{<host>}} with your host name. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
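For readers scripting the status check from HIVE-18728, here is a minimal Python equivalent of the curl command in the doc above. The host name and credentials are placeholders, and skipping certificate verification (curl's -k) should only be done against test clusters.

```python
import base64
import ssl
import urllib.request

# Illustrative helper mirroring the documented curl example. The host,
# user and password are placeholders; 50111 is WebHCat's default port.
def build_status_request(host, user, password, port=50111):
    req = urllib.request.Request(f"https://{host}:{port}/templeton/v1/status")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

def check_status(host, user, password):
    # Equivalent of curl -k: do not verify the server certificate
    # (acceptable only in test setups with self-signed certs).
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    req = build_status_request(host, user, password)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.read().decode()

print(build_status_request("webhcat.example.com", "hcat", "secret").full_url)
```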
[jira] [Updated] (HIVE-18748) Rename table should update the table names in NEXT_WRITE_ID and TXN_TO_WRITE_ID tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18748: Summary: Rename table should update the table names in NEXT_WRITE_ID and TXN_TO_WRITE_ID tables. (was: Rename tables should update the table names in NEXT_WRITE_ID and TXN_TO_WRITE_ID tables. ) > Rename table should update the table names in NEXT_WRITE_ID and > TXN_TO_WRITE_ID tables. > > > Key: HIVE-18748 > URL: https://issues.apache.org/jira/browse/HIVE-18748 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, DDL > Fix For: 3.0.0 > > > The per table write ID implementation (HIVE-18192) introduces a couple of > meta tables, NEXT_WRITE_ID and TXN_TO_WRITE_ID, to manage write IDs > allocated per table. > Now, when we rename any table, it is necessary to update the corresponding > table name in these tables as well. Otherwise, ACID table operations won't > work properly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera reassigned HIVE-18754: -- Assignee: mahesh kumar behera > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Fix For: 3.0.0 > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model for DLM 1.1, HiveServer2 is only running in > the source on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud clusters > metastore uri, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
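As a rough illustration of the syntax HIVE-18754 asks for, the snippet below composes a REPL STATUS statement with a WITH clause mirroring REPL LOAD's existing one. The helper function, database name, and URI value are all hypothetical, not Hive code.

```python
# Hypothetical helper composing the proposed REPL STATUS ... WITH (...)
# statement. Key/value quoting mirrors REPL LOAD's existing WITH clause;
# the config key comes from the issue description.
def repl_status_stmt(db, configs=None):
    stmt = f"REPL STATUS {db}"
    if configs:
        kvs = ", ".join(f"'{k}'='{v}'" for k, v in configs.items())
        stmt += f" WITH ({kvs})"
    return stmt

# Point REPL STATUS at the remote cloud cluster's metastore, as the
# issue describes for REPL LOAD today (placeholder db name and URI):
print(repl_status_stmt("sales_db", {"hive.metastore.uri": "thrift://cloud-ms:9083"}))
```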
[jira] [Commented] (HIVE-18728) Secure webHCat with SSL
[ https://issues.apache.org/jira/browse/HIVE-18728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369866#comment-16369866 ] Lefty Leverenz commented on HIVE-18728: --- I left a comment on RB pointing to the documentation in this jira's description. (The config descriptions don't appear anywhere in the code – is that normal for WebHCat configs?) The doc is fine, I'd just add "the" a couple of times in the example text. When the patch is committed, this documentation should become new sections in the WebHCat configuration wiki and the configs should be listed in the Configuration Variables section: * [WebHCat Configure |https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure] * [WebHCat Configure – ConfigurationVariables |https://cwiki.apache.org/confluence/display/Hive/WebHCat+Configure#WebHCatConfigure-ConfigurationVariables] > Secure webHCat with SSL > --- > > Key: HIVE-18728 > URL: https://issues.apache.org/jira/browse/HIVE-18728 > Project: Hive > Issue Type: New Feature > Components: Security >Reporter: Oleksiy Sayankin >Assignee: Oleksiy Sayankin >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18728.1.patch, HIVE-18728.2.patch > > > Doc for the issue: > *Configure WebHCat server to use SSL encryption* > You can configure WebHCat REST-API to use SSL (Secure Sockets Layer) > encryption. The following WebHCat properties are added to enable SSL. 
> {{templeton.use.ssl}} > Default value: {{false}} > Description: Set this to true for using SSL encryption for WebHCat server > {{templeton.keystore.path}} > Default value: {{}} > Description: SSL certificate keystore location for WebHCat server > {{templeton.keystore.password}} > Default value: {{}} > Description: SSL certificate keystore password for WebHCat server > {{templeton.ssl.protocol.blacklist}} > Default value: {{SSLv2,SSLv3}} > Description: SSL Versions to disable for WebHCat server > {{templeton.host}} > Default value: {{0.0.0.0}} > Description: The host address the WebHCat server will listen on. > *Modifying the {{webhcat-site.xml}} file* > Configure the following properties in the {{webhcat-site.xml}} file to enable > SSL encryption on each node where WebHCat is installed: > {code}
> <configuration>
>   <property>
>     <name>templeton.use.ssl</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>templeton.keystore.path</name>
>     <value>/path/to/ssl_keystore</value>
>   </property>
>   <property>
>     <name>templeton.keystore.password</name>
>     <value>password</value>
>   </property>
> </configuration>
> {code} > *Example:* To check the status of a WebHCat server configured for SSL encryption, > use the following command > {code}
> curl -k 'https://<user>:<password>@<host>:50111/templeton/v1/status'
> {code} > replace {{<user>}} and {{<password>}} with a valid user/password. Replace > {{<host>}} with your host name. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369899#comment-16369899 ] Hive QA commented on HIVE-18754: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / e0bf12d | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9272/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9272/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model for DLM 1.1, HiveServer2 is only running in > the source on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud clusters > metastore uri, using "hive.metastore.uri" parameter. 
> Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18744) Vectorization: VectorHashKeyWrapperBatch doesn't check repeated NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-18744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369961#comment-16369961 ] Teddy Choi commented on HIVE-18744: --- +1 Looks good to me. By the way, I got some unit test failures, but they passed when I tried them twice. It just seemed like fluctuation. It happened in some LLAP vectorization tests; vectorization_limit.q and vectorization_part_project.q. Thanks. > Vectorization: VectorHashKeyWrapperBatch doesn't check repeated NULLs > correctly > --- > > Key: HIVE-18744 > URL: https://issues.apache.org/jira/browse/HIVE-18744 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-18744.01.patch > > > Logic for checking selectedInUse isRepeating case for NULL is broken. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
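The bug class discussed in HIVE-18744 can be illustrated abstractly: in a vectorized batch, isRepeating means only slot 0 of the null vector is meaningful, so the selected[] row mapping must be bypassed in that case. A hypothetical Python sketch of the corrected check (not the actual VectorHashKeyWrapperBatch Java code):

```python
# Hypothetical sketch of null handling in a vectorized row batch.
# Names mirror Hive's ColumnVector fields (isRepeating, noNulls, isNull)
# but this is illustrative Python, not the patched Java logic.
def key_is_null(is_repeating, no_nulls, is_null, selected_in_use, selected, i):
    """Whether the key value for logical row i is NULL.

    When isRepeating is set, only entry 0 of isNull is meaningful,
    so the row index must NOT be pushed through the selected[] mapping."""
    if is_repeating:
        return (not no_nulls) and is_null[0]
    # Non-repeating case: map logical row i through selected[] when
    # only a subset of the batch is in use.
    row = selected[i] if selected_in_use else i
    return (not no_nulls) and is_null[row]
```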
[jira] [Assigned] (HIVE-18750) Exchange partition should not be supported with per table write ID.
[ https://issues.apache.org/jira/browse/HIVE-18750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-18750: --- > Exchange partition should not be supported with per table write ID. > --- > > Key: HIVE-18750 > URL: https://issues.apache.org/jira/browse/HIVE-18750 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Fix For: 3.0.0 > > > The per table write ID implementation (HIVE-18192) has introduced a write ID per > table and uses the write ID to name the delta/base files and also as the primary key > for each row. > Now, exchange partition would have to move delta/base files across tables without > changing the write ID, which causes incorrect results. > Also, the exchange partition feature exists to support the use-case of > atomic updates. Since ACID updates are already atomic, it makes sense > to not support exchange partition for ACID and MM tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18750) Exchange partition should not be supported with per table write ID.
[ https://issues.apache.org/jira/browse/HIVE-18750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18750: Labels: ACID DDL (was: ) > Exchange partition should not be supported with per table write ID. > --- > > Key: HIVE-18750 > URL: https://issues.apache.org/jira/browse/HIVE-18750 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, DDL > Fix For: 3.0.0 > > > The per table write ID implementation (HIVE-18192) has introduced a write ID per > table and uses the write ID to name the delta/base files and also as the primary key > for each row. > Now, exchange partition would have to move delta/base files across tables without > changing the write ID, which causes incorrect results. > Also, the exchange partition feature exists to support the use-case of > atomic updates. Since ACID updates are already atomic, it makes sense > to not support exchange partition for ACID and MM tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18752) HiveEndPoint: Optimise opening batch transactions and getting write Ids for each transaction in the batch into single metastore api.
[ https://issues.apache.org/jira/browse/HIVE-18752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-18752: --- > HiveEndPoint: Optimise opening batch transactions and getting write Ids for > each transaction in the batch into single metastore api. > > > Key: HIVE-18752 > URL: https://issues.apache.org/jira/browse/HIVE-18752 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Metastore >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, Metastore, Streaming > Fix For: 3.0.0 > > > The per table write ID implementation (HIVE-18192) has introduced write IDs and > maps them against txns. > Now, for streaming ingest, we need to open a batch of txns and then allocate a write > ID for each txn in the batch, which takes 2 metastore calls. > This can be optimised into a single metastore API call. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369958#comment-16369958 ] Hive QA commented on HIVE-18754: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911204/HIVE-18754.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 13775 tests executed *Failed tests:* {noformat} TestDelimitedInputWriter - did not produce a TEST-*.xml file (likely timed out) (batchId=201) TestMutations - did not produce a TEST-*.xml file (likely timed out) (batchId=201) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=170) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3] (batchId=154) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=179) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=121) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221) org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=205) org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224) org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187) org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex (batchId=242) org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex (batchId=242) org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd (batchId=235) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9272/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9272/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9272/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12911204 - PreCommit-HIVE-Build > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch > > > We
[jira] [Assigned] (HIVE-18749) Need to replace transactionId with per table writeId in RecordIdentifier.Field.transactionId
[ https://issues.apache.org/jira/browse/HIVE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-18749: --- > Need to replace transactionId with per table writeId in > RecordIdentifier.Field.transactionId > -- > > Key: HIVE-18749 > URL: https://issues.apache.org/jira/browse/HIVE-18749 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Minor > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) has replaced the global > transaction ID with a write ID as the primary key for a row, marked by > RecordIdentifier.Field.transactionId. > Need to replace the same with writeId and update all test results files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18751) ACID table scan through get_splits UDF doesn't receive ValidWriteIdList configuration.
[ https://issues.apache.org/jira/browse/HIVE-18751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18751: Description: Per table write ID (HIVE-18192) have replaced global transaction ID with write ID to version data files in ACID/MM tables, To ensure snapshot isolation, need to generate ValidWriteIdList for the given txn/table and use it when scan the ACID/MM tables. In case of get_splits UDF which runs on ACID table scan query won't receive it properly through configuration (hive.txn.tables.valid.writeids) and hence throws exception. TestAcidOnTez.testGetSplitsLocks is the test failing for the same. Need to fix it. was: Per table write ID (HIVE-18192) have replaced global transaction ID with write ID to version data files in ACID/MM tables, To ensure snapshot isolation, need to generate ValidWriteIdList for the given txn/table and use it when scan the ACID/MM tables. In case of get_splits UDF which runs on ACID table scan query won't receive it properly through configuration and hence throws exception. TestAcidOnTez.testGetSplitsLocks is the test failing for the same. Need to fix it. > ACID table scan through get_splits UDF doesn't receive ValidWriteIdList > configuration. > -- > > Key: HIVE-18751 > URL: https://issues.apache.org/jira/browse/HIVE-18751 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, UDF > Fix For: 3.0.0 > > > Per table write ID (HIVE-18192) have replaced global transaction ID with > write ID to version data files in ACID/MM tables, > To ensure snapshot isolation, need to generate ValidWriteIdList for the given > txn/table and use it when scan the ACID/MM tables. > In case of get_splits UDF which runs on ACID table scan query won't receive > it properly through configuration (hive.txn.tables.valid.writeids) and hence > throws exception. 
> TestAcidOnTez.testGetSplitsLocks is the test failing for the same. Need to > fix it. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
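For illustration, the snapshot-isolation check HIVE-18751 depends on can be sketched roughly as follows. The class and method names here are invented for the sketch (Hive's actual implementation is ValidReaderWriteIdList, carried in the hive.txn.tables.valid.writeids configuration mentioned above); only the idea is the same: a reader snapshot is a high-water mark plus the write IDs that were still open or aborted when the snapshot was taken.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of a per-table write-id validity list (names are illustrative,
// not Hive's API). A row version is visible to the reader only if its write id
// was allocated and committed before the snapshot was taken.
public class WriteIdSnapshot {
    private final String tableName;
    private final long highWatermark;       // highest write id allocated at snapshot time
    private final Set<Long> openOrAborted;  // write ids the reader must skip

    public WriteIdSnapshot(String tableName, long highWatermark, long... openOrAborted) {
        this.tableName = tableName;
        this.highWatermark = highWatermark;
        this.openOrAborted = new HashSet<>();
        for (long id : openOrAborted) this.openOrAborted.add(id);
    }

    // Visible iff allocated before the snapshot and not open/aborted in it.
    public boolean isWriteIdValid(long writeId) {
        return writeId <= highWatermark && !openOrAborted.contains(writeId);
    }

    public String getTableName() {
        return tableName;
    }
}
```

The bug described above is essentially that the get_splits code path never hands a serialized form of such a snapshot to the scan through the job configuration, so the reader has nothing to validate write IDs against.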
[jira] [Updated] (HIVE-17626) Query reoptimization using cached runtime statistics
[ https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-17626: Attachment: HIVE-17626.01wip01.patch > Query reoptimization using cached runtime statistics > > > Key: HIVE-17626 > URL: https://issues.apache.org/jira/browse/HIVE-17626 > Project: Hive > Issue Type: New Feature > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-17626.01wip01.patch, runtimestats.patch > > > Something similar to "EXPLAIN ANALYZE", where we annotate the explain plan with > actual and estimated statistics. The runtime stats can be cached at query > level, and subsequent executions of the same query can make use of the cached > statistics from the previous run for better optimization. > Some use cases: > 1) re-planning join queries (mapjoin failures can be converted to shuffle joins) > 2) better statistics for the table scan operator if dynamic partition pruning is > involved > 3) better estimates for bloom filter initialization (setting expected entries > during merge) > This can be extended to support wider queries by caching fragments of operator > plans scanning the same table(s) or matching some operator sequences. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17626) Query reoptimization using cached runtime statistics
[ https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-17626: Status: Patch Available (was: Open) > Query reoptimization using cached runtime statistics > > > Key: HIVE-17626 > URL: https://issues.apache.org/jira/browse/HIVE-17626 > Project: Hive > Issue Type: New Feature > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-17626.01wip01.patch, runtimestats.patch > > > Something similar to "EXPLAIN ANALYZE", where we annotate the explain plan with > actual and estimated statistics. The runtime stats can be cached at query > level, and subsequent executions of the same query can make use of the cached > statistics from the previous run for better optimization. > Some use cases: > 1) re-planning join queries (mapjoin failures can be converted to shuffle joins) > 2) better statistics for the table scan operator if dynamic partition pruning is > involved > 3) better estimates for bloom filter initialization (setting expected entries > during merge) > This can be extended to support wider queries by caching fragments of operator > plans scanning the same table(s) or matching some operator sequences. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
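The re-planning use case in HIVE-17626 can be sketched as follows. This is a toy illustration of the idea, not Hive's code: the class, keys, and threshold are invented. The planner records the row counts actually observed at runtime; on the next run of the same query, a map join is only chosen if the previously observed size of the "small" side fits.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a query-level runtime statistics cache used for reoptimization.
// Keys here are just "query#operator" strings; a real system would use a
// plan/operator fingerprint.
public class RuntimeStatsCache {
    private final Map<String, Long> observedRows = new HashMap<>();

    // Called after execution with the row count the operator actually produced.
    public void record(String queryAndOperator, long rows) {
        observedRows.put(queryAndOperator, rows);
    }

    // Planner hook: prefer the observed count; fall back to the compile-time
    // estimate when the query has never run before.
    public long rowsFor(String queryAndOperator, long compileTimeEstimate) {
        return observedRows.getOrDefault(queryAndOperator, compileTimeEstimate);
    }

    // Map join is only safe if the small side fits; a prior run that showed
    // it was too big converts the join to a shuffle join on re-planning.
    public boolean chooseMapJoin(String queryAndOperator, long estimate, long maxSmallTableRows) {
        return rowsFor(queryAndOperator, estimate) <= maxSmallTableRows;
    }
}
```

The same cache would feed the other use cases listed in the issue, e.g. sizing bloom filters from observed rather than estimated entry counts.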
[jira] [Updated] (HIVE-18747) Cleaner for TXN_TO_WRITE_ID table entries.
[ https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18747: Component/s: (was: repl) > Cleaner for TXN_TO_WRITE_ID table entries. > -- > > Key: HIVE-18747 > URL: https://issues.apache.org/jira/browse/HIVE-18747 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Minor > Labels: ACID > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) maintains a map between txn ID > and table write ID in the TXN_TO_WRITE_ID meta table. > The entries in this table are used to generate the ValidWriteIdList for a given > ValidTxnList to ensure snapshot isolation. > When a table or database is dropped, these entries are cleaned up. But it > is necessary to clean up entries for active tables too, for better performance. > Need another table, MIN_HISTORY_LEVEL, to maintain the lowest txn which > is referred to by any active ValidTxnList snapshot as an open/aborted txn. If no > references are found in this table for a txn, then it is eligible for cleanup. > After clean-up, need to maintain just one entry per table to mark the LWM (low > water mark). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18747) Cleaner for TXN_TO_WRITE_ID table entries.
[ https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18747: Summary: Cleaner for TXN_TO_WRITE_ID table entries. (was: Cleaner for TXNS_TO_WRITE_ID table entries.) > Cleaner for TXN_TO_WRITE_ID table entries. > -- > > Key: HIVE-18747 > URL: https://issues.apache.org/jira/browse/HIVE-18747 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Minor > Labels: ACID, replication > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) maintains a map between txn ID > and table write ID in the TXN_TO_WRITE_ID meta table. > The entries in this table are used to generate the ValidWriteIdList for a given > ValidTxnList to ensure snapshot isolation. > When a table or database is dropped, these entries are cleaned up. But it > is necessary to clean up entries for active tables too, for better performance. > Need another table, MIN_HISTORY_LEVEL, to maintain the lowest txn which > is referred to by any active ValidTxnList snapshot as an open/aborted txn. If no > references are found in this table for a txn, then it is eligible for cleanup. > After clean-up, need to maintain just one entry per table to mark the LWM (low > water mark). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
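The cleanup rule proposed in HIVE-18747 can be sketched as follows. This is a rough in-memory model, not Hive's metastore code: entries whose txn ID falls below the minimum txn still referenced by any active snapshot (the MIN_HISTORY_LEVEL idea) become unreachable and can be purged, keeping a single low-water-mark value per table.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch of the TXN_TO_WRITE_ID cleaner for one table (illustrative only).
public class TxnToWriteIdCleaner {
    // txnId -> writeId, ordered by txn id (stands in for the meta table rows)
    private final TreeMap<Long, Long> txnToWriteId = new TreeMap<>();
    private long lowWaterMarkWriteId = 0;

    public void allocate(long txnId, long writeId) {
        txnToWriteId.put(txnId, writeId);
    }

    // minReferencedTxnId would come from a MIN_HISTORY_LEVEL-like structure:
    // the lowest txn any active ValidTxnList snapshot still sees as open/aborted.
    public void clean(long minReferencedTxnId) {
        NavigableMap<Long, Long> purgeable = txnToWriteId.headMap(minReferencedTxnId, false);
        if (!purgeable.isEmpty()) {
            // Keep only the highest purged write id as the per-table LWM.
            lowWaterMarkWriteId = purgeable.lastEntry().getValue();
            purgeable.clear();
        }
    }

    public int entryCount() { return txnToWriteId.size(); }
    public long lowWaterMark() { return lowWaterMarkWriteId; }
}
```

After `clean`, a reader asking about a write ID at or below the LWM can treat it as committed without consulting individual map entries, which is what makes the purge safe.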
[jira] [Assigned] (HIVE-18751) get_splits UDF on ACID table scan doesn't receive ValidWriteIdList configuration.
[ https://issues.apache.org/jira/browse/HIVE-18751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-18751: --- > get_splits UDF on ACID table scan doesn't receive ValidWriteIdList > configuration. > - > > Key: HIVE-18751 > URL: https://issues.apache.org/jira/browse/HIVE-18751 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, UDF > Fix For: 3.0.0 > > > Per table write ID (HIVE-18192) has replaced the global transaction ID with a > write ID to version data files in ACID/MM tables. > To ensure snapshot isolation, we need to generate a ValidWriteIdList for the given > txn/table and use it when scanning the ACID/MM tables. > The get_splits UDF, which runs an ACID table scan query, won't receive > it properly through configuration and hence throws an exception. > TestAcidOnTez.testGetSplitsLocks is the test failing for this. Need to > fix it. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18751) ACID table scan through get_splits UDF doesn't receive ValidWriteIdList configuration.
[ https://issues.apache.org/jira/browse/HIVE-18751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18751: Summary: ACID table scan through get_splits UDF doesn't receive ValidWriteIdList configuration. (was: get_splits UDF on ACID table scan doesn't receive ValidWriteIdList configuration.) > ACID table scan through get_splits UDF doesn't receive ValidWriteIdList > configuration. > -- > > Key: HIVE-18751 > URL: https://issues.apache.org/jira/browse/HIVE-18751 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, UDF > Fix For: 3.0.0 > > > Per table write ID (HIVE-18192) have replaced global transaction ID with > write ID to version data files in ACID/MM tables, > To ensure snapshot isolation, need to generate ValidWriteIdList for the given > txn/table and use it when scan the ACID/MM tables. > In case of get_splits UDF which runs on ACID table scan query won't receive > it properly through configuration and hence throws exception. > TestAcidOnTez.testGetSplitsLocks is the test failing for the same. Need to > fix it. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18750) Exchange partition should not be supported with per table write ID.
[ https://issues.apache.org/jira/browse/HIVE-18750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18750: Description: Per table write id implementation (HIVE-18192) have introduced write ID per table and used write ID to name the delta/base files and also as primary key for each row. Now, exchange partition have to move delta/base files across tables without changing the write ID which causes incorrect results. Also, this exchange partition feature is there to support the use-case of atomic updates. But with ACID updates, we shall support atomic-updates and hence it makes sense to not support exchange partition for ACID and MM tables. The qtest file mm_exchangepartition.q test results to be updated after this change. was: Per table write id implementation (HIVE-18192) have introduced write ID per table and used write ID to name the delta/base files and also as primary key for each row. Now, exchange partition have to move delta/base files across tables without changing the write ID which causes incorrect results. Also, this exchange partition feature is there to support the use-case of atomic updates. But with ACID updates, we shall support atomic-updates and hence it makes sense to not support exchange partition for ACID and MM tables. > Exchange partition should not be supported with per table write ID. > --- > > Key: HIVE-18750 > URL: https://issues.apache.org/jira/browse/HIVE-18750 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, DDL > Fix For: 3.0.0 > > > Per table write id implementation (HIVE-18192) have introduced write ID per > table and used write ID to name the delta/base files and also as primary key > for each row. > Now, exchange partition have to move delta/base files across tables without > changing the write ID which causes incorrect results. 
> Also, the exchange partition feature exists to support the use-case of > atomic updates. But ACID already provides atomic updates, and > hence it makes sense not to support exchange partition for ACID and MM tables. > The qtest file mm_exchangepartition.q test results are to be updated after this > change. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-18754: --- Status: Patch Available (was: Open) Added support for the WITH clause, same as for the REPL LOAD command. > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch > > > We have support for the "WITH" clause in the "REPL LOAD" command, but we don't have > that for the "REPL STATUS" command. > With the cloud replication model for DLM 1.1, HiveServer2 is only running in > the source on-prem cluster. > "REPL LOAD"'s WITH clause is currently used to pass the remote cloud cluster's > metastore URI, using the "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on the source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for the "REPL STATUS" command to > what is required by the "REPL LOAD" command, as arbitrary configs can now be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18754) REPL STATUS should support 'with' clause
[ https://issues.apache.org/jira/browse/HIVE-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-18754: --- Attachment: HIVE-18754.01.patch > REPL STATUS should support 'with' clause > > > Key: HIVE-18754 > URL: https://issues.apache.org/jira/browse/HIVE-18754 > Project: Hive > Issue Type: Task > Components: repl, Standalone Metastore >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18754.01.patch > > > We have support for "WITH" clause in "REPL LOAD" command, but we don't have > that for "REPL STATUS" command. > With the cloud replication model for DLM 1.1, HiveServer2 is only running in > the source on-prem cluster. > "REPL LOAD"'s with clause is currently used to pass the remote cloud clusters > metastore uri, using "hive.metastore.uri" parameter. > Once "REPL LOAD" is run, "REPL STATUS" needs to be run to determine where the > next incremental replication should start from. Since "REPL STATUS" is also > going to run on source cluster, we need to add support for the "WITH" clause > for it. > We should also change the privilege required for "REPL STATUS" command to > what is required by "REPL LOAD" command as now arbitrary configs can be set > for "REPL STATUS" using the WITH clause. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18747) Cleaner for TXN_TO_WRITE_ID table entries.
[ https://issues.apache.org/jira/browse/HIVE-18747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18747: Labels: ACID (was: ACID replication) > Cleaner for TXN_TO_WRITE_ID table entries. > -- > > Key: HIVE-18747 > URL: https://issues.apache.org/jira/browse/HIVE-18747 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Minor > Labels: ACID > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) maintains a map between txn ID > and table write ID in TXN_TO_WRITE_ID meta table. > The entries in this table is used to generate ValidWriteIdList for the given > ValidTxnList to ensure snapshot isolation. > When table or database is dropped, then these entries are cleaned-up. But, it > is necessary to clean-up for active tables too for better performance. > Need to have another table MIN_HISTORY_LEVEL to maintain the least txn which > is referred by any active ValidTxnList snapshot as open/aborted txn. If no > references found in this table for any txn, then it is eligible for cleanup. > After clean-up, need to maintain just one entry per table to mark as LWM (low > water mark). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18752) HiveEndPoint: Optimise metastore calls to open transactions batch and allocate write Ids.
[ https://issues.apache.org/jira/browse/HIVE-18752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18752: Summary: HiveEndPoint: Optimise metastore calls to open transactions batch and allocate write Ids. (was: HiveEndPoint: Optimise opening batch transactions and getting write Ids for each transaction in the batch into single metastore api.) > HiveEndPoint: Optimise metastore calls to open transactions batch and > allocate write Ids. > - > > Key: HIVE-18752 > URL: https://issues.apache.org/jira/browse/HIVE-18752 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Metastore >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, Metastore, Streaming > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) has introduced write IDs and > maps them against txns. > Now, for streaming ingest, we need to open a batch of txns and then allocate a write > ID for each txn in the batch, which takes two metastore calls. > This can be optimised to use only one metastore API call. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
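The optimisation described in HIVE-18752 amounts to collapsing two round trips into one. The sketch below models that combined call in memory; the API shape is hypothetical (the real change would be a metastore thrift interface method), but it shows the intended contract: one call returns a batch of txn IDs already paired with their allocated write IDs.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the combined "open txns + allocate write ids" metastore call
// for streaming ingest into one table. Illustrative only.
public class StreamingTxnBatch {
    public static final class TxnWriteId {
        public final long txnId;
        public final long writeId;
        TxnWriteId(long txnId, long writeId) {
            this.txnId = txnId;
            this.writeId = writeId;
        }
    }

    private long nextTxnId = 1;
    private long nextWriteId = 1;  // per-table counter; a single table is assumed here

    // One logical round trip instead of two: open `count` txns and allocate
    // each one's write id in the same call.
    public List<TxnWriteId> openTxnsWithWriteIds(int count) {
        List<TxnWriteId> batch = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            batch.add(new TxnWriteId(nextTxnId++, nextWriteId++));
        }
        return batch;
    }
}
```

For a streaming client opening batches frequently, halving the metastore round trips is the whole point of the issue; the pairing also removes a window where a txn exists without a write ID.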
[jira] [Assigned] (HIVE-18753) Correct method and variable names to use writeId instead of transactionId.
[ https://issues.apache.org/jira/browse/HIVE-18753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-18753: --- > Correct method and variable names to use writeId instead of > transactionId. > > > Key: HIVE-18753 > URL: https://issues.apache.org/jira/browse/HIVE-18753 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, HiveServer2 > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) has replaced the global > transaction ID with a per table write ID to version the data files in a table. > Now, the class methods and variables that refer to ACID data files still > use the name transactionId where a write ID is meant. > So, those methods/variables/classes need to be renamed to say writeId > instead of transactionId. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17626) Query reoptimization using cached runtime statistics
[ https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369980#comment-16369980 ] Hive QA commented on HIVE-17626: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 
0m 15s{color} | {color:red} common: The patch generated 1 new + 420 unchanged - 0 fixed = 421 total (was 420) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 79 new + 759 unchanged - 117 fixed = 838 total (was 876) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / e0bf12d | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9273/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-9273/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-9273/yetus/whitespace-eol.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9273/yetus/patch-asflicense-problems.txt | | modules | C: common itests ql U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9273/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Query reoptimization using cached runtime statistics > > > Key: HIVE-17626 > URL: https://issues.apache.org/jira/browse/HIVE-17626 > Project: Hive > Issue Type: New Feature > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Prasanth Jayachandran >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-17626.01wip01.patch, runtimestats.patch > > > Something similar to "EXPLAIN ANALYZE", where we annotate the explain plan with > actual and estimated statistics. The runtime stats can be cached at query > level, and subsequent executions of the same query can make use of the cached > statistics from the previous run for better optimization. > Some use cases: > 1) re-planning join query
[jira] [Assigned] (HIVE-18748) Rename tables should update the table names in NEXT_WRITE_ID and TXN_TO_WRITE_ID tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-18748: --- > Rename tables should update the table names in NEXT_WRITE_ID and > TXN_TO_WRITE_ID tables. > - > > Key: HIVE-18748 > URL: https://issues.apache.org/jira/browse/HIVE-18748 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, DDL > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) introduces a couple of > meta tables, NEXT_WRITE_ID and TXN_TO_WRITE_ID, to manage write IDs > allocated per table. > Now, when we rename any table, it is necessary to update the corresponding > table names in these tables as well. Otherwise, ACID table operations won't > work properly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
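The failure mode HIVE-18748 guards against can be sketched as follows. This is an invented in-memory stand-in for the NEXT_WRITE_ID table, not Hive's metastore code: write-id state is keyed by table name, so a rename must re-key the entry or the counter would silently restart at 1 for the new name, corrupting ACID versioning.

```java
import java.util.HashMap;
import java.util.Map;

// Toy analogue of NEXT_WRITE_ID: next write id to allocate, keyed by "db.table".
public class WriteIdMetaStore {
    private final Map<String, Long> nextWriteId = new HashMap<>();

    public long allocate(String dbTable) {
        long id = nextWriteId.getOrDefault(dbTable, 1L);
        nextWriteId.put(dbTable, id + 1);
        return id;
    }

    // The fix the issue asks for: carry the counter over to the new name.
    public void renameTable(String oldName, String newName) {
        Long next = nextWriteId.remove(oldName);
        if (next != null) {
            nextWriteId.put(newName, next);
        }
    }
}
```

Without the `renameTable` re-keying, the first allocation after a rename would return 1 again, colliding with write IDs already baked into existing delta/base file names.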
[jira] [Updated] (HIVE-18749) Need to replace transactionId with per table writeId in RecordIdentifier.Field.transactionId
[ https://issues.apache.org/jira/browse/HIVE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18749: Labels: ACID (was: ) > Need to replace transactionId with per table writeId in > RecordIdentifier.Field.transactionId > > > Key: HIVE-18749 > URL: https://issues.apache.org/jira/browse/HIVE-18749 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Minor > Labels: ACID > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) has replaced the global > transaction ID with a write ID as the primary key for a row, marked by > RecordIdentifier.Field.transactionId. > Need to replace the same with writeId and update all test results files. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18341) Add repl load support for adding "raw" namespace for TDE with same encryption keys
[ https://issues.apache.org/jira/browse/HIVE-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16369835#comment-16369835 ] Lefty Leverenz commented on HIVE-18341: --- Doc note: This adds the configuration parameter *hive.repl.add.raw.reserved.namespace* to HiveConf.java. It is documented here (thanks, [~anishek]): * [hive.repl.add.raw.reserved.namespace | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.repl.add.raw.reserved.namespace] > Add repl load support for adding "raw" namespace for TDE with same encryption > keys > -- > > Key: HIVE-18341 > URL: https://issues.apache.org/jira/browse/HIVE-18341 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: anishek >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18341.0.patch, HIVE-18341.1.patch, > HIVE-18341.2.patch, HIVE-18341.3.patch > > > https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Running_as_the_superuser > "a new virtual path prefix, /.reserved/raw/, that gives superusers direct > access to the underlying block data in the filesystem. This allows superusers > to distcp data without needing having access to encryption keys, and also > avoids the overhead of decrypting and re-encrypting data." > We need to introduce a new option in "Repl Load" command that will change the > files being copied in distcp to have this "/.reserved/raw/" namespace before > the file paths. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
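The path rewrite HIVE-18341 introduces for REPL LOAD can be sketched as below. The `/.reserved/raw/` prefix is real HDFS behaviour (per the Transparent Encryption docs quoted above); the helper class and method are illustrative, not the patch's actual code.

```java
// Sketch: prefix an HDFS path with the raw-bytes namespace so distcp copies
// the underlying encrypted blocks without decrypt/re-encrypt overhead.
public class RawNamespace {
    static final String RAW_PREFIX = "/.reserved/raw";

    public static String toRawPath(String hdfsPath) {
        if (hdfsPath.startsWith(RAW_PREFIX + "/")) {
            return hdfsPath;  // already in the raw namespace
        }
        return RAW_PREFIX + hdfsPath;
    }
}
```

This only produces correct ciphertext on the destination when both clusters share the same encryption keys, which is exactly the constraint in the issue title.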
[jira] [Work started] (HIVE-16494) udaf percentile_approx() may fail on CBO
[ https://issues.apache.org/jira/browse/HIVE-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-16494 started by Miklos Gergely. - > udaf percentile_approx() may fail on CBO > > > Key: HIVE-16494 > URL: https://issues.apache.org/jira/browse/HIVE-16494 > Project: Hive > Issue Type: Bug > Components: CBO, Logical Optimizer, UDF >Reporter: Ashutosh Chauhan >Assignee: Miklos Gergely >Priority: Major > > select percentile_approx(key, array(0.50, 0.70, 0.90, 0.95, 0.99)) from t; > fails with error: The second argument must be a constant. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18706) Ensure each Yetus execution has its own separate working dir
[ https://issues.apache.org/jira/browse/HIVE-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370038#comment-16370038 ] Peter Vary commented on HIVE-18706: --- +1 > Ensure each Yetus execution has its own separate working dir > > > Key: HIVE-18706 > URL: https://issues.apache.org/jira/browse/HIVE-18706 > Project: Hive > Issue Type: Improvement >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Major > Attachments: HIVE-18706.0.patch > > > Currently all Yetus executions started asynchronously by ptest are using the > same working directory. > This is not a problem in most cases because Yetus finishes in less than > 30 minutes for small patches. For some oversized patches, however, this may > take more time than the ptest test execution and thus overlap with the > next build. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
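The fix direction can be sketched as deriving a separate working directory from each execution's build id, so two concurrent Yetus runs can never share state. A minimal illustration (class and method names assumed, not the actual patch):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: give each Yetus execution its own working directory, keyed by the
// ptest build id, instead of a single shared one.
public class YetusWorkDirSketch {
    public static Path createWorkDir(Path base, String buildId) throws IOException {
        Path dir = base.resolve("yetus-exec-" + buildId);
        Files.createDirectories(dir); // idempotent if the dir already exists
        return dir;
    }
}
```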
[jira] [Updated] (HIVE-18706) Ensure each Yetus execution has its own separate working dir
[ https://issues.apache.org/jira/browse/HIVE-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Szita updated HIVE-18706: -- Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-18706) Ensure each Yetus execution has its own separate working dir
[ https://issues.apache.org/jira/browse/HIVE-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Szita updated HIVE-18706: -- Attachment: HIVE-18706.0.patch
[jira] [Assigned] (HIVE-16494) udaf percentile_approx() may fail on CBO
[ https://issues.apache.org/jira/browse/HIVE-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely reassigned HIVE-16494: - Assignee: Miklos Gergely
[jira] [Commented] (HIVE-17626) Query reoptimization using cached runtime statistics
[ https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370014#comment-16370014 ] Hive QA commented on HIVE-17626: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12911208/HIVE-17626.01wip01.patch {color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 200 failed/errored test(s), 13798 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=49) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_abort] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] (batchId=79) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[hook_order] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd1] (batchId=67) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[retry_failure_oom] (batchId=20) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[retry_failure_stat_changes] (batchId=20) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_functions] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join3] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join4] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join6] (batchId=42) org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=174) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=248) 
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1] (batchId=147) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=147) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dp_counter_mm] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dp_counter_non_mm] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_2] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainanalyze_2] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join32_lessSize] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join46] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[limit_join_transpose] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_partitioned] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mapjoin46] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=167) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[retry_failure] (batchId=162) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[semijoin_hint] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in_having] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_multi] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_select] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1] (batchId=170)
[jira] [Commented] (HIVE-18696) The partition folders might not get cleaned up properly in the HiveMetaStore.add_partitions_core method if an exception occurs
[ https://issues.apache.org/jira/browse/HIVE-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370047#comment-16370047 ] Marta Kuczora commented on HIVE-18696: -- Sometimes a ConcurrentModificationException also occurs when running the tests which check whether the folders are cleaned up properly. This is because there can still be running tasks adding new entries to the addedPartitions map while the finally block iterates through it. > The partition folders might not get cleaned up properly in the > HiveMetaStore.add_partitions_core method if an exception occurs > -- > > Key: HIVE-18696 > URL: https://issues.apache.org/jira/browse/HIVE-18696 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > > When trying to add multiple partitions and one of them cannot be created > successfully, none of the partitions are created, but the folders might not > be cleaned up properly. See the test case "testAddPartitionsOneInvalid" in > the TestAddPartitions test. > This is the problematic code in the HiveMetaStore.add_partitions_core method: > {code:java}
for (final Partition part : parts) {
  if (!part.getTableName().equals(tblName) || !part.getDbName().equals(dbName)) {
    throw new MetaException("Partition does not belong to target table "
        + dbName + "." + tblName + ": " + part);
  }
  boolean shouldAdd = startAddPartition(ms, part, ifNotExists);
  if (!shouldAdd) {
    existingParts.add(part);
    LOG.info("Not adding partition " + part + " as it already exists");
    continue;
  }
  final UserGroupInformation ugi;
  try {
    ugi = UserGroupInformation.getCurrentUser();
  } catch (IOException e) {
    throw new RuntimeException(e);
  }
  partFutures.add(threadPool.submit(new Callable() {
    @Override
    public Partition call() throws Exception {
      ugi.doAs(new PrivilegedExceptionAction() {
        @Override
        public Object run() throws Exception {
          try {
            boolean madeDir = createLocationForAddedPartition(table, part);
            if (addedPartitions.put(new PartValEqWrapper(part), madeDir) != null) {
              // Technically, for ifNotExists case, we could insert one and discard the other
              // because the first one now "exists", but it seems better to report the problem
              // upstream as such a command doesn't make sense.
              throw new MetaException("Duplicate partitions in the list: " + part);
            }
            initializeAddedPartition(table, part, madeDir);
          } catch (MetaException e) {
            throw new IOException(e.getMessage(), e);
          }
          return null;
        }
      });
      return part;
    }
  }));
}
{code}
> When going through the partitions, let's say the threads for the first two > partitions are successfully submitted to create the folders, but an exception > occurs for the third partition before its thread is submitted. (This can > happen if the partition has a different table or db name than the others, or > an invalid value.) > In this case the execution jumps to the finally block, where the folders in > the "addedPartitions" map are cleaned up. However, it can happen that the > threads for the first two partitions have not yet finished creating the > folders, so the map can be empty or contain only one of the partitions.
> This issue also happens in the HiveMetastore.add_partitions_pspec_core > method, as this code part is the same as in the add_partitions_core method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
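One way to make the cleanup safe is to wait for every submitted folder-creation task before iterating the map (a ConcurrentHashMap also avoids the ConcurrentModificationException seen in the tests). The following is a simplified sketch of that idea with hypothetical helper names, not the committed fix:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Future;

// Sketch: before cleaning up partition folders, block until every submitted
// folder-creation task has finished, so addedPartitions is stable and holds
// exactly the folders that were actually created.
public class PartitionCleanupSketch {
    public static List<String> settleAndCollect(List<Future<?>> futures,
            ConcurrentHashMap<String, Boolean> addedPartitions) {
        for (Future<?> f : futures) {
            try {
                f.get();                  // wait for the task to finish (or fail)
            } catch (Exception ignored) { // a failed task created nothing to clean
            }
        }
        // Safe to iterate now: no task can still be mutating the map.
        return new ArrayList<>(addedPartitions.keySet());
    }
}
```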
[jira] [Commented] (HIVE-18706) Ensure each Yetus execution has its own separate working dir
[ https://issues.apache.org/jira/browse/HIVE-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370025#comment-16370025 ] Hive QA commented on HIVE-18706: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 52 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 8m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / e0bf12d | | Default Java | 1.8.0_111 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-9274/yetus/patch-asflicense-problems.txt | | modules | C: testutils/ptest2 U: testutils/ptest2 | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-9274/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HIVE-18706) Ensure each Yetus execution has its own separate working dir
[ https://issues.apache.org/jira/browse/HIVE-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370009#comment-16370009 ] Adam Szita commented on HIVE-18706: --- [~pvary] can you take a look on this please?
[jira] [Commented] (HIVE-18748) Rename table should update the table names in NEXT_WRITE_ID and TXN_TO_WRITE_ID tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16370268#comment-16370268 ] Eugene Koifman commented on HIVE-18748: --- How does "rename" get surfaced to the end user? Via Alter Table? I don't think there is anything else anywhere in the acid system that handles rename of a db.table value. This probably needs to be a comprehensive change. (Or we can explore using a table ID of some sort) > Rename table should update the table names in NEXT_WRITE_ID and > TXN_TO_WRITE_ID tables. > > > Key: HIVE-18748 > URL: https://issues.apache.org/jira/browse/HIVE-18748 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: ACID, DDL > Fix For: 3.0.0 > > > Per table write ID implementation (HIVE-18192) introduces a couple of > metatables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID to manage the write ids > allocated per table. > Now, when we rename any table, it is necessary to update the corresponding > table names in these tables as well. Otherwise, ACID table operations won't > work properly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
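The shape of such an update could look like the following sketch. Note that the column names used here (T2W_DATABASE, T2W_TABLE, NWI_DATABASE, NWI_TABLE) are assumptions for illustration and may not match the actual metastore schema; real code would also use prepared statements rather than string concatenation:

```java
// Hypothetical sketch: on ALTER TABLE ... RENAME, the write-id metastore
// tables must be updated to the new table name as well. Column names are
// assumed, not the actual schema; plain concatenation is for brevity only.
public class RenameWriteIdTableSketch {
    public static String[] renameStatements(String db, String oldTable, String newTable) {
        return new String[] {
            "UPDATE TXN_TO_WRITE_ID SET T2W_TABLE = '" + newTable
                + "' WHERE T2W_DATABASE = '" + db + "' AND T2W_TABLE = '" + oldTable + "'",
            "UPDATE NEXT_WRITE_ID SET NWI_TABLE = '" + newTable
                + "' WHERE NWI_DATABASE = '" + db + "' AND NWI_TABLE = '" + oldTable + "'"
        };
    }
}
```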
[jira] [Updated] (HIVE-18696) The partition folders might not get cleaned up properly in the HiveMetaStore.add_partitions_core method if an exception occurs
[ https://issues.apache.org/jira/browse/HIVE-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marta Kuczora updated HIVE-18696: - Status: Patch Available (was: Open) > The partition folders might not get cleaned up properly in the > HiveMetaStore.add_partitions_core method if an exception occurs > -- > > Key: HIVE-18696 > URL: https://issues.apache.org/jira/browse/HIVE-18696 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Major > Attachments: HIVE-18696.1.patch