[jira] [Commented] (HIVE-19302) Logging Too Verbose For TableNotFound
[ https://issues.apache.org/jira/browse/HIVE-19302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615607#comment-16615607 ]

Hive QA commented on HIVE-19302:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939650/HIVE-19302.1.patch

ERROR: -1 due to no test(s) being added or modified.
ERROR: -1 due to 2 failed/errored test(s), 14940 tests executed

Failed tests:
{noformat}
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=250)
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty (batchId=250)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13797/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13797/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13797/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939650 - PreCommit-HIVE-Build

> Logging Too Verbose For TableNotFound
> -
>
>              Key: HIVE-19302
>              URL: https://issues.apache.org/jira/browse/HIVE-19302
>          Project: Hive
>       Issue Type: Sub-task
>       Components: HiveServer2
> Affects Versions: 2.2.0, 3.0.0
>         Reporter: BELUGA BEHR
>         Assignee: Alice Fan
>         Priority: Minor
>      Attachments: HIVE-19302.1.patch, table_not_found_cdh6.txt
>
> There is way too much logging when a user submits a query against a table
> which does not exist. In an ad-hoc setting, it is quite normal that a user
> fat-fingers a table name. Yet, from the perspective of the Hive
> administrator, there was perhaps a major issue based on the volume and
> severity of logging. Please change the logging to INFO level, and do not
> present a stack trace, for such a trivial error.
>
> See the attached file for a sample of what logging a single "table not found"
> query generates.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
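The behavior the reporter asks for can be sketched in a few lines. This is a hypothetical illustration, not the HIVE-19302 patch itself: the class and method names are made up, and it uses `java.util.logging` rather than Hive's actual logging setup.

```java
// Hypothetical sketch of the requested behavior: report a missing table at
// INFO level, message only. No Throwable is passed to the logger, so no
// stack trace is emitted for this trivial, user-caused error.
import java.util.logging.Level;
import java.util.logging.Logger;

public class TableNotFoundLogging {

    private static final Logger LOG =
        Logger.getLogger(TableNotFoundLogging.class.getName());

    // Build the one-line message; kept as a separate method so it is easy to test.
    static String missingTableMessage(String tableName) {
        return "Table not found: " + tableName;
    }

    static void reportMissingTable(String tableName) {
        // INFO severity, no exception argument: a fat-fingered table name
        // produces a single log line instead of pages of ERROR stack traces.
        LOG.log(Level.INFO, missingTableMessage(tableName));
    }

    public static void main(String[] args) {
        reportMissingTable("web_sales_2018"); // typo'd name from an ad-hoc query
    }
}
```

The key point is the logger call shape: passing only a message (and no `Throwable`) is what keeps the stack trace out of the log.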
[jira] [Commented] (HIVE-19302) Logging Too Verbose For TableNotFound
[ https://issues.apache.org/jira/browse/HIVE-19302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615603#comment-16615603 ]

Hive QA commented on HIVE-19302:

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| +1 | mvninstall | 8m 34s | master passed |
| +1 | compile | 1m 6s | master passed |
| +1 | checkstyle | 0m 42s | master passed |
| 0 | findbugs | 4m 0s | ql in master has 2311 extant Findbugs warnings. |
| +1 | javadoc | 0m 59s | master passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 1m 27s | the patch passed |
| +1 | compile | 1m 6s | the patch passed |
| +1 | javac | 1m 6s | the patch passed |
| +1 | checkstyle | 0m 44s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 10s | the patch passed |
| +1 | javadoc | 0m 57s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 14s | The patch does not generate ASF License warnings. |
| | | 24m 33s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13797/dev-support/hive-personality.sh |
| git revision | master / 08d9083 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13797/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HIVE-20558) Change default of hive.hashtable.key.count.adjustment to 0.99
[ https://issues.apache.org/jira/browse/HIVE-20558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615596#comment-16615596 ]

Hive QA commented on HIVE-20558:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939649/HIVE-20558.patch

ERROR: -1 due to no test(s) being added or modified.
SUCCESS: +1 due to 14940 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13796/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13796/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13796/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939649 - PreCommit-HIVE-Build

> Change default of hive.hashtable.key.count.adjustment to 0.99
> -
>
>          Key: HIVE-20558
>          URL: https://issues.apache.org/jira/browse/HIVE-20558
>      Project: Hive
>   Issue Type: Improvement
>     Reporter: Ashutosh Chauhan
>     Assignee: Ashutosh Chauhan
>     Priority: Major
>  Attachments: HIVE-20558.patch
>
> Current default is 2
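Until the patch lands, the proposed value can be set explicitly. The property name and the value 0.99 come from the issue above; placing it in hive-site.xml (rather than, say, a per-session `set` command) is an assumption for illustration:

```
<property>
  <name>hive.hashtable.key.count.adjustment</name>
  <value>0.99</value>
</property>
```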
[jira] [Updated] (HIVE-20563) Vectorization: CASE WHEN expression fails when THEN/ELSE type and result type are different
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-20563:

Summary: Vectorization: CASE WHEN expression fails when THEN/ELSE type and result type are different (was: Vectorization: Exception in execution of CASE statement)

> Vectorization: CASE WHEN expression fails when THEN/ELSE type and result type
> are different
> -
>
>              Key: HIVE-20563
>              URL: https://issues.apache.org/jira/browse/HIVE-20563
>          Project: Hive
>       Issue Type: Bug
> Affects Versions: 4.0.0
>         Reporter: Jesus Camacho Rodriguez
>         Assignee: Matt McCline
>         Priority: Major
>      Attachments: HIVE-20563.01.patch
>
> With the following stacktrace:
> {code}
> java.lang.Exception: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) ~[hadoop-mapreduce-client-common-3.1.0.jar:?]
>   at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552) [hadoop-mapreduce-client-common-3.1.0.jar:?]
> Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>   at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) ~[hadoop-mapreduce-client-common-3.1.0.jar:?]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>   at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:973) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:154) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>   at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) ~[hadoop-mapreduce-client-common-3.1.0.jar:?]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
>   at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating cstring1
>   at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:136) >
[jira] [Updated] (HIVE-20563) Vectorization: Exception in execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-20563:

Summary: Vectorization: Exception in execution of CASE statement (was: Exception in vectorization execution of CASE statement)
[jira] [Commented] (HIVE-20563) Exception in vectorization execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615584#comment-16615584 ]

Matt McCline commented on HIVE-20563:

The patch doesn't vectorize the vertex anymore and has this message:
{noformat}
notVectorizedReason: SELECT operator: Unable to vectorize CASE WHEN expression -- data type float of THEN/ELSE expression is different than the result type string. Conversion is not supported
{noformat}
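The notVectorizedReason above corresponds to a query of roughly this shape. This is a hypothetical illustration only; the actual repro is not included in this thread, and the table and column names are made up (only `cstring1` appears in the stack trace):

```
-- Hypothetical HiveQL illustrating the unsupported pattern: the THEN branch
-- yields FLOAT while the ELSE branch and the overall result are STRING, so
-- with the patch the vectorizer falls back instead of failing at runtime.
SELECT CASE WHEN cstring1 IS NOT NULL THEN cfloat ELSE 'n/a' END
FROM alltypes_demo;
```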
[jira] [Updated] (HIVE-20563) Exception in vectorization execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-20563:

Attachment: HIVE-20563.01.patch
[jira] [Updated] (HIVE-20563) Exception in vectorization execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-20563:

Status: Patch Available (was: Open)
[jira] [Commented] (HIVE-20563) Exception in vectorization execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615575#comment-16615575 ]

Matt McCline commented on HIVE-20563:
-------------------------------------

[~jcamachorodriguez] thank you for the great repro!

> Exception in vectorization execution of CASE statement
> ------------------------------------------------------
>
>                 Key: HIVE-20563
>                 URL: https://issues.apache.org/jira/browse/HIVE-20563
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 4.0.0
>            Reporter: Jesus Camacho Rodriguez
>            Assignee: Matt McCline
>            Priority: Major
>
> With the following stacktrace:
> {code}
> java.lang.Exception: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>         at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) ~[hadoop-mapreduce-client-common-3.1.0.jar:?]
>         at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552) [hadoop-mapreduce-client-common-3.1.0.jar:?]
> Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>         at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) ~[hadoop-mapreduce-client-common-3.1.0.jar:?]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
>         at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row
>         at org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:973) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:154) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>         at org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) ~[hadoop-mapreduce-client-core-3.1.0.jar:?]
>         at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) ~[hadoop-mapreduce-client-common-3.1.0.jar:?]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_181]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_181]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_181]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_181]
>         at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181]
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating cstring1
>         at org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:136) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>         at
[jira] [Commented] (HIVE-20420) Provide a fallback authorizer when no other authorizer is in use
[ https://issues.apache.org/jira/browse/HIVE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615573#comment-16615573 ] Hive QA commented on HIVE-20420: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939627/HIVE-20420.7.patch {color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14948 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13795/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13795/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13795/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12939627 - PreCommit-HIVE-Build > Provide a fallback authorizer when no other authorizer is in use > > > Key: HIVE-20420 > URL: https://issues.apache.org/jira/browse/HIVE-20420 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20420.1.patch, HIVE-20420.2.patch, > HIVE-20420.3.patch, HIVE-20420.4.patch, HIVE-20420.5.patch, > HIVE-20420.6.patch, HIVE-20420.7.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20420) Provide a fallback authorizer when no other authorizer is in use
[ https://issues.apache.org/jira/browse/HIVE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615559#comment-16615559 ] Hive QA commented on HIVE-20420: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 12s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s{color} | {color:red} ql: The patch generated 12 new + 1 unchanged - 0 fixed = 13 total (was 1) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13795/dev-support/hive-personality.sh | | git revision | master / 08d9083 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13795/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13795/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Provide a fallback authorizer when no other authorizer is in use > > > Key: HIVE-20420 > URL: https://issues.apache.org/jira/browse/HIVE-20420 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20420.1.patch, HIVE-20420.2.patch, > HIVE-20420.3.patch, HIVE-20420.4.patch, HIVE-20420.5.patch, > HIVE-20420.6.patch, HIVE-20420.7.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20519) Remove 30m min value for hive.spark.session.timeout
[ https://issues.apache.org/jira/browse/HIVE-20519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615553#comment-16615553 ] Hive QA commented on HIVE-20519: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939634/HIVE-20519.2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14944 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testComplexQuery (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13794/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13794/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13794/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12939634 - PreCommit-HIVE-Build > Remove 30m min value for hive.spark.session.timeout > --- > > Key: HIVE-20519 > URL: https://issues.apache.org/jira/browse/HIVE-20519 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-20519.1.patch, HIVE-20519.2.patch > > > In HIVE-14162 we added the config \{{hive.spark.session.timeout}} which > provided a way to time out Spark sessions that are active for a long period > of time. The config has a lower bound of 30m which we should remove. It > should be possible for users to configure this value so the HoS session is > closed as soon as the query is complete. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
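For illustration, the kind of lower-bound check the patch above removes can be sketched as follows. This is not HiveConf's actual validator code; the class and method names here are hypothetical, and real Hive duration configs accept more unit suffixes than this simplified parser does:

```java
import java.util.concurrent.TimeUnit;

public class TimeoutLowerBoundSketch {
    // Hypothetical, simplified parser for durations like "30m", "90s", "1h".
    static long toSeconds(String value) {
        char unit = value.charAt(value.length() - 1);
        long n = Long.parseLong(value.substring(0, value.length() - 1));
        switch (unit) {
            case 's': return n;
            case 'm': return TimeUnit.MINUTES.toSeconds(n);
            case 'h': return TimeUnit.HOURS.toSeconds(n);
            default: throw new IllegalArgumentException("unknown time unit: " + unit);
        }
    }

    // The kind of check HIVE-20519 removes: reject configured timeouts
    // below a fixed floor (30m in the original config definition).
    static void checkLowerBound(String value, String floor) {
        if (toSeconds(value) < toSeconds(floor)) {
            throw new IllegalArgumentException(
                value + " is below the minimum allowed value " + floor);
        }
    }
}
```

With the 30m floor in place, a user setting `hive.spark.session.timeout=5m` would be rejected; removing the floor lets sessions be timed out as soon as a query completes.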
[jira] [Commented] (HIVE-20519) Remove 30m min value for hive.spark.session.timeout
[ https://issues.apache.org/jira/browse/HIVE-20519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615547#comment-16615547 ] Hive QA commented on HIVE-20519: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 42s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 2s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 12 new + 153 unchanged - 0 fixed = 165 total (was 153) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13794/dev-support/hive-personality.sh | | git revision | master / 08d9083 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13794/yetus/diff-checkstyle-ql.txt | | modules | C: common itests/hive-unit ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13794/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Remove 30m min value for hive.spark.session.timeout > --- > > Key: HIVE-20519 > URL: https://issues.apache.org/jira/browse/HIVE-20519 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-20519.1.patch, HIVE-20519.2.patch > > > In HIVE-14162 we added the config \{{hive.spark.session.timeout}} which > provided a way to time out Spark sessions that are active for a long period > of time. The config has a lower bound of 30m which we should remove. It > should be possible for users to configure this value so the HoS session is > closed as soon as the query is complete. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Misha Dmitriev updated HIVE-17684:
----------------------------------
    Status: Patch Available  (was: In Progress)

> HoS memory issues with MapJoinMemoryExhaustionHandler
> -----------------------------------------------------
>
>                 Key: HIVE-17684
>                 URL: https://issues.apache.org/jira/browse/HIVE-17684
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Misha Dmitriev
>            Priority: Major
>         Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, HIVE-17684.09.patch
>
> We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect scenarios where the small table is taking too much space in memory, in which case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler uses the {{MemoryMXBean}} and the following logic to estimate how much memory the {{HashMap}} is consuming: {{MemoryMXBean#getHeapMemoryUsage().getUsed() / MemoryMXBean#getHeapMemoryUsage().getMax()}}
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be inaccurate: the value it returns includes all reachable and unreachable memory on the heap, so there may be a bunch of garbage data that the JVM just hasn't taken the time to reclaim. This can lead to intermittent failures of this check even though a simple GC would have reclaimed enough space for the process to continue working.
> We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. In Hive-on-MR this probably made sense because every Hive task ran in a dedicated container, so a Hive task could assume it created most of the data on the heap. However, in Hive-on-Spark there can be multiple Hive tasks running in a single executor, each doing different things.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
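The heap-fraction estimate described in the issue can be reproduced directly against the JVM. Below is a minimal standalone sketch, not Hive's actual handler; the class and method names are illustrative, and the 0.90 threshold is the {{hive.mapjoin.localtask.max.memory.usage}} default quoted above. Note the key caveat from the description: {{getUsed()}} counts not-yet-collected garbage, so the fraction can be high even when a GC would free plenty of space.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapFractionSketch {
    // Fraction of the max heap currently reported as used. This counts
    // unreachable (not-yet-collected) objects too, which is exactly the
    // inaccuracy the issue describes.
    static double heapUsedFraction() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return (double) heap.getUsed() / heap.getMax();
    }

    // Mimics the handler's decision: abort once usage crosses the threshold.
    static boolean wouldAbort(double fraction, double threshold) {
        return fraction > threshold;
    }

    public static void main(String[] args) {
        System.out.println("heap used fraction: " + heapUsedFraction());
    }
}
```

A GC between two calls to `heapUsedFraction()` can drop the reported value substantially, which is why a check based on this number alone fires intermittently.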
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HIVE-17684: -- Status: In Progress (was: Patch Available) > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, > HIVE-17684.09.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HIVE-17684: -- Attachment: HIVE-17684.09.patch > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, > HIVE-17684.09.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20549) Allow user set query tag, and kill query with tag
[ https://issues.apache.org/jira/browse/HIVE-20549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615538#comment-16615538 ] Hive QA commented on HIVE-20549: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939622/HIVE-20549.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 14939 tests executed *Failed tests:* {noformat} org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=251) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveBackKill (batchId=251) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=251) org.apache.hive.jdbc.TestTriggersNoTezSessionPool.testTriggerDAGTotalTasks (batchId=249) org.apache.hive.jdbc.TestTriggersNoTezSessionPool.testTriggerSlowQueryExecutionTime (batchId=249) org.apache.hive.jdbc.TestTriggersNoTezSessionPool.testTriggerTotalLaunchedTasks (batchId=249) org.apache.hive.jdbc.TestTriggersNoTezSessionPool.testTriggerVertexTotalTasks (batchId=249) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testMultipleTriggers1 (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testMultipleTriggers2 (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsMultiInsert (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedFiles (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomReadOps (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerDagRawInputSplitsKill (batchId=251) 
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerDagTotalTasks (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerDefaultRawInputSplits (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesWrite (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerShortQueryElapsedTime (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerTotalTasks (batchId=251) org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerVertexRawInputSplitsKill (batchId=251) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13793/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13793/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13793/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 25 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12939622 - PreCommit-HIVE-Build > Allow user set query tag, and kill query with tag > - > > Key: HIVE-20549 > URL: https://issues.apache.org/jira/browse/HIVE-20549 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20549.1.patch, HIVE-20549.2.patch > > > HIVE-19924 add capacity for replication job set a query tag and kill the > replication distcp job with the tag. Here I make it more general, user can > set arbitrary "hive.query.tag" in sql script, and kill query with the tag. > Hive will cancel the corresponding operation in hs2, along with Tez/MR > application launched for the query. For example: > {code} > set hive.query.tag=mytag; > select . -- long running query > {code} > In another session: > {code} > kill query 'mytag'; > {code} > There're limitations in the implementation: > 1. No tag duplication check. There's nothing to prevent conflicting tag for > same user, and kill query will kill queries share the same tag. However, kill > query will not kill queries from different user unless admin. So different > user might share the same tag > 2. In multiple
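The kill semantics described above (the tag must match, and a non-admin caller can only kill their own queries) can be sketched as a small predicate. This is an illustrative sketch, not the actual HiveServer2 implementation; all names here are hypothetical:

```java
public class KillByTagSketch {
    // Returns true if a caller may kill a running query, per the rules in
    // the description: the requested tag must match the query's tag, and
    // the caller must be either an admin or the owner of the query.
    static boolean mayKill(String callerUser, boolean callerIsAdmin,
                           String queryOwner, String queryTag, String requestedTag) {
        if (queryTag == null || !queryTag.equals(requestedTag)) {
            return false; // no tag match: this query is not a kill target
        }
        return callerIsAdmin || callerUser.equals(queryOwner);
    }
}
```

This also makes the stated limitation concrete: two users who happen to pick the same tag do not endanger each other, because the owner check fails, but one user's `kill query 'mytag'` kills all of that user's queries sharing the tag.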
[jira] [Commented] (HIVE-20549) Allow user set query tag, and kill query with tag
[ https://issues.apache.org/jira/browse/HIVE-20549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615536#comment-16615536 ] Hive QA commented on HIVE-20549: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 54s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 37s{color} | {color:red} root in master failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 48s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 10s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 38s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 58s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 42s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 42s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} common: The patch generated 1 new + 425 unchanged - 0 fixed = 426 total (was 425) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s{color} | {color:red} root: The patch generated 1 new + 425 unchanged - 0 fixed = 426 total (was 425) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} itests/hive-unit: The patch generated 7 new + 7 unchanged - 1 fixed = 14 total (was 8) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} service: The patch generated 34 new + 25 unchanged - 0 fixed = 59 total (was 25) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch 4 line(s) with tabs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13793/dev-support/hive-personality.sh | | git revision | master / 08d9083 | | Default Java | 1.8.0_111 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13793/yetus/branch-compile-root.txt | | findbugs | v3.0.0 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13793/yetus/patch-compile-root.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-13793/yetus/patch-compile-root.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13793/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13793/yetus/diff-checkstyle-root.txt | | checkstyle
[jira] [Updated] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
[ https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20556: --- Attachment: HIVE-20556.2.patch Status: Patch Available (was: Open) > Expose an API to retrieve the TBL_ID from TBLS in the metastore tables > -- > > Key: HIVE-20556 > URL: https://issues.apache.org/jira/browse/HIVE-20556 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20556.1.patch, HIVE-20556.2.patch > > > We have two options to do this > 1) Use the current MTable and add a field for this value > 2) Add an independent API call to the metastore that would return the TBL_ID. > Option 1 is preferable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
[ https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20556: --- Status: Open (was: Patch Available) > Expose an API to retrieve the TBL_ID from TBLS in the metastore tables > -- > > Key: HIVE-20556 > URL: https://issues.apache.org/jira/browse/HIVE-20556 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20556.1.patch > > > We have two options to do this > 1) Use the current MTable and add a field for this value > 2) Add an independent API call to the metastore that would return the TBL_ID. > Option 1 is preferable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615513#comment-16615513 ] Misha Dmitriev commented on HIVE-17684: --- [~stakiar] looks like the test failures are due to some silly NumberFormatException somewhere in the new code. I'll see if I can fix that. > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
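The heap-usage estimate described in this issue can be sketched with the standard `java.lang.management` API. This is a minimal illustration of the check, not Hive's actual `MapJoinMemoryExhaustionHandler` code; the method names here are invented for the sketch, and 0.90 is only quoted from the default of {{hive.mapjoin.localtask.max.memory.usage}} above:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapUsageCheck {
    // Mirrors the estimate from the issue: used heap / max heap.
    // "Used" counts both reachable and unreachable (garbage) objects,
    // which is why the check can fire even though a GC would have
    // reclaimed enough space for the task to continue.
    static double heapRatio(long used, long max) {
        return (double) used / max;
    }

    // True when the ratio exceeds the configured threshold,
    // i.e. when a MapJoinMemoryExhaustionError would be thrown.
    static boolean wouldAbort(double ratio, double maxMemoryUsage) {
        return ratio > maxMemoryUsage;
    }

    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        double ratio = heapRatio(heap.getUsed(), heap.getMax());
        System.out.printf("heap ratio=%.2f abort=%b%n", ratio, wouldAbort(ratio, 0.90));
    }
}
```

Running the sketch right after a full GC versus right before one shows how volatile the ratio is, which is the intermittency the issue describes.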
[jira] [Commented] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
[ https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615512#comment-16615512 ] Eugene Koifman commented on HIVE-20556: --- it may be useful to add a test that ensures Table.id is actually set when you get the table from HMS > Expose an API to retrieve the TBL_ID from TBLS in the metastore tables > -- > > Key: HIVE-20556 > URL: https://issues.apache.org/jira/browse/HIVE-20556 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20556.1.patch > > > We have two options to do this > 1) Use the current MTable and add a field for this value > 2) Add an independent API call to the metastore that would return the TBL_ID. > Option 1 is preferable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20189) Separate metastore client code into its own module
[ https://issues.apache.org/jira/browse/HIVE-20189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615510#comment-16615510 ] Alexander Kolbasov commented on HIVE-20189: --- [~alangates] [~vihangk1] What are your thoughts on separating actual metastore client out of the standalone-metastore-common module? On the one hand it is a bit cleaner, on the other this module will contain just a couple of classes (HiveMetastoreClient and the retrying version) so benefits are not very clear. > Separate metastore client code into its own module > -- > > Key: HIVE-20189 > URL: https://issues.apache.org/jira/browse/HIVE-20189 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 4.0.0, 3.2.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > The goal of this JIRA is to split HiveMetastoreClient code out of > metastore-common. This is a pom-only change that does not require any changes > in the code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20564) Remove Hive Server dependency on Metastore Server
[ https://issues.apache.org/jira/browse/HIVE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615508#comment-16615508 ] Alexander Kolbasov commented on HIVE-20564: --- [~alangates] Yeah, I noticed a lot of tricky dependencies here, so it is probably best to leave things as is. > Remove Hive Server dependency on Metastore Server > - > > Key: HIVE-20564 > URL: https://issues.apache.org/jira/browse/HIVE-20564 > Project: Hive > Issue Type: Sub-task >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > Currently Hive Server2 still depends on some classes from Metastore Server - > we should break this dependency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18583) Enable DateRangeRules
[ https://issues.apache.org/jira/browse/HIVE-18583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-18583: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Nishant! > Enable DateRangeRules > -- > > Key: HIVE-18583 > URL: https://issues.apache.org/jira/browse/HIVE-18583 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-18583.2.patch, HIVE-18583.3.patch, > HIVE-18583.4.patch, HIVE-18583.5.patch, HIVE-18583.patch > > > Enable DateRangeRules to translate druid filters to date ranges. > Need calcite version to upgrade to 0.16.0 before merging this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20095) Fix jdbc external table feature
[ https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615501#comment-16615501 ] Hive QA commented on HIVE-20095: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939620/HIVE-20095.8.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13792/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13792/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13792/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12939620/HIVE-20095.8.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12939620 - PreCommit-HIVE-Build > Fix jdbc external table feature > --- > > Key: HIVE-20095 > URL: https://issues.apache.org/jira/browse/HIVE-20095 > Project: Hive > Issue Type: Bug >Reporter: Jonathan Doron >Assignee: Jonathan Doron >Priority: Major > Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, > HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, > HIVE-20095.6.patch, HIVE-20095.7.patch, HIVE-20095.7.patch, HIVE-20095.8.patch > > > It seems like the committed code for HIVE-19161 > (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of HIVE-18423 > updates, and therefore some of the external table queries are not working > correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615499#comment-16615499 ] Hive QA commented on HIVE-17684: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939619/HIVE-17684.08.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 111 failed/errored test(s), 14940 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketmapjoin7] (batchId=187) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning] (batchId=187) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_2] (batchId=189) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_3] (batchId=189) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_6] (batchId=188) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_recursive_mapjoin] (batchId=188) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] (batchId=188) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_use_ts_stats_for_mapjoin] (batchId=188) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=187) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_inner_join] (batchId=189) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join0] (batchId=189) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join1] (batchId=188) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join2] (batchId=187) 
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join3] (batchId=188) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join4] (batchId=190) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join5] (batchId=190) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join0] (batchId=149) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join10] (batchId=125) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join11] (batchId=113) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join12] (batchId=120) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join13] (batchId=146) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join14] (batchId=115) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join15] (batchId=116) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join17] (batchId=147) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join19] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join1] (batchId=145) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join20] (batchId=150) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join21] (batchId=147) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join22] (batchId=134) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join23] (batchId=117) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join24] (batchId=143) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join26] (batchId=115) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join29] (batchId=134) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join2] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join30] (batchId=123) 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join31] (batchId=130) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join3] (batchId=147) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join4] (batchId=141) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join5] (batchId=143) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join8] (batchId=148) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join9] (batchId=144) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_filters] (batchId=136) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_nulls] (batchId=140) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_stats2] (batchId=149) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_join_stats] (batchId=131) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_13]
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615495#comment-16615495 ] Hive QA commented on HIVE-17684: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 31s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 39s{color} | {color:red} root in master failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 33s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 41s{color} | {color:red} root in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 41s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} common: The patch generated 3 new + 425 unchanged - 0 fixed = 428 total (was 425) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s{color} | {color:red} root: The patch generated 3 new + 425 unchanged - 0 fixed = 428 total (was 425) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 5 new + 6 unchanged - 0 fixed = 11 total (was 6) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 27s{color} | {color:red} ql generated 1 new + 2310 unchanged - 1 fixed = 2311 total (was 2311) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Class org.apache.hadoop.hive.ql.exec.HashTableSinkOperator defines non-transient non-serializable instance field memoryExhaustionChecker In HashTableSinkOperator.java:instance field memoryExhaustionChecker In HashTableSinkOperator.java | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13791/dev-support/hive-personality.sh | | git revision | master / 5eaf0dd | | Default Java | 1.8.0_111 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13791/yetus/branch-compile-root.txt | | findbugs | v3.0.0 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13791/yetus/patch-compile-root.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-13791/yetus/patch-compile-root.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13791/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13791/yetus/diff-checkstyle-root.txt | | checkstyle |
[jira] [Commented] (HIVE-20563) Exception in vectorization execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615491#comment-16615491 ] Matt McCline commented on HIVE-20563: - The output type of the CASE WHEN seems different for each THEN/ELSE branch. {noformat} CASE WHEN (cint is not null) THEN (cint) WHEN (cfloat is not null) THEN (cfloat) WHEN (csmallint is not null) THEN (csmallint) ELSE (null) END {noformat} > Exception in vectorization execution of CASE statement > -- > > Key: HIVE-20563 > URL: https://issues.apache.org/jira/browse/HIVE-20563 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Matt McCline >Priority: Major > > With the following stacktrace: > {code} > java.lang.Exception: java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row > at > org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552) > [hadoop-mapreduce-client-common-3.1.0.jar:?] > Caused by: java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] 
> at > org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_181] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_181] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime > Error while processing row > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:973) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:154) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] 
> at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_181] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_181] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating > cstring1 > at > org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:136) >
[jira] [Updated] (HIVE-20545) Exclude large-sized parameters from serialization of Table and Partition thrift objects in HMS notifications
[ https://issues.apache.org/jira/browse/HIVE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-20545: Attachment: HIVE-20545.1.patch > Exclude large-sized parameters from serialization of Table and Partition > thrift objects in HMS notifications > > > Key: HIVE-20545 > URL: https://issues.apache.org/jira/browse/HIVE-20545 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.1.0, 4.0.0 >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-20545.1.patch > > > Clients can add large-sized parameters in Table/Partition objects. So we need > to enable adding regex patterns through HiveConf to match parameters to be > filtered from table and partition objects before serialization in HMS > notifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20545) Exclude large-sized parameters from serialization of Table and Partition thrift objects in HMS notifications
[ https://issues.apache.org/jira/browse/HIVE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-20545: Attachment: (was: HIVE-20545.1.patch) > Exclude large-sized parameters from serialization of Table and Partition > thrift objects in HMS notifications > > > Key: HIVE-20545 > URL: https://issues.apache.org/jira/browse/HIVE-20545 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.1.0, 4.0.0 >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-20545.1.patch > > > Clients can add large-sized parameters in Table/Partition objects. So we need > to enable adding regex patterns through HiveConf to match parameters to be > filtered from table and partition objects before serialization in HMS > notifications. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20568) There is no need to convert the dbname to pattern while pulling tablemeta
[ https://issues.apache.org/jira/browse/HIVE-20568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh updated HIVE-20568: -- Summary: There is no need to convert the dbname to pattern while pulling tablemeta (was: GetTablesOperation : There is no need to convert the dbname to pattern) > There is no need to convert the dbname to pattern while pulling tablemeta > - > > Key: HIVE-20568 > URL: https://issues.apache.org/jira/browse/HIVE-20568 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 0.4.0 > Environment: Hive-4,Java-8 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Minor > Attachments: HIVE-20568.patch > > > there is no need to convert the dbname to pattern, dbNamePattern is just a > dbName which we are passing to getTableMeta > https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java#L117 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20564) Remove Hive Server dependency on Metastore Server
[ https://issues.apache.org/jira/browse/HIVE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615462#comment-16615462 ] Alan Gates commented on HIVE-20564: --- We need to be careful here. HS2 can act as a metastore server, either by answering thrift calls or embedding the metastore locally. We don't want to break either of these features. > Remove Hive Server dependency on Metastore Server > - > > Key: HIVE-20564 > URL: https://issues.apache.org/jira/browse/HIVE-20564 > Project: Hive > Issue Type: Sub-task >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > Currently Hive Server2 still depends on some classes from Metastore Server - > we should break this dependency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20538) Allow to store a key value together with a transaction.
[ https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20538: --- Attachment: HIVE-20538.2.patch Status: Patch Available (was: Open) > Allow to store a key value together with a transaction. > --- > > Key: HIVE-20538 > URL: https://issues.apache.org/jira/browse/HIVE-20538 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore, Transactions >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch, > HIVE-20538.2.patch > > > This can be useful for example to know if a transaction has already happened. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20538) Allow to store a key value together with a transaction.
[ https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20538: --- Status: Open (was: Patch Available) > Allow to store a key value together with a transaction. > --- > > Key: HIVE-20538 > URL: https://issues.apache.org/jira/browse/HIVE-20538 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore, Transactions >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch > > > This can be useful for example to know if a transaction has already happened. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20095) Fix jdbc external table feature
[ https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615454#comment-16615454 ] Hive QA commented on HIVE-20095: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939620/HIVE-20095.8.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14941 tests executed *Failed tests:* {noformat} org.apache.hive.service.auth.TestCustomAuthentication.org.apache.hive.service.auth.TestCustomAuthentication (batchId=247) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13790/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13790/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13790/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12939620 - PreCommit-HIVE-Build > Fix jdbc external table feature > --- > > Key: HIVE-20095 > URL: https://issues.apache.org/jira/browse/HIVE-20095 > Project: Hive > Issue Type: Bug >Reporter: Jonathan Doron >Assignee: Jonathan Doron >Priority: Major > Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, > HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, > HIVE-20095.6.patch, HIVE-20095.7.patch, HIVE-20095.7.patch, HIVE-20095.8.patch > > > It seems like the committed code for HIVE-19161 > (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of HIVE-18423 > updates, and therefore some of the external table queries are not working > correctly. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20239) Do Not Print StackTraces to STDERR in MapJoinProcessor
[ https://issues.apache.org/jira/browse/HIVE-20239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615450#comment-16615450 ] Anurag Mantripragada commented on HIVE-20239: - [~belugabehr], the tests passed. Waiting for someone to commit it. > Do Not Print StackTraces to STDERR in MapJoinProcessor > -- > > Key: HIVE-20239 > URL: https://issues.apache.org/jira/browse/HIVE-20239 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Anurag Mantripragada >Priority: Minor > Labels: newbie, noob > Fix For: 4.0.0 > > Attachments: HIVE-20239.1.patch, HIVE-20239.2.patch, > HIVE-20239.3.patch > > > {code:java|title=MapJoinProcessor.java} > } catch (Exception e) { > e.printStackTrace(); > throw new SemanticException("Failed to generate new mapJoin operator " + > "by exception : " + e.getMessage()); > } > {code} > Please change to... something like... > {code} > } catch (Exception e) { > throw new SemanticException("Failed to generate new mapJoin operator", > e); > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
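The reason for the suggested change is that passing the caught exception as the cause preserves its stack trace inside the new exception, so nothing needs to be printed to STDERR. A minimal illustration of the same pattern (Python analogue of Java's `new SemanticException(msg, e)`; the class and functions here are stand-ins, not Hive code):

```python
class SemanticException(Exception):
    """Stand-in for Hive's SemanticException."""

def generate_map_join_operator():
    # Stand-in for the code that can fail inside MapJoinProcessor.
    raise ValueError("underlying failure")

def process():
    try:
        generate_map_join_operator()
    except Exception as e:
        # Chain the original exception instead of printing it: the
        # cause (and its traceback) travels along with the new error.
        raise SemanticException("Failed to generate new mapJoin operator") from e

try:
    process()
except SemanticException as err:
    print(type(err.__cause__).__name__)  # prints "ValueError"
```

Any logger or handler that catches the wrapping exception can then report the root cause, which is exactly what `e.printStackTrace()` was (noisily) trying to do.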
[jira] [Updated] (HIVE-20291) Allow HiveStreamingConnection to receive a WriteId
[ https://issues.apache.org/jira/browse/HIVE-20291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20291: --- Status: Open (was: Patch Available) > Allow HiveStreamingConnection to receive a WriteId > -- > > Key: HIVE-20291 > URL: https://issues.apache.org/jira/browse/HIVE-20291 > Project: Hive > Issue Type: Improvement >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20291.1.patch, HIVE-20291.2.patch, > HIVE-20291.3.patch, HIVE-20291.4.patch, HIVE-20291.5.patch, HIVE-20291.6.patch > > > If the writeId is received externally, the connection won't need to open connections to > the metastore. It won't be able to do the commit in this case either, so the commit must > be done by the entity passing the writeId. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20291) Allow HiveStreamingConnection to receive a WriteId
[ https://issues.apache.org/jira/browse/HIVE-20291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20291: --- Attachment: HIVE-20291.6.patch Status: Patch Available (was: Open) > Allow HiveStreamingConnection to receive a WriteId > -- > > Key: HIVE-20291 > URL: https://issues.apache.org/jira/browse/HIVE-20291 > Project: Hive > Issue Type: Improvement >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20291.1.patch, HIVE-20291.2.patch, > HIVE-20291.3.patch, HIVE-20291.4.patch, HIVE-20291.5.patch, HIVE-20291.6.patch > > > If the writeId is received externally, the connection won't need to open connections to > the metastore. It won't be able to do the commit in this case either, so the commit must > be done by the entity passing the writeId. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17040) Join elimination in the presence of FK relationship
[ https://issues.apache.org/jira/browse/HIVE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17040: --- Attachment: HIVE-17040.06.patch > Join elimination in the presence of FK relationship > --- > > Key: HIVE-17040 > URL: https://issues.apache.org/jira/browse/HIVE-17040 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-17040.01.patch, HIVE-17040.02.patch, > HIVE-17040.04.patch, HIVE-17040.05.patch, HIVE-17040.06.patch, > HIVE-17040.patch > > > If the PK/UK table is not filtered, we can safely remove the join. > A simple example: > {code:sql} > SELECT c_current_cdemo_sk > FROM customer JOIN customer_address > ON c_current_addr_sk = ca_address_sk; > {code} > As a Calcite rule, we could implement this rewriting by 1) matching a Project > on top of a Join operator, 2) checking that only columns from the FK are used > in the Project, 3) checking that the join condition matches the FK - PK/UK > relationship, 4) pulling all the predicates from the PK/UK side and checking > that the input is not filtered, and 5) removing the join, possibly adding an > IS NOT NULL condition on the join column from the FK side. > If the PK/UK table is filtered, we should still transform the Join into a > SemiJoin operator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17040) Join elimination in the presence of FK relationship
[ https://issues.apache.org/jira/browse/HIVE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17040: --- Attachment: (was: HIVE-17040.06.patch) > Join elimination in the presence of FK relationship > --- > > Key: HIVE-17040 > URL: https://issues.apache.org/jira/browse/HIVE-17040 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-17040.01.patch, HIVE-17040.02.patch, > HIVE-17040.04.patch, HIVE-17040.05.patch, HIVE-17040.06.patch, > HIVE-17040.patch > > > If the PK/UK table is not filtered, we can safely remove the join. > A simple example: > {code:sql} > SELECT c_current_cdemo_sk > FROM customer JOIN customer_address > ON c_current_addr_sk = ca_address_sk; > {code} > As a Calcite rule, we could implement this rewriting by 1) matching a Project > on top of a Join operator, 2) checking that only columns from the FK are used > in the Project, 3) checking that the join condition matches the FK - PK/UK > relationship, 4) pulling all the predicates from the PK/UK side and checking > that the input is not filtered, and 5) removing the join, possibly adding an > IS NOT NULL condition on the join column from the FK side. > If the PK/UK table is filtered, we should still transform the Join into a > SemiJoin operator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20568) GetTablesOperation : There is no need to convert the dbname to pattern
[ https://issues.apache.org/jira/browse/HIVE-20568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh reassigned HIVE-20568: - Assignee: Rajkumar Singh > GetTablesOperation : There is no need to convert the dbname to pattern > -- > > Key: HIVE-20568 > URL: https://issues.apache.org/jira/browse/HIVE-20568 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 0.4.0 > Environment: Hive-4,Java-8 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Minor > > There is no need to convert the dbName to a pattern; dbNamePattern is just a > dbName that we pass to getTableMeta: > https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/operation/GetTablesOperation.java#L117 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-17040) Join elimination in the presence of FK relationship
[ https://issues.apache.org/jira/browse/HIVE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-17040: --- Attachment: HIVE-17040.06.patch > Join elimination in the presence of FK relationship > --- > > Key: HIVE-17040 > URL: https://issues.apache.org/jira/browse/HIVE-17040 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-17040.01.patch, HIVE-17040.02.patch, > HIVE-17040.04.patch, HIVE-17040.05.patch, HIVE-17040.06.patch, > HIVE-17040.patch > > > If the PK/UK table is not filtered, we can safely remove the join. > A simple example: > {code:sql} > SELECT c_current_cdemo_sk > FROM customer JOIN customer_address > ON c_current_addr_sk = ca_address_sk; > {code} > As a Calcite rule, we could implement this rewriting by 1) matching a Project > on top of a Join operator, 2) checking that only columns from the FK are used > in the Project, 3) checking that the join condition matches the FK - PK/UK > relationship, 4) pulling all the predicates from the PK/UK side and checking > that the input is not filtered, and 5) removing the join, possibly adding an > IS NOT NULL condition on the join column from the FK side. > If the PK/UK table is filtered, we should still transform the Join into a > SemiJoin operator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
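The rewrite described in HIVE-17040 can be checked on a toy dataset. The following is an illustrative sketch (Python with sqlite3, hypothetical data, not Hive code): when the PK side is unfiltered, FK integrity holds, and only FK-side columns are projected, the join returns exactly the rows of the FK table whose join column is not null.

```python
import sqlite3

# Toy stand-ins for the customer / customer_address tables above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer_address (ca_address_sk INTEGER PRIMARY KEY);
    CREATE TABLE customer (c_current_cdemo_sk INTEGER,
                           c_current_addr_sk INTEGER
                             REFERENCES customer_address(ca_address_sk));
    INSERT INTO customer_address VALUES (1), (2);
    -- One customer has a NULL address; the join drops that row.
    INSERT INTO customer VALUES (10, 1), (20, 2), (30, NULL);
""")

with_join = conn.execute("""
    SELECT c_current_cdemo_sk
    FROM customer JOIN customer_address
      ON c_current_addr_sk = ca_address_sk
    ORDER BY 1
""").fetchall()

# The rewritten plan: no join, just an IS NOT NULL filter on the FK column.
without_join = conn.execute("""
    SELECT c_current_cdemo_sk
    FROM customer
    WHERE c_current_addr_sk IS NOT NULL
    ORDER BY 1
""").fetchall()

assert with_join == without_join  # both return [(10,), (20,)]
```

If the PK side were filtered (e.g. a predicate on customer_address), the two queries would diverge, which is why the rule instead produces a semijoin in that case.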
[jira] [Commented] (HIVE-20095) Fix jdbc external table feature
[ https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615417#comment-16615417 ] Hive QA commented on HIVE-20095: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 23s{color} | {color:blue} jdbc-handler in master has 8 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 16s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} jdbc-handler: The patch generated 2 new + 24 unchanged - 1 fixed = 26 total (was 25) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 33s{color} | {color:red} jdbc-handler generated 3 new + 8 unchanged - 0 fixed = 11 total (was 8) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:jdbc-handler | | | Exception is caught when Exception is not thrown in org.apache.hive.storage.jdbc.JdbcSerDe.initialize(Configuration, Properties) At JdbcSerDe.java:is not thrown in org.apache.hive.storage.jdbc.JdbcSerDe.initialize(Configuration, Properties) At JdbcSerDe.java:[line 114] | | | org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getColumnTypes(Configuration) may fail to clean up java.sql.ResultSet Obligation to clean up resource created at GenericJdbcDatabaseAccessor.java:up java.sql.ResultSet Obligation to clean up resource created at GenericJdbcDatabaseAccessor.java:[line 115] is not discharged | | | org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getColumnTypes(Configuration) may fail to clean up java.sql.Statement Obligation to clean up resource created at GenericJdbcDatabaseAccessor.java:up java.sql.Statement Obligation to clean up resource created at GenericJdbcDatabaseAccessor.java:[line 114] is not discharged | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13790/dev-support/hive-personality.sh | | git revision | master / 5eaf0dd | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13790/yetus/diff-checkstyle-jdbc-handler.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-13790/yetus/whitespace-eol.txt | | findbugs |
[jira] [Commented] (HIVE-20489) Explain plan of query hangs
[ https://issues.apache.org/jira/browse/HIVE-20489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615413#comment-16615413 ] Janaki Lahorani commented on HIVE-20489: A test case that covers this issue is pushed into master using HIVE-20526. > Explain plan of query hangs > --- > > Key: HIVE-20489 > URL: https://issues.apache.org/jira/browse/HIVE-20489 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20489.1.patch, HIVE-20489.2.patch, > HIVE-20489.3.patch, HIVE-20489.4.patch > > > Explain on a query that joins 47 views, in effect around 94 joins after view > expansion seems to take forever. The case here tries to generate a plan > using map join with conditional tasks. > When the task graph is huge with many paths, there can be a performance issue > during compilation. This is caused by recursive traversal of task graph in > internTableDesc and deriveFinalExplainAttributes. The use of recursion is > inefficient in a couple of ways. > * For large graphs the recursion was filling up the stack > * Instead of finding the map works, the traversal was walking all possible > paths from root causing a huge performance problem. > The fix is to replace the traversal from recursive to an iterative one, > keeping track of the nodes already visited. The fix uses getMRTasks, > getSparkTasks and getTezTasks to do iterative traversal. These calls were > changed to using iterative calls through HIVE-17195. When pushing this patch > to an older release, please make sure HIVE-17195 is also pushed to that > release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
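The fix pattern described in HIVE-20489 — replacing a recursive task-graph walk with an iterative traversal that tracks visited nodes — can be sketched as follows. This is an illustrative example, not the actual getMRTasks/getTezTasks code; the graph shape and names are hypothetical:

```python
from collections import deque

def collect_tasks(roots, children):
    """Iteratively walk a task DAG, visiting each node exactly once.

    `children` maps a task id to its child task ids. The visited set
    prevents re-walking shared sub-paths: without it, a walk of a
    diamond-heavy graph revisits the same node once per path from the
    root, which is the performance problem described above. Using an
    explicit stack instead of recursion also avoids stack overflow on
    very deep graphs.
    """
    visited = set()
    stack = deque(roots)
    order = []
    while stack:
        task = stack.pop()
        if task in visited:
            continue
        visited.add(task)
        order.append(task)
        stack.extend(children.get(task, ()))
    return order

# 'sink' is reachable via two paths but is visited only once.
graph = {"root": ["a", "b"], "a": ["sink"], "b": ["sink"]}
assert sorted(collect_tasks(["root"], graph)) == ["a", "b", "root", "sink"]
```

On a chain of n diamonds, the path-enumerating recursion does 2^n visits while this version does O(n), which matches the "walking all possible paths from root" observation in the comment.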
[jira] [Assigned] (HIVE-14431) Recognize COALESCE as CASE
[ https://issues.apache.org/jira/browse/HIVE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-14431: -- Assignee: Jesus Camacho Rodriguez (was: Remus Rusanu) > Recognize COALESCE as CASE > -- > > Key: HIVE-14431 > URL: https://issues.apache.org/jira/browse/HIVE-14431 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-14431.01.patch, HIVE-14431.03.patch, > HIVE-14431.04.patch, HIVE-14431.2.patch, HIVE-14431.patch > > > Transform: > {code} > (COALESCE(a, '') = '') OR >(a = 'A' AND b = c) OR >(a = 'B' AND b = d) OR >(a = 'C' AND b = e) OR >(a = 'D' AND b = f) OR >(a = 'E' AND b = g) OR >(a = 'F' AND b = h) > {code} > into: > {code} > (a='') OR >(a is null) OR >(a = 'A' AND b = c) OR >(a = 'B' AND b = d) OR >(a = 'C' AND b = e) OR >(a = 'D' AND b = f) OR >(a = 'E' AND b = g) OR >(a = 'F' AND b = h) > {code} > With complex queries, this will lead us to factor more predicates that could > be pushed to the TS. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-14431) Recognize COALESCE as CASE
[ https://issues.apache.org/jira/browse/HIVE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14431: --- Attachment: HIVE-14431.04.patch > Recognize COALESCE as CASE > -- > > Key: HIVE-14431 > URL: https://issues.apache.org/jira/browse/HIVE-14431 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-14431.01.patch, HIVE-14431.03.patch, > HIVE-14431.04.patch, HIVE-14431.2.patch, HIVE-14431.patch > > > Transform: > {code} > (COALESCE(a, '') = '') OR >(a = 'A' AND b = c) OR >(a = 'B' AND b = d) OR >(a = 'C' AND b = e) OR >(a = 'D' AND b = f) OR >(a = 'E' AND b = g) OR >(a = 'F' AND b = h) > {code} > into: > {code} > (a='') OR >(a is null) OR >(a = 'A' AND b = c) OR >(a = 'B' AND b = d) OR >(a = 'C' AND b = e) OR >(a = 'D' AND b = f) OR >(a = 'E' AND b = g) OR >(a = 'F' AND b = h) > {code} > With complex queries, this will lead us to factor more predicates that could > be pushed to the TS. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-14431) Recognize COALESCE as CASE
[ https://issues.apache.org/jira/browse/HIVE-14431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14431: --- Status: Patch Available (was: Open) > Recognize COALESCE as CASE > -- > > Key: HIVE-14431 > URL: https://issues.apache.org/jira/browse/HIVE-14431 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-14431.01.patch, HIVE-14431.03.patch, > HIVE-14431.04.patch, HIVE-14431.2.patch, HIVE-14431.patch > > > Transform: > {code} > (COALESCE(a, '') = '') OR >(a = 'A' AND b = c) OR >(a = 'B' AND b = d) OR >(a = 'C' AND b = e) OR >(a = 'D' AND b = f) OR >(a = 'E' AND b = g) OR >(a = 'F' AND b = h) > {code} > into: > {code} > (a='') OR >(a is null) OR >(a = 'A' AND b = c) OR >(a = 'B' AND b = d) OR >(a = 'C' AND b = e) OR >(a = 'D' AND b = f) OR >(a = 'E' AND b = g) OR >(a = 'F' AND b = h) > {code} > With complex queries, this will lead us to factor more predicates that could > be pushed to the TS. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
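The equivalence behind the HIVE-14431 transform — `COALESCE(a, '') = ''` versus `a = '' OR a IS NULL` — can be spot-checked with a small model. This is an illustrative sketch using Python's `None` for SQL NULL (it simplifies SQL's three-valued logic, since the final predicate here always evaluates to TRUE or FALSE):

```python
def coalesce(*args):
    """Return the first non-None argument, mirroring SQL COALESCE."""
    for v in args:
        if v is not None:
            return v
    return None

# Claim: COALESCE(a, '') = ''  <=>  a = '' OR a IS NULL.
# Check it over representative values of `a`, including NULL.
for a in [None, "", "A", "xyz"]:
    lhs = coalesce(a, "") == ""
    rhs = (a == "") or (a is None)
    assert lhs == rhs
```

Once COALESCE is expanded into the CASE/disjunction form, the `a = ''` and `a IS NULL` disjuncts sit at the same level as the other `a = ...` predicates, which is what lets the optimizer factor out predicates pushable to the TableScan.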
[jira] [Commented] (HIVE-20554) Unable to drop an external table after renaming it.
[ https://issues.apache.org/jira/browse/HIVE-20554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615386#comment-16615386 ] Krishnama Raju K commented on HIVE-20554: - NOTE : a. There is a Primary Key and Foreign Key relationship between these two tables( PARTITIONS.PART_ID is PK and PART_COL_STATS.PART_ID is FK ). b. This issue is ONLY for External Tables. Detailed Reproducing steps and Possible Fix == Step 1. Create a sample table (table003) and insert a few records into it {noformat} hive> select * from table003; OK a b 1 a b 2 a b 3 a b 4 a b 5 a b 6 a b 7 Time taken: 0.48 seconds, Fetched: 7 row(s) hive>{noformat} Step 2. After inserting data into the table, the PARTITIONS table is updated but the PART_COL_STATS table is not ( working as expected ) {code:java} mysql> mysql> select count(*) from TBLS, PARTITIONS where TBL_NAME = 'table003' and PARTITIONS.TBL_ID = TBLS.TBL_ID ; -- count -- 7 -- 1 row in set (0.00 sec) mysql> select db_name, table_name, count(*) from PART_COL_STATS group by db_name, table_name; Empty set (0.00 sec) {code} Step 3. Run the analyze command to populate PART_COL_STATS {noformat} hive> analyze table table003 partition (data_dt) COMPUTE STATISTICS FOR COLUMNS; Query ID = hive_20180912165312_39c4e8d6-f092-40fc-aeda-812c2c8079da Total jobs = 1 Launching Job 1 out of 1 Tez session was closed. Reopening... Session re-established. Status: Running (Executing on YARN cluster with App id application_1536713295508_0003) VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED Map 1 .. SUCCEEDED 1 1 0 0 0 0 Reducer 2 .. SUCCEEDED 1 1 0 0 0 0 VERTICES: 02/02 ==>> 100% ELAPSED TIME: 6.24 s OK Time taken: 12.387 seconds hive> mysql> select db_name, table_name, count(*) from PART_COL_STATS group by db_name, table_name; --- db_name table_name count --- default table003 14 --- 1 row in set (0.00 sec) {noformat} Step 4.
Rename the table and see the PARTITIONS table getting updated but not PART_COL_STATS {noformat} hive> alter table table003 rename to table004; OK Time taken: 0.363 seconds hive> mysql> select * from TBLS where lower(TBL_NAME) = 'table003'; Empty set (0.00 sec) mysql> select * from TBLS where lower(TBL_NAME) = 'table004'; - TBL_ID CREATE_TIME DB_ID LAST_ACCESS_TIME OWNER RETENTION SD_ID TBL_NAME TBL_TYPE VIEW_EXPANDED_TEXT VIEW_ORIGINAL_TEXT - 26 1536769737 1 0 admin 0 76 table004 EXTERNAL_TABLE NULL NULL - 1 row in set (0.00 sec) mysql> select count(*) from TBLS, PARTITIONS where TBL_NAME = 'table004' and PARTITIONS.TBL_ID = TBLS.TBL_ID ; -- count -- 7 -- 1 row in set (0.00 sec) mysql> select count(*) from TBLS, PARTITIONS where TBL_NAME = 'table003' and PARTITIONS.TBL_ID = TBLS.TBL_ID ; -- count -- 0 -- 1 row in set (0.00 sec) mysql> select db_name, table_name, count(*) from PART_COL_STATS group by db_name, table_name; --- db_name table_name count --- default table003 14 --- 1 row in set (0.00 sec) {noformat} Step 5. Failure to drop the renamed table. {noformat} hive> drop table table004; hive.log then shows the error below: Caused by: java.sql.BatchUpdateException: Cannot delete or update a parent row: a foreign key constraint fails ("hive"."PART_COL_STATS", CONSTRAINT "PART_COL_STATS_FK" FOREIGN KEY ("PART_ID") REFERENCES "PARTITIONS" ("PART_ID")) at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:2024) at com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1449) at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:424) at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:366) at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:676) at
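The failure mode above can be reproduced in miniature: deleting a PARTITIONS row while PART_COL_STATS still references it violates the foreign key, so the drop fails until the stale child rows are cleaned up. The sketch below uses SQLite with a trimmed-down, hypothetical version of the metastore schema (real MySQL behavior is analogous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite FKs are off by default
conn.executescript("""
    CREATE TABLE PARTITIONS (PART_ID INTEGER PRIMARY KEY);
    CREATE TABLE PART_COL_STATS (
        CS_ID INTEGER PRIMARY KEY,
        PART_ID INTEGER REFERENCES PARTITIONS(PART_ID));
    INSERT INTO PARTITIONS VALUES (1);
    INSERT INTO PART_COL_STATS VALUES (100, 1);
""")

# Dropping the partition while a stats row still points at it fails,
# mirroring the MySQL "foreign key constraint fails" error in hive.log.
try:
    conn.execute("DELETE FROM PARTITIONS WHERE PART_ID = 1")
    failed = False
except sqlite3.IntegrityError:
    failed = True
assert failed

# Removing the child rows first (what the rename path should keep in
# sync) lets the partition delete succeed.
conn.execute("DELETE FROM PART_COL_STATS WHERE PART_ID = 1")
conn.execute("DELETE FROM PARTITIONS WHERE PART_ID = 1")
```

This is consistent with Step 4 above: the rename updates PARTITIONS but leaves PART_COL_STATS pointing at the old table name, so the eventual drop trips over the orphaned stats rows.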
[jira] [Commented] (HIVE-17040) Join elimination in the presence of FK relationship
[ https://issues.apache.org/jira/browse/HIVE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615379#comment-16615379 ] Hive QA commented on HIVE-17040: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939612/HIVE-17040.05.patch {color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14943 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_constraints_optimization] (batchId=159) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[schemeAuthority] (batchId=188) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[list_bucket_dml_2] (batchId=114) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13789/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13789/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13789/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12939612 - PreCommit-HIVE-Build > Join elimination in the presence of FK relationship > --- > > Key: HIVE-17040 > URL: https://issues.apache.org/jira/browse/HIVE-17040 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-17040.01.patch, HIVE-17040.02.patch, > HIVE-17040.04.patch, HIVE-17040.05.patch, HIVE-17040.patch > > > If the PK/UK table is not filtered, we can safely remove the join. > A simple example: > {code:sql} > SELECT c_current_cdemo_sk > FROM customer JOIN customer_address > ON c_current_addr_sk = ca_address_sk; > {code} > As a Calcite rule, we could implement this rewriting by 1) matching a Project > on top of a Join operator, 2) checking that only columns from the FK are used > in the Project, 3) checking that the join condition matches the FK - PK/UK > relationship, 4) pulling all the predicates from the PK/UK side and checking > that the input is not filtered, and 5) removing the join, possibly adding a > IS NOT NULL condition on the join column from the FK side. > If the PK/UK table is filtered, we should still transform the Join into a > SemiJoin operator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20564) Remove Hive Server dependency on Metastore Server
[ https://issues.apache.org/jira/browse/HIVE-20564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov reassigned HIVE-20564: - Assignee: Alexander Kolbasov > Remove Hive Server dependency on Metastore Server > - > > Key: HIVE-20564 > URL: https://issues.apache.org/jira/browse/HIVE-20564 > Project: Hive > Issue Type: Sub-task >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > Currently Hive Server2 still depends on some classes from Metastore Server - > we should break this dependency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17040) Join elimination in the presence of FK relationship
[ https://issues.apache.org/jira/browse/HIVE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615374#comment-16615374 ] Hive QA commented on HIVE-17040: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 52s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 32s{color} | {color:red} root in master failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 4s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 46s{color} | {color:red} root in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 46s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 13 new + 173 unchanged - 7 fixed = 186 total (was 180) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 28 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 23s{color} | {color:red} ql generated 1 new + 2311 unchanged - 0 fixed = 2312 total (was 2311) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Dead store to p1 in org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveJoinConstraintsRule$EquivalenceClasses.addEquivalenceClass(RexTableInputRef, RexTableInputRef) At HiveJoinConstraintsRule.java:org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveJoinConstraintsRule$EquivalenceClasses.addEquivalenceClass(RexTableInputRef, RexTableInputRef) At HiveJoinConstraintsRule.java:[line 453] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13789/dev-support/hive-personality.sh | | git revision | master / 5eaf0dd | | Default Java | 1.8.0_111 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13789/yetus/branch-compile-root.txt | | findbugs | v3.0.0 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-13789/yetus/patch-compile-root.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-13789/yetus/patch-compile-root.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13789/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-13789/yetus/whitespace-eol.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13789/yetus/new-findbugs-ql.html | | modules | C: common . itests ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13789/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. >
[jira] [Commented] (HIVE-20563) Exception in vectorization execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615371#comment-16615371 ] Jesus Camacho Rodriguez commented on HIVE-20563: This is the explain vectorization plan: {code} PLAN VECTORIZATION: enabled: true enabledConditionsMet: [hive.vectorized.execution.enabled IS true] STAGE DEPENDENCIES: Stage-1 is a root stage Stage-0 depends on stages: Stage-1 STAGE PLANS: Stage: Stage-1 Map Reduce Map Operator Tree: TableScan Vectorization: native: true Filter Vectorization: className: VectorFilterOperator native: true predicateExpression: SelectColumnIsNull(col 5:double) Select Vectorization: className: VectorSelectOperator native: true projectedOutputColumnNums: [6, 2, 4, 1, 23] selectExpressions: IfExprColumnCondExpr(col 13:boolean, col 6:stringcol 22:string)(children: IsNotNull(col 6:string) -> 13:boolean, col 6:string, VectorUDFAdaptor(CASE WHEN (cint is not null) THEN (cint) WHEN (cfloat is not null) THEN (cfloat) WHEN (csmallint is not null) THEN (csmallint) ELSE (null) END)(children: IsNotNull(col 2:int) -> 18:boolean, IsNotNull(col 4:float) -> 19:boolean, IsNotNull(col 1:smallint) -> 21:boolean) -> 22:string) -> 23:string Reduce Sink Vectorization: className: VectorReduceSinkOperator native: false nativeConditionsMet: hive.vectorized.execution.reducesink.new.enabled IS true, No PTF TopN IS true, No DISTINCT columns IS true, BinarySortableSerDe for keys IS true, LazyBinarySerDe for values IS true nativeConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false Execution mode: vectorized Map Vectorization: enabled: true enabledConditionsMet: hive.vectorized.use.vectorized.input.format IS true inputFormatFeatureSupport: [DECIMAL_64] featureSupportInUse: [DECIMAL_64] inputFileFormats: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat allNative: false usesVectorUDFAdaptor: true vectorized: true Reduce Vectorization: enabled: false enableConditionsMet: hive.vectorized.execution.reduce.enabled 
IS true enableConditionsNotMet: hive.execution.engine mr IN [tez, spark] IS false Reduce Operator Tree: Stage: Stage-0 Fetch Operator {code} > Exception in vectorization execution of CASE statement > -- > > Key: HIVE-20563 > URL: https://issues.apache.org/jira/browse/HIVE-20563 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Matt McCline >Priority: Major > > With the following stacktrace: > {code} > java.lang.Exception: java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row > at > org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552) > [hadoop-mapreduce-client-common-3.1.0.jar:?] > Caused by: java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] 
> at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_181] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_181] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] > Caused by:
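For readers following the plan in the comment above: the inner numeric CASE is the part that falls back to VectorUDFAdaptor, while the outer IfExprColumnCondExpr prefers the string column when it is non-NULL. A minimal row-mode sketch of those semantics, in Python rather than Hive's vectorized Java (column names are taken from the plan; the query shape and string formatting are assumptions, not the actual reproducer):

```python
def case_value(cstring1, cint, cfloat, csmallint):
    """Row-mode semantics of the expression in the plan above (sketch only).

    Outer IfExprColumnCondExpr: pick cstring1 when it is non-NULL.
    Inner VectorUDFAdaptor-wrapped CASE: first non-NULL numeric,
    implicitly cast to string; ELSE (null).
    """
    if cstring1 is not None:                 # IsNotNull(col 6:string)
        return cstring1
    for v in (cint, cfloat, csmallint):      # cols 2, 4, 1 in the plan
        if v is not None:
            return str(v)                    # implicit cast to string
    return None                              # ELSE (null)
```

The vectorized path evaluates the same logic a batch of columns at a time, which is where the reported HiveException ("Error evaluating cstring1") surfaces.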
[jira] [Updated] (HIVE-20097) Convert standalone-metastore to a submodule
[ https://issues.apache.org/jira/browse/HIVE-20097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-20097: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Convert standalone-metastore to a submodule > --- > > Key: HIVE-20097 > URL: https://issues.apache.org/jira/browse/HIVE-20097 > Project: Hive > Issue Type: Sub-task > Components: Hive, Metastore, Standalone Metastore >Affects Versions: 3.1.0, 4.0.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20097.01.patch, HIVE-20097.02.patch, > HIVE-20097.03.patch, HIVE-20097.04.patch, HIVE-20097.05.patch, > HIVE-20097.06.patch, HIVE-20097.07-branch-3.patch, HIVE-20097.07.patch, > HIVE-20097.08-branch-3.patch, HIVE-20097.09.branch-3.patch > > > This is a subtask to stage HIVE-17751 changes into several smaller phases. > The first part is moving existing code in hive-standalone-metastore to a > sub-module. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20196) Remove MetastoreConf dependency on server-specific classes
[ https://issues.apache.org/jira/browse/HIVE-20196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-20196: -- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) > Remove MetastoreConf dependency on server-specific classes > -- > > Key: HIVE-20196 > URL: https://issues.apache.org/jira/browse/HIVE-20196 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 4.0.0, 3.2.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20196.01.patch, HIVE-20196.02.patch, > HIVE-20196.03.patch, HIVE-20196.04.patch > > > MetastoreConf has knowledge about some server-specific classes. We need to > separate these into a separate server-specific class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-20482) Remove dependency on metastore-server
[ https://issues.apache.org/jira/browse/HIVE-20482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov resolved HIVE-20482. --- Resolution: Fixed Fix Version/s: 4.0.0 > Remove dependency on metastore-server > - > > Key: HIVE-20482 > URL: https://issues.apache.org/jira/browse/HIVE-20482 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 3.0.1, 4.0.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Fix For: 4.0.0 > > > Now that we separated common and server classes we should remove dependency > on the server module from poms. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20390) Split TxnUtils into common and server parts.
[ https://issues.apache.org/jira/browse/HIVE-20390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov reassigned HIVE-20390: - Assignee: Alexander Kolbasov > Split TxnUtils into common and server parts. > > > Key: HIVE-20390 > URL: https://issues.apache.org/jira/browse/HIVE-20390 > Project: Hive > Issue Type: Sub-task >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Fix For: 4.0.0 > > > HiveMetastoreClient uses some static methods from TxnUtils which should move > to metastore-common package. Remaining server-specific methods should remain > in metastore-server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-20390) Split TxnUtils into common and server parts.
[ https://issues.apache.org/jira/browse/HIVE-20390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov resolved HIVE-20390. --- Resolution: Fixed Fix Version/s: 4.0.0 > Split TxnUtils into common and server parts. > > > Key: HIVE-20390 > URL: https://issues.apache.org/jira/browse/HIVE-20390 > Project: Hive > Issue Type: Sub-task >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Fix For: 4.0.0 > > > HiveMetastoreClient uses some static methods from TxnUtils which should move > to metastore-common package. Remaining server-specific methods should remain > in metastore-server. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20306) Implement projection spec for fetching only requested fields from partitions
[ https://issues.apache.org/jira/browse/HIVE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615368#comment-16615368 ] Alexander Kolbasov commented on HIVE-20306: --- Patch 16 merged with {code:java} * commit 5eaf0ddf98d9320a54916962e1f04c83f3f6f13e (origin/master, origin/HEAD, master) | Author: Eugene Koifman | Date: Fri Sep 14 11:29:21 2018 -0700 | | HIVE-20553: more acid stats tests (Eugene Koifman, reviewed by Sergey Shelukhin) | {code} > Implement projection spec for fetching only requested fields from partitions > > > Key: HIVE-20306 > URL: https://issues.apache.org/jira/browse/HIVE-20306 > Project: Hive > Issue Type: Sub-task >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-20306.02.patch, HIVE-20306.03.patch, > HIVE-20306.04.patch, HIVE-20306.05.patch, HIVE-20306.06.patch, > HIVE-20306.07.patch, HIVE-20306.08.patch, HIVE-20306.09.patch, > HIVE-20306.10.patch, HIVE-20306.11.patch, HIVE-20306.12.patch, > HIVE-20306.13.patch, HIVE-20306.14.patch, HIVE-20306.15.patch, > HIVE-20306.16.patch, HIVE-20306.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20306) Implement projection spec for fetching only requested fields from partitions
[ https://issues.apache.org/jira/browse/HIVE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-20306: -- Attachment: HIVE-20306.16.patch > Implement projection spec for fetching only requested fields from partitions > > > Key: HIVE-20306 > URL: https://issues.apache.org/jira/browse/HIVE-20306 > Project: Hive > Issue Type: Sub-task >Reporter: Vihang Karajgaonkar >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-20306.02.patch, HIVE-20306.03.patch, > HIVE-20306.04.patch, HIVE-20306.05.patch, HIVE-20306.06.patch, > HIVE-20306.07.patch, HIVE-20306.08.patch, HIVE-20306.09.patch, > HIVE-20306.10.patch, HIVE-20306.11.patch, HIVE-20306.12.patch, > HIVE-20306.13.patch, HIVE-20306.14.patch, HIVE-20306.15.patch, > HIVE-20306.16.patch, HIVE-20306.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20563) Exception in vectorization execution of CASE statement
[ https://issues.apache.org/jira/browse/HIVE-20563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-20563: --- Assignee: Matt McCline > Exception in vectorization execution of CASE statement > -- > > Key: HIVE-20563 > URL: https://issues.apache.org/jira/browse/HIVE-20563 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Matt McCline >Priority: Major > > With the following stacktrace: > {code} > java.lang.Exception: java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row > at > org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552) > [hadoop-mapreduce-client-common-3.1.0.jar:?] > Caused by: java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while > processing row > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] 
> at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_181] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_181] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime > Error while processing row > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:973) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:154) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271) > ~[hadoop-mapreduce-client-common-3.1.0.jar:?] 
> at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[?:1.8.0_181] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_181] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_181] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_181] > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating > cstring1 > at > org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:149) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.vector.VectorFilterOperator.process(VectorFilterOperator.java:136) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938) >
[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin
[ https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18908: Status: Patch Available (was: In Progress) > FULL OUTER JOIN to MapJoin > -- > > Key: HIVE-18908 > URL: https://issues.apache.org/jira/browse/HIVE-18908 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: FULL OUTER MapJoin Code Changes.docx, > HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, > HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, > HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, > HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, > HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, > HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, > HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, > HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, > HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, > HIVE-18908.09994.patch, HIVE-18908.09995.patch, HIVE-18908.09996.patch, > HIVE-18908.09997.patch, JOIN to MAPJOIN Transformation.pdf, SHARED-MEMORY > FULL OUTER MapJoin.pdf > > > Currently, we do not support FULL OUTER JOIN in MapJoin. 
> Rough TPC-DS timings run on laptop: > (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play) > FULL OUTER MapJoin OFF = MergeJoin > Query 51: > o Vectorization OFF > • FULL OUTER MapJoin OFF: 4:30 minutes > • FULL OUTER MapJoin ON: 4:37 minutes > o Vectorization ON > • FULL OUTER MapJoin OFF: 2:35 minutes > • FULL OUTER MapJoin ON: 1:47 minutes > Query 97: > o Vectorization OFF > • FULL OUTER MapJoin OFF: 2:37 minutes > • FULL OUTER MapJoin ON: 2:42 minutes > o Vectorization ON > • FULL OUTER MapJoin OFF: 1:17 minutes > • FULL OUTER MapJoin ON: 0:06 minutes > FULL OUTER Join 10,000,000 rows against 323,910 small table keys > o Vectorization ON > • FULL OUTER MapJoin OFF: 14:56 minutes > • FULL OUTER MapJoin ON: 1:45 minutes > FULL OUTER Join 10,000,000 rows against 1,000 small table keys > o Vectorization ON > • FULL OUTER MapJoin OFF: 12:37 minutes > • FULL OUTER MapJoin ON: 1:38 minutes > Hopefully, someone will do large scale cluster testing. > [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than > [Sort] MergeJoin reduce-shuffle. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
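As background for the timings quoted above: a MapJoin builds a hash table from the small table and streams the big table through it; what makes the FULL OUTER case special is that unmatched small-table rows must also be emitted after the probe phase. A hedged Python sketch of that idea (illustrative only, not Hive's actual implementation):

```python
def full_outer_map_join(big_rows, small_rows):
    """Sketch of FULL OUTER MapJoin: hash the small side, stream the
    big side, then flush small-side rows that were never matched."""
    # Build phase: hash table over the small table.
    table = {}
    for k, v in small_rows:
        table.setdefault(k, []).append(v)

    matched = set()
    out = []
    # Probe phase: stream the big table.
    for k, v in big_rows:
        if k in table:
            matched.add(k)
            for sv in table[k]:
                out.append((k, v, sv))
        else:
            out.append((k, v, None))   # big-side row with no small match
    # FULL OUTER extra step: emit unmatched small-side rows.
    for k, vs in table.items():
        if k not in matched:
            for sv in vs:
                out.append((k, None, sv))
    return out
```

This avoids the reduce-shuffle a MergeJoin needs, which is consistent with the large speedups reported for the vectorized FULL OUTER MapJoin cases.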
[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin
[ https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18908: Attachment: HIVE-18908.09997.patch > FULL OUTER JOIN to MapJoin > -- > > Key: HIVE-18908 > URL: https://issues.apache.org/jira/browse/HIVE-18908 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: FULL OUTER MapJoin Code Changes.docx, > HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, > HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, > HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, > HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, > HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, > HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, > HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, > HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, > HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, > HIVE-18908.09994.patch, HIVE-18908.09995.patch, HIVE-18908.09996.patch, > HIVE-18908.09997.patch, JOIN to MAPJOIN Transformation.pdf, SHARED-MEMORY > FULL OUTER MapJoin.pdf > > > Currently, we do not support FULL OUTER JOIN in MapJoin. 
> Rough TPC-DS timings run on laptop: > (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play) > FULL OUTER MapJoin OFF = MergeJoin > Query 51: > o Vectorization OFF > • FULL OUTER MapJoin OFF: 4:30 minutes > • FULL OUTER MapJoin ON: 4:37 minutes > o Vectorization ON > • FULL OUTER MapJoin OFF: 2:35 minutes > • FULL OUTER MapJoin ON: 1:47 minutes > Query 97: > o Vectorization OFF > • FULL OUTER MapJoin OFF: 2:37 minutes > • FULL OUTER MapJoin ON: 2:42 minutes > o Vectorization ON > • FULL OUTER MapJoin OFF: 1:17 minutes > • FULL OUTER MapJoin ON: 0:06 minutes > FULL OUTER Join 10,000,000 rows against 323,910 small table keys > o Vectorization ON > • FULL OUTER MapJoin OFF: 14:56 minutes > • FULL OUTER MapJoin ON: 1:45 minutes > FULL OUTER Join 10,000,000 rows against 1,000 small table keys > o Vectorization ON > • FULL OUTER MapJoin OFF: 12:37 minutes > • FULL OUTER MapJoin ON: 1:38 minutes > Hopefully, someone will do large scale cluster testing. > [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than > [Sort] MergeJoin reduce-shuffle. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin
[ https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-18908: Status: In Progress (was: Patch Available) > FULL OUTER JOIN to MapJoin > -- > > Key: HIVE-18908 > URL: https://issues.apache.org/jira/browse/HIVE-18908 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: FULL OUTER MapJoin Code Changes.docx, > HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, > HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, > HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, > HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, > HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, > HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, > HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, > HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, > HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, > HIVE-18908.09994.patch, HIVE-18908.09995.patch, HIVE-18908.09996.patch, JOIN > to MAPJOIN Transformation.pdf, SHARED-MEMORY FULL OUTER MapJoin.pdf > > > Currently, we do not support FULL OUTER JOIN in MapJoin. 
> Rough TPC-DS timings run on laptop: > (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play) > FULL OUTER MapJoin OFF = MergeJoin > Query 51: > o Vectorization OFF > • FULL OUTER MapJoin OFF: 4:30 minutes > • FULL OUTER MapJoin ON: 4:37 minutes > o Vectorization ON > • FULL OUTER MapJoin OFF: 2:35 minutes > • FULL OUTER MapJoin ON: 1:47 minutes > Query 97: > o Vectorization OFF > • FULL OUTER MapJoin OFF: 2:37 minutes > • FULL OUTER MapJoin ON: 2:42 minutes > o Vectorization ON > • FULL OUTER MapJoin OFF: 1:17 minutes > • FULL OUTER MapJoin ON: 0:06 minutes > FULL OUTER Join 10,000,000 rows against 323,910 small table keys > o Vectorization ON > • FULL OUTER MapJoin OFF: 14:56 minutes > • FULL OUTER MapJoin ON: 1:45 minutes > FULL OUTER Join 10,000,000 rows against 1,000 small table keys > o Vectorization ON > • FULL OUTER MapJoin OFF: 12:37 minutes > • FULL OUTER MapJoin ON: 1:38 minutes > Hopefully, someone will do large scale cluster testing. > [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than > [Sort] MergeJoin reduce-shuffle. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL
[ https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20524: Status: In Progress (was: Patch Available) > Schema Evolution checking is broken in going from Hive version 2 to version 3 > for ALTER TABLE VARCHAR to DECIMAL > > > Key: HIVE-20524 > URL: https://issues.apache.org/jira/browse/HIVE-20524 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch, > HIVE-20524.03.patch, HIVE-20524.04.patch > > > Issue that started this JIRA: > {code} > create external table varchar_decimal (c1 varchar(25)); > alter table varchar_decimal change c1 c1 decimal(31,0); > ERROR : FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following > columns have types incompatible with the existing columns in their respective > positions : > c1 > {code} > There appear to be 2 issues here: > 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the > default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a > number that can hold the largest numbers. The theory being we don't want the > data loss you would get by converting the StringFamily field into integers, > etc. In Hive version 2 the hierarchy of numbers had DECIMAL at the top. At > some point during Hive version 2 we realized this was incorrect and put > DOUBLE at the top. > However, the Hive version 2 TypeInfoUtils.implicitConversion method > allows StringFamily conversion to either DOUBLE or DECIMAL. > The new org.apache.hadoop.hive.metastore.ColumnType class under Hive version > 3 hive-standalone-metadata-server method checkColTypeChangeCompatible only > allows DOUBLE. > This JIRA fixes that problem. 
> 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug > fix that drops CHAR/VARCHAR (and DECIMAL I think) type decorations when > checking for Schema Evolution compatibility. So, when that code is checking > if a data type "varchar(25)" is StringFamily it fails because the "(25)" > didn't get removed properly. > This JIRA fixes issue #2 also. > NOTE: Hive1 version 2 did undecoratedTypeName(oldType) and Hive2 version > performed the logic in TypeInfoUtils.implicitConvertible on the > PrimitiveCategory not the raw type string. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
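The decoration-stripping fix described in point 2 above can be sketched as follows. The helper names and regex here are illustrative assumptions for this sketch, not the actual ColumnType code:

```python
import re

# Undecorated type names considered part of StringFamily in the check.
STRING_FAMILY = {"string", "char", "varchar"}

def undecorated(type_name):
    # Drop type decorations such as "(25)" or "(31,0)" so that
    # "varchar(25)" is recognized as VARCHAR during the compatibility check.
    return re.sub(r"\(.*\)\s*$", "", type_name.strip().lower())

def is_string_family(type_name):
    return undecorated(type_name) in STRING_FAMILY
```

Without the stripping step, `is_string_family("varchar(25)")` would look up `"varchar(25)"` directly and fail, which matches the Schema Evolution failure this issue describes.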
[jira] [Updated] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL
[ https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20524: Status: Patch Available (was: In Progress) Again. > Schema Evolution checking is broken in going from Hive version 2 to version 3 > for ALTER TABLE VARCHAR to DECIMAL > > > Key: HIVE-20524 > URL: https://issues.apache.org/jira/browse/HIVE-20524 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch, > HIVE-20524.03.patch, HIVE-20524.04.patch > > > Issue that started this JIRA: > {code} > create external table varchar_decimal (c1 varchar(25)); > alter table varchar_decimal change c1 c1 decimal(31,0); > ERROR : FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following > columns have types incompatible with the existing columns in their respective > positions : > c1 > {code} > There appear to be 2 issues here: > 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the > default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a > number that can hold the largest numbers. The theory being we don't want the > data loss you would get by converting the StringFamily field into integers, > etc. In Hive version 2 the hierarchy of numbers had DECIMAL at the top. At > some point during Hive version 2 we realized this was incorrect and put > DOUBLE at the top. > However, the Hive version 2 TypeInfoUtils.implicitConversion method > allows StringFamily conversion to either DOUBLE or DECIMAL. > The new org.apache.hadoop.hive.metastore.ColumnType class under Hive version > 3 hive-standalone-metadata-server method checkColTypeChangeCompatible only > allows DOUBLE. > This JIRA fixes that problem. 
> 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug > fix that drops CHAR/VARCHAR (and DECIMAL I think) type decorations when > checking for Schema Evolution compatibility. So, when that code is checking > if a data type "varchar(25)" is StringFamily it fails because the "(25)" > didn't get removed properly. > This JIRA fixes issue #2 also. > NOTE: Hive1 version 2 did undecoratedTypeName(oldType) and Hive2 version > performed the logic in TypeInfoUtils.implicitConvertible on the > PrimitiveCategory not the raw type string. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL
[ https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20524: Attachment: HIVE-20524.04.patch > Schema Evolution checking is broken in going from Hive version 2 to version 3 > for ALTER TABLE VARCHAR to DECIMAL > > > Key: HIVE-20524 > URL: https://issues.apache.org/jira/browse/HIVE-20524 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch, > HIVE-20524.03.patch, HIVE-20524.04.patch > > > Issue that started this JIRA: > {code} > create external table varchar_decimal (c1 varchar(25)); > alter table varchar_decimal change c1 c1 decimal(31,0); > ERROR : FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following > columns have types incompatible with the existing columns in their respective > positions : > c1 > {code} > There appear to be 2 issues here: > 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the > default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a > number that can hold the largest numbers. The theory being we don't want the > data loss you would get by converting the StringFamily field into integers, > etc. In Hive version 2 the hierarchy of numbers had DECIMAL at the top. At > some point during Hive version 2 we realized this was incorrect and put > DOUBLE at the top. > However, the Hive version 2 TypeInfoUtils.implicitConversion method > allows StringFamily conversion to either DOUBLE or DECIMAL. > The new org.apache.hadoop.hive.metastore.ColumnType class under Hive version > 3 hive-standalone-metadata-server method checkColTypeChangeCompatible only > allows DOUBLE. > This JIRA fixes that problem. 
> 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug > fix that drops CHAR/VARCHAR (and DECIMAL I think) type decorations when > checking for Schema Evolution compatibility. So, when that code is checking > if a data type "varchar(25)" is StringFamily it fails because the "(25)" > didn't get removed properly. > This JIRA fixes issue #2 also. > NOTE: Hive1 version 2 did undecoratedTypeName(oldType) and Hive2 version > performed the logic in TypeInfoUtils.implicitConvertible on the > PrimitiveCategory not the raw type string. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20562) Intermittent test failures from Druid tests
[ https://issues.apache.org/jira/browse/HIVE-20562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615314#comment-16615314 ] slim bouguerra commented on HIVE-20562: --- [~janulatha] you can follow the linked Jira > Intermittent test failures from Druid tests > --- > > Key: HIVE-20562 > URL: https://issues.apache.org/jira/browse/HIVE-20562 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: slim bouguerra >Priority: Major > > Druid tests are failing intermittently in Hive Pre-commit jobs. > The typical failures include: > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_dynamic_partition] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test1] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_alter] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_insert] > (batchId=193) > The test log shows the following: > Exception: org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: > java.sql.SQLException: Cannot create PoolableConnectionFactory > (java.net.ConnectException : Error connecting to server localhost on port > 60,000 with message Connection refused.) > org.apache.hadoop.hive.ql.metadata.HiveException: > org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: > java.sql.SQLException: Cannot create PoolableConnectionFactory > (java.net.ConnectException : Error connecting to server localhost on port > 60,000 with message Connection refused.) 
> at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1077) > at > org.apache.hadoop.hive.ql.QTestUtil.clearTablesCreatedDuringTests(QTestUtil.java:958) > at > org.apache.hadoop.hive.ql.QTestUtil.clearTestSideEffects(QTestUtil.java:1039) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver$5.invokeInternal(CoreCliDriver.java:135) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver$5.invokeInternal(CoreCliDriver.java:131) > at > org.apache.hadoop.hive.util.ElapsedTimeLoggingWrapper.invoke(ElapsedTimeLoggingWrapper.java:33) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.tearDown(CoreCliDriver.java:138) > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:94) > The following search shows many Hive Jiras with patches where Druid tests are > failing. > https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20text%20~%20druidmini%20ORDER%20BY%20key%20DESC -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20562) Intermittent test failures from Druid tests
[ https://issues.apache.org/jira/browse/HIVE-20562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra reassigned HIVE-20562: - Assignee: slim bouguerra > Intermittent test failures from Druid tests > --- > > Key: HIVE-20562 > URL: https://issues.apache.org/jira/browse/HIVE-20562 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: slim bouguerra >Priority: Major > > Druid tests are failing intermittently in Hive Pre-commit jobs. > The typical failures include: > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_dynamic_partition] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test1] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_alter] > (batchId=193) > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_insert] > (batchId=193) > The test log shows the following: > Exception: org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: > java.sql.SQLException: Cannot create PoolableConnectionFactory > (java.net.ConnectException : Error connecting to server localhost on port > 60,000 with message Connection refused.) > org.apache.hadoop.hive.ql.metadata.HiveException: > org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: > java.sql.SQLException: Cannot create PoolableConnectionFactory > (java.net.ConnectException : Error connecting to server localhost on port > 60,000 with message Connection refused.) 
> at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:1077) > at > org.apache.hadoop.hive.ql.QTestUtil.clearTablesCreatedDuringTests(QTestUtil.java:958) > at > org.apache.hadoop.hive.ql.QTestUtil.clearTestSideEffects(QTestUtil.java:1039) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver$5.invokeInternal(CoreCliDriver.java:135) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver$5.invokeInternal(CoreCliDriver.java:131) > at > org.apache.hadoop.hive.util.ElapsedTimeLoggingWrapper.invoke(ElapsedTimeLoggingWrapper.java:33) > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.tearDown(CoreCliDriver.java:138) > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:94) > The following search shows many Hive Jiras with patches where Druid tests are > failing. > https://issues.apache.org/jira/issues/?jql=project%20%3D%20HIVE%20AND%20text%20~%20druidmini%20ORDER%20BY%20key%20DESC -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20325) FlakyTest: TestMiniDruidCliDriver
[ https://issues.apache.org/jira/browse/HIVE-20325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615313#comment-16615313 ] slim bouguerra commented on HIVE-20325: --- [~mmccline] / [~vgarg], can you please pull this one in? I have fused the Druid_only_tests and Druid_kafka_Tests under one Druid Driver. > FlakyTest: TestMiniDruidCliDriver > - > > Key: HIVE-20325 > URL: https://issues.apache.org/jira/browse/HIVE-20325 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-20325.patch > > > TestMiniDruidCliDriver is failing intermittently a significant percentage of > the time. > druid_timestamptz > druidmini_joins > druidmini_masking > druidmini_test1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20325) FlakyTest: TestMiniDruidCliDriver
[ https://issues.apache.org/jira/browse/HIVE-20325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-20325: -- Attachment: HIVE-20325.patch > FlakyTest: TestMiniDruidCliDriver > - > > Key: HIVE-20325 > URL: https://issues.apache.org/jira/browse/HIVE-20325 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: slim bouguerra >Priority: Blocker > Attachments: HIVE-20325.patch > > > TestMiniDruidCliDriver is failing intermittently a significant percentage of > the time. > druid_timestamptz > druidmini_joins > druidmini_masking > druidmini_test1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20325) FlakyTest: TestMiniDruidCliDriver
[ https://issues.apache.org/jira/browse/HIVE-20325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-20325: -- Status: Patch Available (was: Reopened) > FlakyTest: TestMiniDruidCliDriver > - > > Key: HIVE-20325 > URL: https://issues.apache.org/jira/browse/HIVE-20325 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: slim bouguerra >Priority: Blocker > > TestMiniDruidCliDriver is failing intermittently a significant percentage of > the time. > druid_timestamptz > druidmini_joins > druidmini_masking > druidmini_test1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
[ https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20556: --- Attachment: HIVE-20556.1.patch Status: Patch Available (was: Open) > Expose an API to retrieve the TBL_ID from TBLS in the metastore tables > -- > > Key: HIVE-20556 > URL: https://issues.apache.org/jira/browse/HIVE-20556 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20556.1.patch > > > We have two options to do this > 1) Use the current MTable and add a field for this value > 2) Add an independent API call to the metastore that would return the TBL_ID. > Option 1 is preferable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18908) FULL OUTER JOIN to MapJoin
[ https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615306#comment-16615306 ] Hive QA commented on HIVE-18908: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939611/HIVE-18908.09996.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13788/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13788/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13788/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-09-14 19:46:14.074 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-13788/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-09-14 19:46:14.078 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 5eaf0dd HIVE-20553: more acid stats tests (Eugene Koifman, reviewed by Sergey Shelukhin) + git clean -f -d Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 5eaf0dd HIVE-20553: more acid stats tests (Eugene Koifman, reviewed by Sergey Shelukhin) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-09-14 19:46:14.763 + rm -rf ../yetus_PreCommit-HIVE-Build-13788 + mkdir ../yetus_PreCommit-HIVE-Build-13788 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-13788 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-13788/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/test/results/clientpositive/llap/orc_llap_counters.q.out:327 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/orc_llap_counters.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/llap/orc_llap_counters1.q.out:267 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/orc_llap_counters1.q.out' cleanly. error: patch failed: ql/src/test/results/clientpositive/llap/vector_leftsemi_mapjoin.q.out:6075 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/vector_leftsemi_mapjoin.q.out' with conflicts. 
error: patch failed: ql/src/test/results/clientpositive/llap/vector_outer_join1.q.out:329 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/llap/vector_outer_join1.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/parquet_vectorization_limit.q.out:325 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/parquet_vectorization_limit.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/spark/parquet_vectorization_0.q.out:128 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/spark/parquet_vectorization_0.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/spark/parquet_vectorization_12.q.out:200 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/spark/parquet_vectorization_12.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/spark/parquet_vectorization_13.q.out:202 Falling back to three-way merge... Applied patch to 'ql/src/test/results/clientpositive/spark/parquet_vectorization_13.q.out' with conflicts. error: patch failed: ql/src/test/results/clientpositive/spark/parquet_vectorization_14.q.out:202 Falling
[jira] [Commented] (HIVE-18583) Enable DateRangeRules
[ https://issues.apache.org/jira/browse/HIVE-18583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615302#comment-16615302 ] Hive QA commented on HIVE-18583: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939609/HIVE-18583.5.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14940 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13787/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13787/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13787/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12939609 - PreCommit-HIVE-Build > Enable DateRangeRules > -- > > Key: HIVE-18583 > URL: https://issues.apache.org/jira/browse/HIVE-18583 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18583.2.patch, HIVE-18583.3.patch, > HIVE-18583.4.patch, HIVE-18583.5.patch, HIVE-18583.patch > > > Enable DateRangeRules to translate druid filters to date ranges. > Need calcite version to upgrade to 0.16.0 before merging this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20538) Allow to store a key value together with a transaction.
[ https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20538: --- Status: Open (was: Patch Available) > Allow to store a key value together with a transaction. > --- > > Key: HIVE-20538 > URL: https://issues.apache.org/jira/browse/HIVE-20538 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore, Transactions >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch > > > This can be useful for example to know if a transaction has already happened. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20538) Allow to store a key value together with a transaction.
[ https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jaume M updated HIVE-20538: --- Attachment: HIVE-20538.1.patch Status: Patch Available (was: Open) > Allow to store a key value together with a transaction. > --- > > Key: HIVE-20538 > URL: https://issues.apache.org/jira/browse/HIVE-20538 > Project: Hive > Issue Type: New Feature > Components: Standalone Metastore, Transactions >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch > > > This can be useful for example to know if a transaction has already happened. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-15121) Last MR job in Hive should be able to write to a different scratch directory
[ https://issues.apache.org/jira/browse/HIVE-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615279#comment-16615279 ] Damon Cortesi commented on HIVE-15121: -- Will this enable similar optimizations in the FileMergeOperators? Doesn't look like they use the new functionality to get a temp dir. > Last MR job in Hive should be able to write to a different scratch directory > > > Key: HIVE-15121 > URL: https://issues.apache.org/jira/browse/HIVE-15121 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.2.0 > > Attachments: HIVE-15121.1.patch, HIVE-15121.2.patch, > HIVE-15121.3.patch, HIVE-15121.WIP.1.patch, HIVE-15121.WIP.2.patch, > HIVE-15121.WIP.patch, HIVE-15121.patch > > > Hive should be able to configure all intermediate MR jobs to write to HDFS, > but the final MR job to write to S3. > This will be useful for implementing parallel renames on S3. The idea is that > for a multi-job query, all intermediate MR jobs write to HDFS, and then the > final job writes to S3. Writing to HDFS should be faster than writing to S3, > so it makes more sense to write intermediate data to HDFS. > The advantage is that any copying of data that needs to be done from the > scratch directory to the final table directory can be done server-side, > within the blobstore. The MoveTask simply renames data from the scratch > directory to the final table location, which should translate to a > server-side COPY request. This way HiveServer2 doesn't have to actually copy > any data, it just tells the blobstore to do all the work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
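The routing rule the HIVE-15121 description proposes — intermediate jobs write to HDFS, only the final job writes to the blobstore scratch so the MoveTask rename becomes a server-side copy — can be sketched in a few lines. This is purely illustrative: the method and directory names are invented, not the patch's actual code.

```java
public class ScratchDirSketch {
    // Illustrative only: route intermediate jobs to the HDFS scratch dir,
    // and the final job to the blobstore scratch dir, so the final rename
    // stays inside the object store (a server-side COPY, no data movement
    // through HiveServer2).
    static String scratchDirFor(int jobIndex, int totalJobs,
                                String hdfsScratch, String blobScratch) {
        boolean finalJob = jobIndex == totalJobs - 1;
        return finalJob ? blobScratch : hdfsScratch;
    }

    public static void main(String[] args) {
        int jobs = 3;
        for (int i = 0; i < jobs; i++) {
            System.out.println("job " + i + " -> "
                + scratchDirFor(i, jobs, "hdfs:///tmp/hive-scratch", "s3a://bucket/scratch"));
        }
    }
}
```

The point of the split is that HDFS renames and writes are cheap relative to S3 writes, so only the last hop should touch the blobstore.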
[jira] [Commented] (HIVE-18583) Enable DateRangeRules
[ https://issues.apache.org/jira/browse/HIVE-18583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615258#comment-16615258 ] Hive QA commented on HIVE-18583: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 3s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13787/dev-support/hive-personality.sh | | git revision | master / 5eaf0dd | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13787/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Enable DateRangeRules > -- > > Key: HIVE-18583 > URL: https://issues.apache.org/jira/browse/HIVE-18583 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18583.2.patch, HIVE-18583.3.patch, > HIVE-18583.4.patch, HIVE-18583.5.patch, HIVE-18583.patch > > > Enable DateRangeRules to translate druid filters to date ranges. > Need calcite version to upgrade to 0.16.0 before merging this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20291) Allow HiveStreamingConnection to receive a WriteId
[ https://issues.apache.org/jira/browse/HIVE-20291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615251#comment-16615251 ] Prasanth Jayachandran commented on HIVE-20291: -- The general flow/idea looks good. Left some comments concerning interface changes and maintaining backward compat if we are planning to backport to branch-3. > Allow HiveStreamingConnection to receive a WriteId > -- > > Key: HIVE-20291 > URL: https://issues.apache.org/jira/browse/HIVE-20291 > Project: Hive > Issue Type: Improvement >Reporter: Jaume M >Assignee: Jaume M >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20291.1.patch, HIVE-20291.2.patch, > HIVE-20291.3.patch, HIVE-20291.4.patch, HIVE-20291.5.patch > > > If the writeId is received externally it won't need to open connections to > the metastore. It won't be able to do the commit in this case as well, so it must > be done by the entity passing the writeId. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL
[ https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615212#comment-16615212 ] Hive QA commented on HIVE-20524: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939603/HIVE-20524.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14940 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic] (batchId=264) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13786/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13786/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13786/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12939603 - PreCommit-HIVE-Build > Schema Evolution checking is broken in going from Hive version 2 to version 3 > for ALTER TABLE VARCHAR to DECIMAL > > > Key: HIVE-20524 > URL: https://issues.apache.org/jira/browse/HIVE-20524 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch, > HIVE-20524.03.patch > > > Issue that started this JIRA: > {code} > create external table varchar_decimal (c1 varchar(25)); > alter table varchar_decimal change c1 c1 decimal(31,0); > ERROR : FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. 
The following > columns have types incompatible with the existing columns in their respective > positions : > c1 > {code} > There appear to be 2 issues here: > 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the > default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a > number that can hold the largest numbers. The theory being we don't want > data loss you would get by converting the StringFamily field into integers, > etc. In Hive version 2 the hierarchy of numbers had DECIMAL at the top. At > some point during Hive version 2 we realized this was incorrect and put > DOUBLE the top. > However, the Hive2 Hive version 2 TypeInfoUtils.implicitConversion method > allows StringFamily to either DOUBLE or DECIMAL conversion. > The new org.apache.hadoop.hive.metastore.ColumnType class under Hive version > 3 hive-standalone-metadata-server method checkColTypeChangeCompatible only > allows DOUBLE. > This JIRA fixes that problem. > 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug > fix that drops CHAR/VARCHAR (and DECIMAL I think) type decorations when > checking for Schema Evolution compatibility. So, when that code is checking > if a data type "varchar(25)" is StringFamily it fails because the "(25)" > didn't get removed properly. > This JIRA fixes issue #2 also. > NOTE: Hive1 version 2 did undecoratedTypeName(oldType) and Hive2 version > performed the logic in TypeInfoUtils.implicitConvertible on the > PrimitiveCategory not the raw type string. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
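Issue #2 in the HIVE-20524 description — the compatibility check failing because the "(25)" in "varchar(25)" was not stripped before the StringFamily test — comes down to normalizing type names to their undecorated form. A minimal sketch of that normalization (illustrative, not the actual `ColumnType.checkColTypeChangeCompatible` code):

```java
public class TypeNameSketch {
    // Strip CHAR/VARCHAR/DECIMAL decorations before comparing type families:
    // "varchar(25)" -> "varchar", "decimal(31,0)" -> "decimal".
    // Undecorated types pass through unchanged.
    static String undecorated(String typeName) {
        int paren = typeName.indexOf('(');
        return paren < 0 ? typeName : typeName.substring(0, paren);
    }

    public static void main(String[] args) {
        System.out.println(undecorated("varchar(25)"));   // varchar
        System.out.println(undecorated("decimal(31,0)")); // decimal
        System.out.println(undecorated("string"));        // string
    }
}
```

With this in place, a check like "is the old type in the StringFamily?" sees "varchar" rather than the full "varchar(25)" string and matches as intended.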
[jira] [Updated] (HIVE-20553) more acid stats tests
[ https://issues.apache.org/jira/browse/HIVE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-20553: -- Resolution: Fixed Fix Version/s: 4.0.0 Release Note: n/a Status: Resolved (was: Patch Available) committed to master (4.0), thanks Sergey for the review > more acid stats tests > - > > Key: HIVE-20553 > URL: https://issues.apache.org/jira/browse/HIVE-20553 > Project: Hive > Issue Type: Improvement > Components: Statistics, Transactions >Affects Versions: 4.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20553.01.patch, HIVE-20553.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20553) more acid stats tests
[ https://issues.apache.org/jira/browse/HIVE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615195#comment-16615195 ] Sergey Shelukhin commented on HIVE-20553: - +1 > more acid stats tests > - > > Key: HIVE-20553 > URL: https://issues.apache.org/jira/browse/HIVE-20553 > Project: Hive > Issue Type: Improvement > Components: Statistics, Transactions >Affects Versions: 4.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-20553.01.patch, HIVE-20553.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615186#comment-16615186 ] Misha Dmitriev commented on HIVE-17684: --- Probably {{MapJoinMemoryExhaustionHandler}} just doesn't get triggered in HoMR... but ok, I agree that in such a difficult-to-test area it's safer to be conservative and avoid changing things unless they are definitely broken. Let's wait for the test results for your latest patch then. I guess depending on the outcome it may be submitted as is or may need some small finishing touches. > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. 
The value returned by this method returns all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. > In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
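The used/max ratio described in HIVE-17684 can be reproduced with the standard `java.lang.management` API. The following is a standalone sketch (not Hive code) illustrating why the raw ratio is unreliable: `getUsed()` counts dead-but-uncollected objects, so the ratio can sit above a threshold like 0.90 even though a single GC would bring it well down.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapRatioSketch {
    // The check described above: fraction of the max heap currently "used".
    // getUsed() includes unreachable objects that no GC has reclaimed yet,
    // so this can spike transiently without any real memory pressure.
    static double heapUsageRatio(MemoryMXBean bean) {
        MemoryUsage heap = bean.getHeapMemoryUsage();
        long max = heap.getMax();  // -1 when the max is undefined
        return max > 0 ? (double) heap.getUsed() / max : 0.0;
    }

    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        // Allocate ~64 MB, then drop every reference: pure garbage.
        byte[][] garbage = new byte[64][];
        for (int i = 0; i < garbage.length; i++) {
            garbage[i] = new byte[1 << 20];
        }
        garbage = null;
        double before = heapUsageRatio(bean);
        bean.gc();  // a GC hint; the garbage is usually reclaimed here
        double after = heapUsageRatio(bean);
        System.out.printf("ratio before GC=%.3f, after GC=%.3f%n", before, after);
    }
}
```

The before/after gap is exactly the false-positive window: a threshold check on the raw ratio (the 0.90 and 0.55 defaults above) can throw `MapJoinMemoryExhaustionError` for heap space that was reclaimable all along.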
[jira] [Commented] (HIVE-20079) Populate more accurate rawDataSize for parquet format
[ https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615173#comment-16615173 ] Sahil Takiar commented on HIVE-20079: - [~aihuaxu] not sure if you are still planning to work on this? If not, mind if I assign it to myself? > Populate more accurate rawDataSize for parquet format > - > > Key: HIVE-20079 > URL: https://issues.apache.org/jira/browse/HIVE-20079 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch > > > Run the following queries and you will see the raw data for the table is 4 > (that is the number of fields) incorrectly. We need to populate correct data > size so data can be split properly. > {noformat} > SET hive.stats.autogather=true; > CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET; > INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1'); > DESC FORMATTED parquet_stats; > {noformat} > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > numFiles1 > numRows 2 > rawDataSize 4 > totalSize 373 > transient_lastDdlTime 1530660523 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
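The suspicious `rawDataSize 4` in the HIVE-20079 description is the total field count (2 rows × 2 columns), not a byte estimate. A toy illustration of the difference — invented helpers, not the patch's code, with crude per-value sizes (4 bytes per int, 1 byte per string character) assumed for the example:

```java
public class RawDataSizeSketch {
    // What the buggy stat effectively reports: rows * columns.
    static long fieldCount(long numRows, int numCols) {
        return numRows * numCols;
    }

    // A more useful rawDataSize approximates uncompressed bytes by summing
    // per-value sizes. The 4-bytes-per-int / 1-byte-per-char costs here are
    // an assumption for illustration only.
    static long approxRawBytes(Object[][] rows) {
        long total = 0;
        for (Object[] row : rows) {
            for (Object v : row) {
                total += (v instanceof Integer) ? 4 : v.toString().length();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Object[][] rows = {
            {0, "this is string 0"},
            {1, "string 1"},
        };
        System.out.println(fieldCount(rows.length, 2)); // 4, as in the report
        System.out.println(approxRawBytes(rows));       // 4+16+4+8 = 32
    }
}
```

A byte-scale estimate matters because rawDataSize feeds split computation; a constant-per-field value makes every Parquet table look tiny regardless of row width.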
[jira] [Assigned] (HIVE-20079) Populate more accurate rawDataSize for parquet format
[ https://issues.apache.org/jira/browse/HIVE-20079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-20079: --- Assignee: Sahil Takiar (was: Aihua Xu) > Populate more accurate rawDataSize for parquet format > - > > Key: HIVE-20079 > URL: https://issues.apache.org/jira/browse/HIVE-20079 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 2.0.0 >Reporter: Aihua Xu >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-20079.1.patch, HIVE-20079.2.patch > > > Run the following queries and you will see the raw data for the table is 4 > (that is the number of fields) incorrectly. We need to populate correct data > size so data can be split properly. > {noformat} > SET hive.stats.autogather=true; > CREATE TABLE parquet_stats (id int,str string) STORED AS PARQUET; > INSERT INTO parquet_stats values(0, 'this is string 0'), (1, 'string 1'); > DESC FORMATTED parquet_stats; > {noformat} > {noformat} > Table Parameters: > COLUMN_STATS_ACCURATE true > numFiles1 > numRows 2 > rawDataSize 4 > totalSize 373 > transient_lastDdlTime 1530660523 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20527) Intern table descriptors from spark task
[ https://issues.apache.org/jira/browse/HIVE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615170#comment-16615170 ] Andrew Sherman commented on HIVE-20527: --- Pushed to master, thanks [~janulatha] > Intern table descriptors from spark task > > > Key: HIVE-20527 > URL: https://issues.apache.org/jira/browse/HIVE-20527 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-20527.1.patch, HIVE-20527.1.patch, > HIVE-20527.1.patch > > > Table descriptors from MR tasks and Tez tasks are interned. This fix is to > intern table desc from spark tasks as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489
[ https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615167#comment-16615167 ] Andrew Sherman commented on HIVE-20526: --- Pushed to master, thanks [~janulatha] > Add test case for HIVE-20489 > > > Key: HIVE-20526 > URL: https://issues.apache.org/jira/browse/HIVE-20526 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-20526.1.patch, HIVE-20526.1.patch > > > Add a test case for the issue discussed in HIVE-20489. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL
[ https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615159#comment-16615159 ] Hive QA commented on HIVE-20524: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 31s{color} | {color:blue} standalone-metastore/metastore-common in master has 28 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13786/dev-support/hive-personality.sh | | git revision | master / 577141a | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore/metastore-common itests ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13786/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Schema Evolution checking is broken in going from Hive version 2 to version 3 > for ALTER TABLE VARCHAR to DECIMAL > > > Key: HIVE-20524 > URL: https://issues.apache.org/jira/browse/HIVE-20524 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch, > HIVE-20524.03.patch > > > Issue that started this JIRA: > {code} > create external table varchar_decimal (c1 varchar(25)); > alter table varchar_decimal change c1 c1 decimal(31,0); > ERROR : FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following > columns have types incompatible with the existing columns in their respective > positions : > c1 > {code} > There appear to be 2 issues here: > 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the > default) we only allow StringFamily (STRING, CHAR, VARCHAR)
[jira] [Updated] (HIVE-20555) HiveServer2: Preauthenticated subject for http transport is not retained for entire duration of http communication in some cases
[ https://issues.apache.org/jira/browse/HIVE-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-20555: Attachment: HIVE-20555.1.patch > HiveServer2: Preauthenticated subject for http transport is not retained for > entire duration of http communication in some cases > > > Key: HIVE-20555 > URL: https://issues.apache.org/jira/browse/HIVE-20555 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 2.3.2, 3.1.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-20555.1.patch > > > As implemented in HIVE-8705, for http transport, we add the logged in > subject's credentials in the http header via a request interceptor. The > request interceptor doesn't seem to be getting used for some http traffic > (e.g. knox ssl in the same rpc). It would also be better to cache the logged > in subject for the duration of the whole session. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615134#comment-16615134 ] Sahil Takiar commented on HIVE-17684: - We haven't seen any issues or complaints about the {{MapJoinMemoryExhaustionHandler}} for Hive-on-MR, only for HoS. So I don't think we should modify the check for HoMR unless we have more evidence that it affects HoMR users. My guess is that this affects HoS more because in Spark a single JVM can run multiple Hive tasks and often runs multiple tasks in parallel. In HoMR, a new JVM is spawned for each Hive task; each JVM runs exactly one Hive task, and then shuts down. > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev >Priority: Major > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, > HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, > HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch > > > We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. 
The value returned by this method includes all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. > In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
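The memory check described above can be sketched as follows. This is a minimal illustration, not Hive's actual code: the class name and the hard-coded threshold (mirroring the default of {{hive.mapjoin.localtask.max.memory.usage}}) are assumptions for demonstration. It shows why the check is fragile: `getUsed()` counts garbage objects that a GC would reclaim.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryCheckSketch {
    // Illustrative threshold mirroring the default of
    // hive.mapjoin.localtask.max.memory.usage (0.90).
    static final double MAX_MEMORY_USAGE = 0.90;

    // Fraction of the max heap currently "used". Note: this counts
    // unreachable (garbage) objects too, which is why the check can
    // fire even when a GC would free plenty of space.
    static double usedFraction(MemoryUsage usage) {
        return (double) usage.getUsed() / usage.getMax();
    }

    // The handler aborts the local task when the fraction crosses the limit.
    static boolean wouldAbort(MemoryUsage usage) {
        return usedFraction(usage) > MAX_MEMORY_USAGE;
    }

    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        System.out.printf("heap used fraction: %.3f%n", usedFraction(heap));
    }
}
```

(`MemoryUsage.getMax()` can also return -1 when the max is undefined, which real code would need to guard against.)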
[jira] [Commented] (HIVE-20527) Intern table descriptors from spark task
[ https://issues.apache.org/jira/browse/HIVE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615131#comment-16615131 ] Janaki Lahorani commented on HIVE-20527: The Druid test failures seen are not related to this patch. Filed HIVE-20562 to track these failures. > Intern table descriptors from spark task > > > Key: HIVE-20527 > URL: https://issues.apache.org/jira/browse/HIVE-20527 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-20527.1.patch, HIVE-20527.1.patch, > HIVE-20527.1.patch > > > Table descriptors from MR tasks and Tez tasks are interned. This fix is to > intern table desc from spark tasks as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20444) Parameter is not properly quoted in DbNotificationListener.addWriteNotificationLog
[ https://issues.apache.org/jira/browse/HIVE-20444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615132#comment-16615132 ] Daniel Dai commented on HIVE-20444: --- [~maheshk114], do you have additional comments on this? > Parameter is not properly quoted in > DbNotificationListener.addWriteNotificationLog > -- > > Key: HIVE-20444 > URL: https://issues.apache.org/jira/browse/HIVE-20444 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20444.1.patch, JDBCTest.java > > > See exception: > {code} > 2018-08-22T04:44:22,758 INFO [pool-8-thread-190]: > listener.DbNotificationListener > (DbNotificationListener.java:addWriteNotificationLog(765)) - Going to execute > insert "WNL_WRITEID", "WNL_DATABASE", "WNL_TABLE", "WNL_PARTITION", "WNL_TABLE_OBJ", > "WNL_PARTITION_OBJ", "WNL_FILES", "WNL_EVENT_TIME") values > (50,124,1,'default','t1_default','','{"1":{"str":"t1_default"},"2":{"str":"default"},"3":{"str":"hrt_qa"},"4":{"i32":1534913061},"5":{"i32":0},"6":{"i32":0},"7":{"rec":{"1":{"lst":["rec",15,{"1":{"str":"t"},"2":{"str":"tinyint"}},{"1":{"str":"si"},"2":{"str":"smallint"}},{"1":{"str":"i"},"2":{"str":"int"}},{"1":{"str":"b"},"2":{"str":"bigint"}},{"1":{"str":"f"},"2":{"str":"double"}},{"1":{"str":"d"},"2":{"str":"double"}},{"1":{"str":"s"},"2":{"str":"varchar(25)"}},{"1":{"str":"dc"},"2":{"str":"decimal(38,18)"}},{"1":{"str":"bo"},"2":{"str":"varchar(5)"}},{"1":{"str":"v"},"2":{"str":"varchar(25)"}},{"1":{"str":"c"},"2":{"str":"char(25)"}},{"1":{"str":"ts"},"2":{"str":"timestamp"}},{"1":{"str":"dt"},"2":{"str":"date"}},{"1":{"str":"st"},"2":{"str":"string"}},{"1":{"str":"tz"},"2":{"str":"timestamp > with local time > 
zone('UTC')"}}]},"2":{"str":"hdfs://mycluster/warehouse/tablespace/managed/hive/t1_default"},"3":{"str":"org.apache.hadoop.mapred.TextInputFormat"},"4":{"str":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"},"5":{"tf":0},"6":{"i32":-1},"7":{"rec":{"2":{"str":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"},"3":{"map":["str","str",1,{"serialization.format":"1"}]}}},"8":{"lst":["str",0]},"9":{"lst":["rec",0]},"10":{"map":["str","str",0,{}]},"11":{"rec":{"1":{"lst":["str",0]},"2":{"lst":["lst",0]},"3":{"map":["lst","str",0,{}]}}},"12":{"tf":0}}},"8":{"lst":["rec",0]},"9":{"map":["str","str",9,{"totalSize":"0","rawDataSize":"0","numRows":"0","transactional_properties":"insert_only","COLUMN_STATS_ACCURATE":"{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"b\":\"true\",\"bo\":\"true\",\"c\":\"true\",\"d\":\"true\",\"dc\":\"true\",\"dt\":\"true\",\"f\":\"true\",\"i\":\"true\",\"s\":\"true\",\"si\":\"true\",\"st\":\"true\",\"t\":\"true\",\"ts\":\"true\",\"tz\":\"true\",\"v\":\"true\"}}","numFiles":"0","transient_lastDdlTime":"1534913062","bucketing_version":"2","transactional":"true"}]},"12":{"str":"MANAGED_TABLE"},"15":{"tf":0},"17":{"str":"hive"},"18":{"i32":1},"19":{"i64":1}}','null','hdfs://mycluster/warehouse/tablespace/managed/hive/t1_default/delta_001_001_/00_0###delta_001_001_',1534913062)> > 2018-08-22T04:44:22,773 ERROR [pool-8-thread-190]: > metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - > MetaException(message:Unable to add write notification log > org.postgresql.util.PSQLException: ERROR: syntax error at or near "UTC" > Position: 1032 > at > org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284) > at > org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003) > at > org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200) > at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424) > at > 
org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321) > at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:313) > at > com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:92) > at > com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) > at > org.apache.hive.hcatalog.listener.DbNotificationListener.addWriteNotificationLog(DbNotificationListener.java:766) > at > org.apache.hive.hcatalog.listener.DbNotificationListener.onAcidWrite(DbNotificationListener.java:657) > at > org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier.lambda$static$12(MetaStoreListenerNotifier.java:249) > at > org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier.notifyEventWithDirectSql(MetaStoreListenerNotifier.java:305) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.addWriteNotificationLog(TxnHandler.java:1617) > at >
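The failure mode behind this stack trace can be reproduced at the string level. The sketch below is illustrative (the table and column names are simplified, not Hive's actual metastore schema): concatenating a value that legitimately contains single quotes, such as the column type `timestamp with local time zone('UTC')`, lets `UTC` escape the string literal, which is exactly the `syntax error at or near "UTC"` Postgres reports.

```java
public class QuotingSketch {
    // Illustrative table/column names, not Hive's actual metastore schema.
    static String naiveInsert(String tableObj) {
        return "insert into WNL (WNL_TABLE_OBJ) values ('" + tableObj + "')";
    }

    // Strip quoted string literals; whatever remains is what the SQL
    // parser sees as bare tokens outside any literal.
    static String outsideLiterals(String sql) {
        StringBuilder out = new StringBuilder();
        boolean inLiteral = false;
        for (char c : sql.toCharArray()) {
            if (c == '\'') { inLiteral = !inLiteral; continue; }
            if (!inLiteral) out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The serialized table object legitimately contains single quotes.
        String tableObj = "timestamp with local time zone('UTC')";
        String bad = naiveInsert(tableObj);
        // "UTC" ends up outside the literal, matching the reported
        // PSQLException: syntax error at or near "UTC".
        System.out.println(outsideLiterals(bad).contains("UTC")); // prints "true"
    }
}
```

The usual fix is to use a JDBC `PreparedStatement` with a `?` placeholder and `setString`, so the value is bound by the driver and never appears in the SQL text at all.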
[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489
[ https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615129#comment-16615129 ] Janaki Lahorani commented on HIVE-20526: The Druid test failures seen are not related to this patch. Filed HIVE-20562 to track these failures. > Add test case for HIVE-20489 > > > Key: HIVE-20526 > URL: https://issues.apache.org/jira/browse/HIVE-20526 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-20526.1.patch, HIVE-20526.1.patch > > > Add a test case for the issue discussed in HIVE-20489. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20273) Spark jobs aren't cancelled if getSparkJobInfo or getSparkStagesInfo
[ https://issues.apache.org/jira/browse/HIVE-20273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615124#comment-16615124 ] Sahil Takiar commented on HIVE-20273: - This patch makes the following fixes: * Fixes the "double-nesting" issue by removing the second clause of the if statement mentioned above * Adds proper and consistent handling of interrupts to {{getWebUIURL}} and {{getAppID}} in {{RemoteSparkJobStatus}} * Adds several unit tests that validate that {{killJob}} is invoked whenever an RPC call is interrupted > Spark jobs aren't cancelled if getSparkJobInfo or getSparkStagesInfo > > > Key: HIVE-20273 > URL: https://issues.apache.org/jira/browse/HIVE-20273 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-20273.1.patch > > > HIVE-19053 and HIVE-19733 added handling of {{InterruptedException}} to > {{RemoteSparkJobStatus#getSparkJobInfo}} and > {{RemoteSparkJobStatus#getSparkStagesInfo}}. Now, these methods catch > {{InterruptedException}} and wrap the exception in a {{HiveException}} and > then throw the new {{HiveException}}. > This new {{HiveException}} is then caught in > {{RemoteSparkJobMonitor#startMonitor}} which then looks for exceptions that > match the condition: > {code:java} > if (e instanceof InterruptedException || > (e instanceof HiveException && e.getCause() instanceof > InterruptedException)) > {code} > If this condition is met (in this case it is), the exception will again be > wrapped in another {{HiveException}} and is thrown again. So the final > exception is a {{HiveException}} that wraps a {{HiveException}} that wraps an > {{InterruptedException}}. > The double nesting of hive exception causes the logic in > {{SparkTask#setSparkException}} to break, and doesn't cause {{killJob}} to > get triggered. > This causes interrupted Hive queries to not kill their corresponding Spark > jobs. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
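The double-nesting problem described above can be seen in a small self-contained sketch. A plain `Exception` subclass stands in for Hive's `HiveException` here: the single-level check quoted in the issue misses `HiveException(HiveException(InterruptedException))`, while walking the full cause chain does not.

```java
public class InterruptCheckSketch {
    // Stand-in for org.apache.hadoop.hive.ql.metadata.HiveException.
    static class HiveException extends Exception {
        HiveException(Throwable cause) { super(cause); }
    }

    // The single-level check quoted above: misses doubly wrapped interrupts.
    static boolean shallowCheck(Throwable e) {
        return e instanceof InterruptedException
            || (e instanceof HiveException && e.getCause() instanceof InterruptedException);
    }

    // Walking the whole cause chain is robust to any nesting depth.
    static boolean deepCheck(Throwable e) {
        for (Throwable t = e; t != null; t = t.getCause()) {
            if (t instanceof InterruptedException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // HiveException(HiveException(InterruptedException)), as produced when
        // the status methods wrap once and the monitor wraps again.
        Throwable doublyNested =
            new HiveException(new HiveException(new InterruptedException()));
        System.out.println(shallowCheck(doublyNested)); // prints "false"
        System.out.println(deepCheck(doublyNested));    // prints "true"
    }
}
```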
[jira] [Commented] (HIVE-20535) Add new configuration to set the size of the global compile lock
[ https://issues.apache.org/jira/browse/HIVE-20535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615107#comment-16615107 ] Hive QA commented on HIVE-20535: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12939602/HIVE-20535.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14939 tests executed *Failed tests:* {noformat} org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testGlobalCompileLockTimeout (batchId=247) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13785/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13785/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13785/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12939602 - PreCommit-HIVE-Build > Add new configuration to set the size of the global compile lock > > > Key: HIVE-20535 > URL: https://issues.apache.org/jira/browse/HIVE-20535 > Project: Hive > Issue Type: Task > Components: HiveServer2 >Reporter: denys kuzmenko >Assignee: denys kuzmenko >Priority: Major > Attachments: HIVE-20535.1.patch > > > When removing the compile lock, it is quite risky to remove it entirely. > It would be good to provide a pool size for the concurrent compilation, so > the administrator can limit the load -- This message was sent by Atlassian JIRA (v7.6.3#76005)
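A bounded compile-lock pool like the one proposed could plausibly be built on a counting semaphore. This is a sketch of the idea only, an assumption about the approach rather than the patch's actual code: a pool size of 1 reproduces today's single global compile lock, while larger values allow bounded concurrent compilation.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class CompileLockPoolSketch {
    private final Semaphore slots;

    // poolSize = 1 behaves like the existing single global compile lock;
    // larger values let the administrator cap concurrent compilations.
    CompileLockPoolSketch(int poolSize) {
        this.slots = new Semaphore(poolSize, /* fair */ true);
    }

    // Returns false if no compile slot frees up within the timeout.
    boolean acquire(long timeout, TimeUnit unit) throws InterruptedException {
        return slots.tryAcquire(timeout, unit);
    }

    void release() {
        slots.release();
    }

    public static void main(String[] args) throws InterruptedException {
        CompileLockPoolSketch pool = new CompileLockPoolSketch(2);
        // Two compilations can proceed; a third times out immediately.
        pool.acquire(0, TimeUnit.MILLISECONDS);
        pool.acquire(0, TimeUnit.MILLISECONDS);
        System.out.println(pool.acquire(0, TimeUnit.MILLISECONDS)); // prints "false"
    }
}
```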
[jira] [Updated] (HIVE-20537) Multi-column joins estimates with uncorrelated columns different in CBO and Hive
[ https://issues.apache.org/jira/browse/HIVE-20537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20537: --- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. > Multi-column joins estimates with uncorrelated columns different in CBO and > Hive > > > Key: HIVE-20537 > URL: https://issues.apache.org/jira/browse/HIVE-20537 > Project: Hive > Issue Type: Bug > Components: Statistics >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20537.01.patch, HIVE-20537.01.patch, > HIVE-20537.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20535) Add new configuration to set the size of the global compile lock
[ https://issues.apache.org/jira/browse/HIVE-20535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16615077#comment-16615077 ] Hive QA commented on HIVE-20535: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} common in master has 64 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 0s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} common: The patch generated 1 new + 424 unchanged - 1 fixed = 425 total (was 425) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s{color} | {color:red} ql: The patch generated 12 new + 142 unchanged - 6 fixed = 154 total (was 148) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13785/dev-support/hive-personality.sh | | git revision | master / 35f86c7 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13785/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13785/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-13785/yetus/patch-asflicense-problems.txt | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13785/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add new configuration to set the size of the global compile lock > > > Key: HIVE-20535 > URL: https://issues.apache.org/jira/browse/HIVE-20535 > Project: Hive > Issue Type: Task > Components: HiveServer2 >Reporter: denys kuzmenko >Assignee: denys kuzmenko >Priority: Major > Attachments: HIVE-20535.1.patch > > > When removing the compile lock, it is quite risky to remove it entirely. > It would be good to provide a pool size for the concurrent compilation, so > the administrator can limit the load -- This message was sent by
[jira] [Updated] (HIVE-20561) Use the position of the Kafka Consumer to track progress instead of Consumer Records offsets
[ https://issues.apache.org/jira/browse/HIVE-20561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-20561: -- Attachment: HIVE-20561.patch > Use the position of the Kafka Consumer to track progress instead of Consumer > Records offsets > > > Key: HIVE-20561 > URL: https://issues.apache.org/jira/browse/HIVE-20561 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20561.patch > > > Kafka Partitions with transactional messages (post 0.11) will include commit > or abort markers which indicate the result of a transaction. The markers are > not returned to applications, yet have an offset in the log. Therefore the > end-of-stream position can be the offset of a control message. > This patch changes the way we keep track of the consumer position by using > {code} consumer.position(topicP) {code} as opposed to using the offset of the > consumed messages. > I have also done some refactoring that will hopefully improve code readability. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
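The difference between the two tracking strategies can be shown with a simplified simulation (plain Java, no Kafka dependency; the log layout and field names are illustrative): when the final log entry is a transaction control marker, the consumer's position advances past it, but no record for it is ever returned, so tracking the last consumed record's offset under-reports progress.

```java
import java.util.Arrays;
import java.util.List;

public class PositionTrackingSketch {
    // A log entry is either a data record or an invisible transaction
    // control marker (commit/abort), which occupies an offset but is
    // never returned to the application.
    static final class Entry {
        final long offset;
        final boolean controlMarker;
        Entry(long offset, boolean controlMarker) {
            this.offset = offset;
            this.controlMarker = controlMarker;
        }
    }

    // Returns {lastRecordOffset + 1, consumerPosition} after consuming
    // the log; the second value models consumer.position(topicPartition).
    static long[] consume(List<Entry> log) {
        long lastRecordOffset = -1;
        long position = 0; // next offset the consumer would fetch
        for (Entry e : log) {
            position = e.offset + 1;   // position advances past markers too
            if (!e.controlMarker) {
                lastRecordOffset = e.offset;
            }
        }
        return new long[] { lastRecordOffset + 1, position };
    }

    public static void main(String[] args) {
        List<Entry> log = Arrays.asList(
            new Entry(0, false),
            new Entry(1, false),
            new Entry(2, true));       // commit marker ends the stream
        long[] r = consume(log);
        // Record-offset tracking stalls at 2, while the real end-of-stream
        // position is 3: a progress check against 3 would never pass.
        System.out.println(r[0] + " vs " + r[1]); // prints "2 vs 3"
    }
}
```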
[jira] [Updated] (HIVE-20561) Use the position of the Kafka Consumer to track progress instead of Consumer Records offsets
[ https://issues.apache.org/jira/browse/HIVE-20561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-20561: -- Status: Patch Available (was: Open) > Use the position of the Kafka Consumer to track progress instead of Consumer > Records offsets > > > Key: HIVE-20561 > URL: https://issues.apache.org/jira/browse/HIVE-20561 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 4.0.0 > > > Kafka Partitions with transactional messages (post 0.11) will include commit > or abort markers which indicate the result of a transaction. The markers are > not returned to applications, yet have an offset in the log. Therefore the > end-of-stream position can be the offset of a control message. > This patch changes the way we keep track of the consumer position by using > {code} consumer.position(topicP) {code} as opposed to using the offset of the > consumed messages. > I have also done some refactoring that will hopefully improve code readability. -- This message was sent by Atlassian JIRA (v7.6.3#76005)