[jira] [Commented] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979034#comment-16979034 ] Hive QA commented on HIVE-22505: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 12s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 6 new + 397 unchanged - 0 fixed = 403 total (was 397) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19523/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-19523/yetus/diff-checkstyle-ql.txt | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19523/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, HIVE-22505.patch, > query_error.out, query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException -- This message was sent by Atlassian Jira (v8.3.4#803005)
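The failure mode described in HIVE-22505 above — a planner selecting an operator whose expected input type does not match the data actually fed to it — can be shown with a minimal, self-contained Java sketch. The class names below are illustrative stand-ins, not Hive's actual vectorized operator classes:

```java
class OperatorSelection {
    interface Row { }
    static class PlainRow implements Row { }
    static class VectorBatch implements Row { }

    /** An operator that assumes vectorized input; selecting it for row-mode data fails at runtime. */
    static class VectorizedJoinOp {
        void process(Row input) {
            VectorBatch batch = (VectorBatch) input; // throws ClassCastException for a PlainRow
        }
    }

    public static void main(String[] args) {
        VectorizedJoinOp op = new VectorizedJoinOp();
        op.process(new VectorBatch()); // data matches the operator's expectation: fine
        try {
            op.process(new PlainRow()); // wrong operator selected for this data shape
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: operator/data mismatch");
        }
    }
}
```

The compiler cannot catch this: the cast is legal at compile time and only breaks when the runtime data shape disagrees with the planner's choice, which is why the query fails during execution rather than planning.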
[jira] [Updated] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-22369: -- Attachment: HIVE-22369.02.patch > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831] > or a > [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776]. > When HiveCalciteUtil.getTopLevelSelect is invoked on this, it looks for a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633] > node in the tree, which it won't find in case a HiveTableFunctionScan was > returned. This is why TestNewGetSplitsFormat is failing with return path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
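The root cause described in HIVE-22369 — a lookup that only recognizes one of the two possible plan roots — can be sketched in a few lines of plain Java. The types below are illustrative stand-ins, not Calcite's or Hive's real classes:

```java
class TopLevelSelectLookup {
    static class RelNode { }
    static class Project extends RelNode { }
    static class TableFunctionScan extends RelNode { }

    /** Mimics a lookup that only recognizes a Project root, as getTopLevelSelect does for HiveProject. */
    static Project findTopLevelProject(RelNode root) {
        return (root instanceof Project) ? (Project) root : null;
    }

    public static void main(String[] args) {
        System.out.println(findTopLevelProject(new Project()) != null);           // true: found
        System.out.println(findTopLevelProject(new TableFunctionScan()) != null); // false: the failing path
    }
}
```

The fix direction the issue implies is to handle both root types, rather than assuming the plan root is always a Project.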
[jira] [Updated] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-22369: -- Attachment: (was: HIVE-22369.02.patch) > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831] > or a > [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776]. > When HiveCalciteUtil.getTopLevelSelect is invoked on this, it looks for a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633] > node in the tree, which it won't find in case a HiveTableFunctionScan was > returned. This is why TestNewGetSplitsFormat is failing with return path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22317) Beeline-site parser does not handle the variable substitution correctly
[ https://issues.apache.org/jira/browse/HIVE-22317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979013#comment-16979013 ] Hive QA commented on HIVE-22317: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986377/HIVE-22317.01.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19522/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19522/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19522/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2019-11-21 06:28:56.341 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-19522/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2019-11-21 06:28:56.345 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at df8e185 HIVE-22513: Constant propagation of casted column in filter ops can cause incorrect results (Adam Szita, reviewed by Zoltan Haindrich, Peter Vary) + git clean -f -d Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at df8e185 HIVE-22513: Constant propagation of casted column in filter ops can cause incorrect results (Adam Szita, reviewed by Zoltan Haindrich, Peter Vary) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2019-11-21 06:28:57.558 + rm -rf ../yetus_PreCommit-HIVE-Build-19522 + mkdir ../yetus_PreCommit-HIVE-Build-19522 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-19522 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-19522/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/beeline/src/java/org/apache/hive/beeline/hs2connection/BeelineSiteParser.java: does not exist in index error: a/beeline/src/test/org/apache/hive/beeline/hs2connection/TestBeelineSiteParser.java: does not exist in index error: a/beeline/src/test/resources/beeline-site.xml: does not exist in index error: beeline/src/test/resources/beeline-site.xml: does not exist in index error: src/java/org/apache/hive/beeline/hs2connection/BeelineSiteParser.java: does not exist in index error: src/test/org/apache/hive/beeline/hs2connection/TestBeelineSiteParser.java: does not exist in index error: src/test/resources/beeline-site.xml: does 
not exist in index The patch does not appear to apply with p0, p1, or p2 + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-19522 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12986377 - PreCommit-HIVE-Build > Beeline-site parser does not handle the variable substitution correctly > --- > > Key: HIVE-22317 > URL: https://issues.apache.org/jira/browse/HIVE-22317 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 4.0.0 > Environment: Hive-4.0.0 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-22317.01.patch, HIVE-22317.patch > > > beeline-site.xml > {code:java} > <configuration xmlns:xi="http://www.w3.org/2001/XInclude"> > <property> > <name>beeline.hs2.jdbc.url.container</name> > <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value> > </property> > <property> > <name>beeline.hs2.jdbc.url.default</name> > <value>test</value> > </property> > <property> > <name>beeline.hs2.jdbc.url.test</name> >
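For context on the beeline-site.xml quoted in HIVE-22317: beeline.hs2.jdbc.url.default does not hold a JDBC URL itself but the name of another entry under the same prefix (here test), which the parser must resolve to the value of beeline.hs2.jdbc.url.test. A minimal sketch of that one-level indirection follows; the class and method names are hypothetical, not BeelineSiteParser's actual code:

```java
import java.util.HashMap;
import java.util.Map;

class JdbcUrlResolver {
    static final String PREFIX = "beeline.hs2.jdbc.url.";

    /** The "default" entry names another key under the same prefix; resolve it to a URL. */
    static String resolveDefaultUrl(Map<String, String> conf) {
        String alias = conf.get(PREFIX + "default"); // e.g. "test"
        return alias == null ? null : conf.get(PREFIX + alias);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(PREFIX + "test", "jdbc:hive2://host:10000/default");
        conf.put(PREFIX + "default", "test");
        System.out.println(resolveDefaultUrl(conf)); // prints jdbc:hive2://host:10000/default
    }
}
```

A substitution bug of the kind the issue title describes would surface here: if the alias lookup is skipped or mis-keyed, the client connects with the literal alias string instead of the resolved URL.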
[jira] [Commented] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979012#comment-16979012 ] Hive QA commented on HIVE-22369: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986369/HIVE-22369.02.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17715 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation (batchId=279) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19521/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19521/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19521/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12986369 - PreCommit-HIVE-Build > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831] > or a > [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776]. > When HiveCalciteUtil.getTopLevelSelect is invoked on this, it looks for a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633] > node in the tree, which it won't find in case a HiveTableFunctionScan was > returned. This is why TestNewGetSplitsFormat is failing with return path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978995#comment-16978995 ] Hive QA commented on HIVE-22369: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 15s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 45s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 20 unchanged - 33 fixed = 21 total (was 53) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 11s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19521/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-19521/yetus/diff-checkstyle-itests_hive-unit.txt | | modules | C: ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19521/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a >
[jira] [Commented] (HIVE-18685) Add catalogs to Hive
[ https://issues.apache.org/jira/browse/HIVE-18685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978988#comment-16978988 ] Shaohui Liu commented on HIVE-18685: Very impressive feature. Any progress update? [~gates] > Add catalogs to Hive > > > Key: HIVE-18685 > URL: https://issues.apache.org/jira/browse/HIVE-18685 > Project: Hive > Issue Type: New Feature > Components: Metastore, Parser, Security, SQL >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HMS Catalog Design Doc.pdf > > > SQL supports two levels of namespaces, called in the spec catalogs and > schemas (with schema being equivalent to Hive's database). I propose to add > the upper level of catalog. The attached design doc covers the use cases, > requirements, and brief discussion of how it will be implemented in a > backwards compatible way. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-18685) Add catalogs to Hive
[ https://issues.apache.org/jira/browse/HIVE-18685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978988#comment-16978988 ] Shaohui Liu edited comment on HIVE-18685 at 11/21/19 5:26 AM: -- [~gates] Very impressive feature. Any progress update? Thanks~ was (Author: liushaohui): Very impressive feature. Any progress update? [~gates] > Add catalogs to Hive > > > Key: HIVE-18685 > URL: https://issues.apache.org/jira/browse/HIVE-18685 > Project: Hive > Issue Type: New Feature > Components: Metastore, Parser, Security, SQL >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HMS Catalog Design Doc.pdf > > > SQL supports two levels of namespaces, called in the spec catalogs and > schemas (with schema being equivalent to Hive's database). I propose to add > the upper level of catalog. The attached design doc covers the use cases, > requirements, and brief discussion of how it will be implemented in a > backwards compatible way. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22486) Send only accessed columns for masking policies request
[ https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978980#comment-16978980 ] Hive QA commented on HIVE-22486: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986363/HIVE-22486.03.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 17709 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19520/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19520/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19520/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12986363 - PreCommit-HIVE-Build > Send only accessed columns for masking policies request > --- > > Key: HIVE-22486 > URL: https://issues.apache.org/jira/browse/HIVE-22486 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, > HIVE-22486.03.patch, HIVE-22486.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we send all columns for masking request, even if they are not > accessed by the given query. We could send only those columns for which the > masking policy will be necessary. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22486) Send only accessed columns for masking policies request
[ https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978963#comment-16978963 ] Hive QA commented on HIVE-22486: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 21s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19520/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19520/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Send only accessed columns for masking policies request > --- > > Key: HIVE-22486 > URL: https://issues.apache.org/jira/browse/HIVE-22486 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, > HIVE-22486.03.patch, HIVE-22486.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we send all columns for masking request, even if they are not > accessed by the given query. We could send only those columns for which the > masking policy will be necessary. -- This message was sent by Atlassian Jira (v8.3.4#803005)
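The improvement described in HIVE-22486 amounts to pruning the column list sent with the masking-policy request down to the columns the query actually reads. A minimal illustrative sketch of that pruning step (the class and method names are hypothetical, not Hive's actual API):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class MaskingRequestPruner {
    /** Keep only the table columns that the query actually accesses, preserving table column order. */
    static List<String> columnsForMaskingRequest(List<String> tableColumns, Set<String> accessed) {
        return tableColumns.stream()
                .filter(accessed::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("id", "name", "ssn", "salary");
        Set<String> accessed = new LinkedHashSet<>(Arrays.asList("name", "ssn"));
        System.out.println(columnsForMaskingRequest(all, accessed)); // prints [name, ssn]
    }
}
```

Sending the pruned list keeps the policy request smaller and avoids evaluating masking policies for columns the query never touches.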
[jira] [Commented] (HIVE-22520) MS-SQL server: Load partition throws error in TxnHandler (ACID dataset)
[ https://issues.apache.org/jira/browse/HIVE-22520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978961#comment-16978961 ] Gopal Vijayaraghavan commented on HIVE-22520: - https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/tools/SQLGenerator.java#L161 There's a loop here to avoid this particular bug. {code} insertPreparedStmts = sqlGenerator.createInsertValuesPreparedStmt(dbConn, "TXN_COMPONENTS (tc_txnid, tc_database, tc_table, tc_partition, tc_operation_type, tc_writeid)", rows, paramsList); for(PreparedStatement pst : insertPreparedStmts) { modCount = pst.executeUpdate(); } {code} > MS-SQL server: Load partition throws error in TxnHandler (ACID dataset) > --- > > Key: HIVE-22520 > URL: https://issues.apache.org/jira/browse/HIVE-22520 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.1.2 >Reporter: Rajesh Balamohan >Priority: Major > > When loading an ACID table with MS-SQL server as the backend, it ends up throwing > the following exception. > > {noformat} > thrift.ProcessFunction: Internal error processing add_dynamic_partitions > org.apache.hadoop.hive.metastore.api.MetaException: Unable to insert into > from transaction database com.microsoft.sqlserver.jdbc.SQLServerException: > The incoming request has too many parameters. The server supports a maximum > of 2100 parameters. Reduce the number of parameters and resend the request. 
> at > com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:254) > at > com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1608) > at > com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:578) > at > com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:508) > at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7240) > at > com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2869) > at > com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:243) > at > com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:218) > at > com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:461) > at > com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61) > at > com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.addDynamicPartitions(TxnHandler.java:3149) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_dynamic_partitions(HiveMetaStore.java:7824) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy32.add_dynamic_partitions(Unknown Source) > at > 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_dynamic_partitions.getResult(ThriftHiveMetastore.java:19038) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_dynamic_partitions.getResult(ThriftHiveMetastore.java:19022) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {noformat} > https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L3258 > -- This message was sent by Atlassian
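The 2100-parameter ceiling is a hard per-request SQL Server limit, so the fix in the comment above is to cap how many rows any single multi-row INSERT binds. A minimal sketch of that batching arithmetic, assuming hypothetical names (`ParamBatcher`, `maxRowsPerBatch`, `partition`) that are not Hive's actual SQLGenerator API:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the batching idea behind splitting a multi-row INSERT into
 * several prepared statements, so no single statement binds more than
 * SQL Server's 2100-parameter cap. Names here are hypothetical.
 */
public class ParamBatcher {

    // SQL Server rejects requests with more than 2100 bound parameters.
    static final int MAX_PARAMS = 2100;

    /** Maximum rows a single INSERT may carry when each row binds paramsPerRow values. */
    static int maxRowsPerBatch(int paramsPerRow, int maxParams) {
        return maxParams / paramsPerRow;
    }

    /** Partition all rows into batches that each stay under the parameter cap. */
    static <T> List<List<T>> partition(List<T> rows, int paramsPerRow) {
        int batchSize = maxRowsPerBatch(paramsPerRow, MAX_PARAMS);
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return batches;
    }
}
```

For TXN_COMPONENTS, which binds 6 columns per row, this caps each statement at 2100 / 6 = 350 rows; each resulting batch then gets its own prepared statement and `executeUpdate()` call, as in the loop quoted above.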
[jira] [Commented] (HIVE-22522) llap doesn't work using complex join operation
[ https://issues.apache.org/jira/browse/HIVE-22522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978960#comment-16978960 ] Gopal Vijayaraghavan commented on HIVE-22522: - bq. SERVICE_UNAVAILABLE This looks like a YARN services misconfiguration at first sight, unrelated to the query as such. Any OOM, etc., will cause a restart, which should re-enable the service by healing it. Are there log lines which say "Exceeding its Physical Memory Limit Error" in the YARN NodeManager logs? > llap doesn't work using complex join operation > -- > > Key: HIVE-22522 > URL: https://issues.apache.org/jira/browse/HIVE-22522 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 3.1.1 >Reporter: lv haiyang >Priority: Major > > ERROR : FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.tez.TezTask. > Dag received [DAG_TERMINATE, SERVICE_PLUGIN_ERROR] in RUNNING state. > Error reported by TaskScheduler [[2:LLAP]][SERVICE_UNAVAILABLE] > No LLAP Daemons are runningVertex killed, vertexName=Reducer 3, > vertexId=vertex_1574126686177_0029_47_08, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:1, Vertex vertex_1574126686177_0029_47_08 [Reducer > 3] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Map 1, > vertexId=vertex_1574126686177_0029_47_05, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:23, Vertex vertex_1574126686177_0029_47_05 [Map 1] > killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 2, > vertexId=vertex_1574126686177_0029_47_07, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:68, Vertex vertex_1574126686177_0029_47_07 > [Reducer 2] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 4, > 
vertexId=vertex_1574126686177_0029_47_06, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:72, Vertex vertex_1574126686177_0029_47_06 > [Reducer 4] killed/failed due to: > DAG_TERMINATED]DAG did not succeed due to SERVICE_PLUGIN_ERROR. > failedVertices:0 killedVertices:4 > INFO : Completed executing > command(queryId=hive_20191120101841_c7d177d8-28bb-48f8-a14f-eb65fc3b); > Time taken: 557.077 seconds > Error: Error while processing statement: FAILED: Execution Error, > return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. > Dag received [DAG_TERMINATE, SERVICE_PLUGIN_ERROR] in RUNNING state. > Error reported by TaskScheduler [[2:LLAP]][SERVICE_UNAVAILABLE] > No LLAP Daemons are runningVertex killed, vertexName=Reducer 3, > vertexId=vertex_1574126686177_0029_47_08, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:1, Vertex vertex_1574126686177_0029_47_08 [Reducer > 3] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Map 1, > vertexId=vertex_1574126686177_0029_47_05, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:23, Vertex vertex_1574126686177_0029_47_05 [Map 1] > killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 2, > vertexId=vertex_1574126686177_0029_47_07, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:68, Vertex vertex_1574126686177_0029_47_07 > [Reducer 2] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 4, > vertexId=vertex_1574126686177_0029_47_06, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:72, Vertex vertex_1574126686177_0029_47_06 > [Reducer 4] 
killed/failed due to: > DAG_TERMINATED]DAG did not succeed due to SERVICE_PLUGIN_ERROR. > failedVertices:0 killedVertices: > 4 (state=08S01,code=2) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22478: -- Attachment: HIVE-22478.4.patch > Import command fails from lower version to higher version when > hive.strict.managed.tables enabled > - > > Key: HIVE-22478 > URL: https://issues.apache.org/jira/browse/HIVE-22478 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22478.1.patch, HIVE-22478.2.patch, > HIVE-22478.3.patch, HIVE-22478.4.patch > > > Created non-acid managed orc table in lower version, after inserting some > records, exported the table. > In higher version where hive.strict.managed.tables=true, > 1) on first attempt, ACID Table is getting created, but LoadTable is failing > with below exception > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MoveTask : Write > id is not set in the config by open txn task for migration > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:400) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2712) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2383) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747) > at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226){code} > 2) On second attempt, as the table already exists as ACID, > ImportSemanticAnalyzer is creating writeId for the ACID table & LoadTable > command is successful. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Status: Patch Available (was: Open) > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, HIVE-22505.patch, > query_error.out, query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Attachment: HIVE-22505.6.patch > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, HIVE-22505.patch, > query_error.out, query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Status: Open (was: Patch Available) > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978944#comment-16978944 ] Hive QA commented on HIVE-22505: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986361/HIVE-22505.5.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17711 tests executed *Failed tests:* {noformat} org.apache.hive.service.server.TestHS2HttpServer.testApiServletActiveSessions (batchId=240) org.apache.hive.service.server.TestHS2HttpServer.testApiServletHistoricalQueries (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19519/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19519/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19519/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12986361 - PreCommit-HIVE-Build > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22522) llap doesn't work using complex join operation
[ https://issues.apache.org/jira/browse/HIVE-22522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978943#comment-16978943 ] lv haiyang commented on HIVE-22522: --- Do I need to configure parameters in hive-site.xml ? > llap doesn't work using complex join operation > -- > > Key: HIVE-22522 > URL: https://issues.apache.org/jira/browse/HIVE-22522 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 3.1.1 >Reporter: lv haiyang >Priority: Major > > ERROR : FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.tez.TezTask. > Dag received [DAG_TERMINATE, SERVICE_PLUGIN_ERROR] in RUNNING state. > Error reported by TaskScheduler [[2:LLAP]][SERVICE_UNAVAILABLE] > No LLAP Daemons are runningVertex killed, vertexName=Reducer 3, > vertexId=vertex_1574126686177_0029_47_08, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:1, Vertex vertex_1574126686177_0029_47_08 [Reducer > 3] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Map 1, > vertexId=vertex_1574126686177_0029_47_05, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:23, Vertex vertex_1574126686177_0029_47_05 [Map 1] > killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 2, > vertexId=vertex_1574126686177_0029_47_07, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:68, Vertex vertex_1574126686177_0029_47_07 > [Reducer 2] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 4, > vertexId=vertex_1574126686177_0029_47_06, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:72, Vertex vertex_1574126686177_0029_47_06 > [Reducer 4] killed/failed due to: > 
DAG_TERMINATED]DAG did not succeed due to SERVICE_PLUGIN_ERROR. > failedVertices:0 killedVertices:4 > INFO : Completed executing > command(queryId=hive_20191120101841_c7d177d8-28bb-48f8-a14f-eb65fc3b); > Time taken: 557.077 seconds > Error: Error while processing statement: FAILED: Execution Error, > return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. > Dag received [DAG_TERMINATE, SERVICE_PLUGIN_ERROR] in RUNNING state. > Error reported by TaskScheduler [[2:LLAP]][SERVICE_UNAVAILABLE] > No LLAP Daemons are runningVertex killed, vertexName=Reducer 3, > vertexId=vertex_1574126686177_0029_47_08, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:1, Vertex vertex_1574126686177_0029_47_08 [Reducer > 3] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Map 1, > vertexId=vertex_1574126686177_0029_47_05, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:23, Vertex vertex_1574126686177_0029_47_05 [Map 1] > killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 2, > vertexId=vertex_1574126686177_0029_47_07, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:68, Vertex vertex_1574126686177_0029_47_07 > [Reducer 2] killed/failed due to: > DAG_TERMINATED]Vertex killed, vertexName=Reducer 4, > vertexId=vertex_1574126686177_0029_47_06, > diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not > succeed due to DAG_TERMINATED, > failedTasks:0 killedTasks:72, Vertex vertex_1574126686177_0029_47_06 > [Reducer 4] killed/failed due to: > DAG_TERMINATED]DAG did not succeed due to SERVICE_PLUGIN_ERROR. > failedVertices:0 killedVertices: > 4 (state=08S01,code=2) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22317) Beeline-site parser does not handle the variable substitution correctly
[ https://issues.apache.org/jira/browse/HIVE-22317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978933#comment-16978933 ] Rajkumar Singh commented on HIVE-22317: --- [~maheshk114] Thanks for reviewing this, updated the patch with the Unit test and comments as suggested. > Beeline-site parser does not handle the variable substitution correctly > --- > > Key: HIVE-22317 > URL: https://issues.apache.org/jira/browse/HIVE-22317 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 4.0.0 > Environment: Hive-4.0.0 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-22317.01.patch, HIVE-22317.patch > > > beeline-site.xml
> {code:java}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>beeline.hs2.jdbc.url.container</name>
>     <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.default</name>
>     <value>test</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.test</name>
>     <value>${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.llap</name>
>     <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive</value>
>   </property>
> </configuration>
> {code}
> beeline fails to connect because it does not parse the substituted value correctly
> {code:java}
> beeline
> Error in parsing jdbc url: ${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue from beeline-site.xml
> beeline> {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
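The failing value is a plain ${key} reference into the other beeline.hs2.jdbc.url.* properties, so the parser needs a substitution pass before treating the string as a JDBC URL. A minimal sketch of that expansion, using a hypothetical `VarSubstitutor` helper (not Beeline's actual code):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Sketch of the ${...} expansion the beeline-site.xml parser needs:
 * replace each ${key} with the value of that key from the parsed
 * properties before URL parsing. Hypothetical helper, not Hive's code.
 */
public class VarSubstitutor {

    // Matches ${any.property.name} and captures the name.
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    /** Expand every ${key} occurrence; unknown keys are left as-is. */
    static String substitute(String value, Map<String, String> props) {
        Matcher m = VAR.matcher(value);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String replacement = props.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

With this pass in place, beeline.hs2.jdbc.url.test would resolve to the container URL with `?tez.queue.name=myqueue` appended instead of being handed to the JDBC URL parser unexpanded.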
[jira] [Updated] (HIVE-22317) Beeline-site parser does not handle the variable substitution correctly
[ https://issues.apache.org/jira/browse/HIVE-22317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh updated HIVE-22317: -- Attachment: HIVE-22317.01.patch Status: Patch Available (was: Open) > Beeline-site parser does not handle the variable substitution correctly > --- > > Key: HIVE-22317 > URL: https://issues.apache.org/jira/browse/HIVE-22317 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 4.0.0 > Environment: Hive-4.0.0 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-22317.01.patch, HIVE-22317.patch > > > beeline-site.xml
> {code:java}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>beeline.hs2.jdbc.url.container</name>
>     <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.default</name>
>     <value>test</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.test</name>
>     <value>${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.llap</name>
>     <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive</value>
>   </property>
> </configuration>
> {code}
> beeline fails to connect because it does not parse the substituted value correctly
> {code:java}
> beeline
> Error in parsing jdbc url: ${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue from beeline-site.xml
> beeline> {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22317) Beeline-site parser does not handle the variable substitution correctly
[ https://issues.apache.org/jira/browse/HIVE-22317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajkumar Singh updated HIVE-22317: -- Status: Open (was: Patch Available) > Beeline-site parser does not handle the variable substitution correctly > --- > > Key: HIVE-22317 > URL: https://issues.apache.org/jira/browse/HIVE-22317 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 4.0.0 > Environment: Hive-4.0.0 >Reporter: Rajkumar Singh >Assignee: Rajkumar Singh >Priority: Major > Attachments: HIVE-22317.01.patch, HIVE-22317.patch > > > beeline-site.xml
> {code:java}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>beeline.hs2.jdbc.url.container</name>
>     <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.default</name>
>     <value>test</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.test</name>
>     <value>${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue</value>
>   </property>
>   <property>
>     <name>beeline.hs2.jdbc.url.llap</name>
>     <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive</value>
>   </property>
> </configuration>
> {code}
> beeline fails to connect because it does not parse the substituted value correctly
> {code:java}
> beeline
> Error in parsing jdbc url: ${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue from beeline-site.xml
> beeline> {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978921#comment-16978921 ] Hive QA commented on HIVE-22505: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 59s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 8s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 46s{color} | {color:red} ql: The patch generated 6 new + 397 unchanged - 0 fixed = 403 total (was 397) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19519/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-19519/yetus/diff-checkstyle-ql.txt | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19519/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978902#comment-16978902 ] Hive QA commented on HIVE-22478: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986359/HIVE-22478.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17710 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[import_non_acid_to_acid] (batchId=40) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19518/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19518/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19518/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12986359 - PreCommit-HIVE-Build > Import command fails from lower version to higher version when > hive.strict.managed.tables enabled > - > > Key: HIVE-22478 > URL: https://issues.apache.org/jira/browse/HIVE-22478 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22478.1.patch, HIVE-22478.2.patch, > HIVE-22478.3.patch > > > Created non-acid managed orc table in lower version, after inserting some > records, exported the table. 
> In higher version where hive.strict.managed.tables=true, > 1) on first attempt, ACID Table is getting created, but LoadTable is failing > with below exception > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MoveTask : Write > id is not set in the config by open txn task for migration > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:400) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2712) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2383) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747) > at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226){code} > 2) On second attempt, as the table already exists as ACID, > ImportSemanticAnalyzer is creating writeId for the ACID table & LoadTable > command is successful. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978876#comment-16978876 ] Hive QA commented on HIVE-22478: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 18s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19518/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19518/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Import command fails from lower version to higher version when > hive.strict.managed.tables enabled > - > > Key: HIVE-22478 > URL: https://issues.apache.org/jira/browse/HIVE-22478 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22478.1.patch, HIVE-22478.2.patch, > HIVE-22478.3.patch > > > Created non-acid managed orc table in lower version, after inserting some > records, exported the table. 
> In higher version where hive.strict.managed.tables=true, > 1) on first attempt, ACID Table is getting created, but LoadTable is failing > with below exception > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MoveTask : Write > id is not set in the config by open txn task for migration > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:400) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2712) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2383) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747) > at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at >
[jira] [Comment Edited] (HIVE-22435) Exception when using VectorTopNKeyOperator operator
[ https://issues.apache.org/jira/browse/HIVE-22435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978866#comment-16978866 ] Jesus Camacho Rodriguez edited comment on HIVE-22435 at 11/21/19 12:54 AM: --- [~rajesh.balamohan], did [~kkasa] address your comments? Is this ready to be pushed? Thanks was (Author: jcamachorodriguez): [~rbalamohan], did [~kkasa] address your comments? Is this ready to be pushed? Thanks > Exception when using VectorTopNKeyOperator operator > --- > > Key: HIVE-22435 > URL: https://issues.apache.org/jira/browse/HIVE-22435 > Project: Hive > Issue Type: Bug >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-20150.15.patch, HIVE-22435.1.patch, > HIVE-22435.2.patch, HIVE-22435.3.patch, HIVE-22435.4.patch, > HIVE-22435.5.patch, HIVE-22435.6.patch, HIVE-22435.7.patch > > > Steps to reproduce: > 1. Apply the attached patch > {code} > git apply -3 -p0 HIVE-20150.15.patch > {code} > 2. rebuild project > {code:java} > mvn clean install -DskipTests > {code} > 3. Run the following test > {code:java} > mvn test -DskipSparkTests -Dtest=TestMiniLlapLocalCliDriver > -Dqfile=limit_pushdown3.q -pl itests/qtest -Pitests > {code} > Query execution fails with exception > {code:java} > select ctinyint, count(distinct(cdouble)) from alltypesorc group by ctinyint > order by ctinyint limit 20 > {code} > {code:java} > [ERROR] Failures: > [ERROR] TestMiniLlapLocalCliDriver.testCliDriver:59 Client execution failed > with error code = 2 > running > select ctinyint, count(distinct(cdouble)) from alltypesorc group by ctinyint > order by ctinyint limit 20 > fname=limit_pushdown3.q > See ./ql/target/tmp/log/hive.log or ./itests/qtest/target/tmp/log/hive.log, > or check ./ql/target/surefire-reports or > ./itests/qtest/target/surefire-reports/ for specific test cases logs. 
> org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, > vertexName=Reducer 2, vertexId=vertex_1572454329409_0001_9_01, > diagnostics=[Task failed, taskId=task_1572454329409_0001_9_01_00, > diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( > failure ) : > attempt_1572454329409_0001_9_01_00_0:java.lang.RuntimeException: > java.lang.RuntimeException: cannot find field key from [0:key._col0, > 1:key._col1] > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: cannot find field key from > [0:key._col0, 1:key._col1] > at > 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:538) > at > org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:153) > at > org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:80) > at > org.apache.hadoop.hive.ql.exec.TopNKeyOperator.initializeOp(TopNKeyOperator.java:106) > at > org.apache.hadoop.hive.ql.exec.vector.VectorTopNKeyOperator.initializeOp(VectorTopNKeyOperator.java:71) > at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:360) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.init(ReduceRecordProcessor.java:191) > at >
[jira] [Work logged] (HIVE-22488) Break up DDLSemanticAnalyzer - extract Table creation analyzers
[ https://issues.apache.org/jira/browse/HIVE-22488?focusedWorklogId=347104=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347104 ] ASF GitHub Bot logged work on HIVE-22488: - Author: ASF GitHub Bot Created on: 21/Nov/19 00:52 Start Date: 21/Nov/19 00:52 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #843: HIVE-22488 Break up DDLSemanticAnalyzer - extract Table creation analyzers URL: https://github.com/apache/hive/pull/843#discussion_r348849492 ## File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/creation/createlike/CreateTableLikeDesc.java ## @@ -16,7 +16,7 @@ * limitations under the License. */ -package org.apache.hadoop.hive.ql.ddl.table.creation; Review comment: Instead of `org.apache.hadoop.hive.ql.ddl.table.creation.create` or `org.apache.hadoop.hive.ql.ddl.table.creation.drop`, isn't it more natural to have `org.apache.hadoop.hive.ql.ddl.table.create` or `org.apache.hadoop.hive.ql.ddl.table.drop`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 347104) Time Spent: 20m (was: 10m) > Break up DDLSemanticAnalyzer - extract Table creation analyzers > --- > > Key: HIVE-22488 > URL: https://issues.apache.org/jira/browse/HIVE-22488 > Project: Hive > Issue Type: Sub-task >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Attachments: HIVE-22488.01.patch > > Time Spent: 20m > Remaining Estimate: 0h > > DDLSemanticAnalyzer is a huge class, more than 4000 lines long. 
The goal is > to refactor it in order to have everything cut into more handleable classes > under the package org.apache.hadoop.hive.ql.exec.ddl: > * have a separate class for each analyzer > * have a package for each operation, containing an analyzer, a description, > and an operation, so the amount of classes under a package is more manageable > Step #9: extract the table creation analyzers from DDLSemanticAnalyzer, and > move them under the new package. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22435) Exception when using VectorTopNKeyOperator operator
[ https://issues.apache.org/jira/browse/HIVE-22435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978866#comment-16978866 ] Jesus Camacho Rodriguez commented on HIVE-22435: [~rbalamohan], did [~kkasa] address your comments? Is this ready to be pushed? Thanks > Exception when using VectorTopNKeyOperator operator > --- > > Key: HIVE-22435 > URL: https://issues.apache.org/jira/browse/HIVE-22435 > Project: Hive > Issue Type: Bug >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Attachments: HIVE-20150.15.patch, HIVE-22435.1.patch, > HIVE-22435.2.patch, HIVE-22435.3.patch, HIVE-22435.4.patch, > HIVE-22435.5.patch, HIVE-22435.6.patch, HIVE-22435.7.patch > > > Steps to reproduce: > 1. Apply the attached patch > {code} > git apply -3 -p0 HIVE-20150.15.patch > {code} > 2. rebuild project > {code:java} > mvn clean install -DskipTests > {code} > 3. Run the following test > {code:java} > mvn test -DskipSparkTests -Dtest=TestMiniLlapLocalCliDriver > -Dqfile=limit_pushdown3.q -pl itests/qtest -Pitests > {code} > Query execution fails with exception > {code:java} > select ctinyint, count(distinct(cdouble)) from alltypesorc group by ctinyint > order by ctinyint limit 20 > {code} > {code:java} > [ERROR] Failures: > [ERROR] TestMiniLlapLocalCliDriver.testCliDriver:59 Client execution failed > with error code = 2 > running > select ctinyint, count(distinct(cdouble)) from alltypesorc group by ctinyint > order by ctinyint limit 20 > fname=limit_pushdown3.q > See ./ql/target/tmp/log/hive.log or ./itests/qtest/target/tmp/log/hive.log, > or check ./ql/target/surefire-reports or > ./itests/qtest/target/surefire-reports/ for specific test cases logs. 
> org.apache.hadoop.hive.ql.metadata.HiveException: Vertex failed, > vertexName=Reducer 2, vertexId=vertex_1572454329409_0001_9_01, > diagnostics=[Task failed, taskId=task_1572454329409_0001_9_01_00, > diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( > failure ) : > attempt_1572454329409_0001_9_01_00_0:java.lang.RuntimeException: > java.lang.RuntimeException: cannot find field key from [0:key._col0, > 1:key._col1] > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: cannot find field key from > [0:key._col0, 1:key._col1] > at > 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:538) > at > org.apache.hadoop.hive.serde2.objectinspector.StandardStructObjectInspector.getStructFieldRef(StandardStructObjectInspector.java:153) > at > org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:80) > at > org.apache.hadoop.hive.ql.exec.TopNKeyOperator.initializeOp(TopNKeyOperator.java:106) > at > org.apache.hadoop.hive.ql.exec.vector.VectorTopNKeyOperator.initializeOp(VectorTopNKeyOperator.java:71) > at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:360) > at > org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.init(ReduceRecordProcessor.java:191) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) > ... 15 more > ], TaskAttempt 1 failed, info=[Error: Error while running task ( failure ) : >
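The root cause visible in the trace above is a field-name mismatch during operator initialization: the TopNKey operator asks the reduce-side object inspector for a column named `key`, while the inspector only exposes the prefixed field names `key._col0` and `key._col1`. A minimal, self-contained sketch of that lookup failure (illustrative stub only; these are not Hive's ObjectInspector classes):

```java
import java.util.Arrays;
import java.util.List;

public class FieldLookup {
    // Resolve a column name to a field index, or fail with the same message
    // shape as the stack trace above (cf. getStandardStructFieldRef).
    static int fieldIndex(List<String> fields, String name) {
        int i = fields.indexOf(name);
        if (i < 0) {
            throw new RuntimeException(
                "cannot find field " + name + " from " + fields);
        }
        return i;
    }

    public static void main(String[] args) {
        // Reduce-side inspectors expose prefixed names, not the bare key name.
        List<String> reduceSideFields = Arrays.asList("key._col0", "key._col1");
        try {
            fieldIndex(reduceSideFields, "key");
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Looking up either prefixed name succeeds; looking up the bare name `key` throws, which is the exception surfaced through TopNKeyOperator.initializeOp in the trace.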
[jira] [Commented] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978863#comment-16978863 ] Jesus Camacho Rodriguez commented on HIVE-22369: [~mgergely], I believe latest patch has some unrelated changes. Can we create a new issue and submit a new patch for them? Other than that, +1 > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831] > or a > [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776]. > When HiveCalciteUtil.getTopLevelSelect is invoked on this it is looking for > a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633] > node in the tree, which it won't find in case a HiveTableFunctionScan was > returned. This is why TestNewGetSplitsFormat is failing with return path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
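The description above boils down to a type-dispatch gap: the helper only searches for one root node type, so a plan whose root is of the other type is never matched. A hedged, stand-alone sketch of that failure shape (stub classes and method names are assumptions, not the real Calcite/Hive types):

```java
public class TopLevelLookup {
    interface RelNode {}
    static class Project implements RelNode {}
    static class TableFunctionScan implements RelNode {}

    // Only knows how to recognize a Project root; any other root type,
    // such as a TableFunctionScan, is silently missed.
    static Project getTopLevelProject(RelNode root) {
        return (root instanceof Project) ? (Project) root : null;
    }

    public static void main(String[] args) {
        System.out.println(getTopLevelProject(new Project()) != null);           // true
        System.out.println(getTopLevelProject(new TableFunctionScan()) != null); // false
    }
}
```

The fix direction the issue implies is to handle both root types at the lookup site rather than assuming a Project is always on top.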
[jira] [Work logged] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?focusedWorklogId=347096=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347096 ] ASF GitHub Bot logged work on HIVE-22369: - Author: ASF GitHub Bot Created on: 21/Nov/19 00:40 Start Date: 21/Nov/19 00:40 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #845: HIVE-22369 Handle HiveTableFunctionScan at return path URL: https://github.com/apache/hive/pull/845#discussion_r348844843 ## File path: itests/hive-unit/src/test/java/org/apache/hive/jdbc/BaseJdbcWithMiniLlap.java ## @@ -530,12 +498,10 @@ protected int processQuery(String currentDatabase, String query, int numSplits, InputSplit[] splits = inputFormat.getSplits(job, numSplits); // Fetch rows from splits -boolean first = true; Review comment: These changes do not seem related to this patch? If they are not, can we submit it in a different one? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 347096) Time Spent: 1h 10m (was: 1h) > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831] > or a > [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776]. > When HiveCalciteUtil.getTopLevelSelect is invoked on this it is looking for > a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633] > node in the tree, which it won't find in case a HiveTableFunctionScan was > returned. This is why TestNewGetSplitsFormat is failing with return path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22476) Hive datediff function provided inconsistent results when hive.fetch.task.conversion is set to none
[ https://issues.apache.org/jira/browse/HIVE-22476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978862#comment-16978862 ] Hive QA commented on HIVE-22476: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986357/HIVE-22476.7.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17710 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation (batchId=279) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19517/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19517/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19517/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12986357 - PreCommit-HIVE-Build > Hive datediff function provided inconsistent results when > hive.fetch.task.conversion is set to none > --- > > Key: HIVE-22476 > URL: https://issues.apache.org/jira/browse/HIVE-22476 > Project: Hive > Issue Type: Bug >Reporter: Slim Bouguerra >Assignee: Slim Bouguerra >Priority: Major > Attachments: HIVE-22476.2.patch, HIVE-22476.3.patch, > HIVE-22476.5.patch, HIVE-22476.6.patch, HIVE-22476.7.patch, HIVE-22476.7.patch > > > The actual issue stems from the different date parsers used by various parts of > the engine.
> Fetch task uses udfdatediff via {code} > org.apache.hadoop.hive.ql.udf.generic.GenericUDFToDate{code} while the > vectorized llap execution uses {code}VectorUDFDateDiffScalarCol{code}. > This fix is meant to be not very intrusive and will add more support to the > GenericUDFToDate by enhancing the parser. > For the longer term it will be better to use one parser for all the operators. > Thanks [~Rajkumar Singh] for the repro example > {code} > create external table testdatediff(datetimecol string) stored as orc; > insert into testdatediff values ('2019-09-09T10:45:49+02:00'),('2019-07-24'); > select datetimecol from testdatediff where datediff(cast(current_timestamp as > string), datetimecol)<183; > set hive.fetch.task.conversion=none; > select datetimecol from testdatediff where datediff(cast(current_timestamp as > string), datetimecol)<183; > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
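The inconsistency described above comes from two code paths parsing the same string with different rules. The sketch below is not Hive's actual parser code; it only demonstrates, with stock JDK parsers, how a lenient prefix parser and a strict full-string parser disagree on an ISO-8601 timestamp with a zone offset, which is exactly the shape of input in the repro ('2019-09-09T10:45:49+02:00'):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.format.DateTimeParseException;

public class InconsistentParsers {
    // Parser A: lenient, consumes the leading "yyyy-MM-dd" prefix
    // and silently ignores everything after it.
    static boolean parsesWithA(String s) {
        try {
            new SimpleDateFormat("yyyy-MM-dd").parse(s);
            return true;
        } catch (ParseException e) {
            return false;
        }
    }

    // Parser B: strict, requires the whole string to be a bare date.
    static boolean parsesWithB(String s) {
        try {
            LocalDate.parse(s);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String[] inputs = {"2019-07-24", "2019-09-09T10:45:49+02:00"};
        for (String s : inputs) {
            System.out.println(s + " -> A=" + parsesWithA(s)
                    + " B=" + parsesWithB(s));
        }
    }
}
```

Both parsers accept '2019-07-24', but only the lenient one accepts the offset timestamp; a row like that therefore passes the datediff predicate in one execution mode and yields NULL (filtering the row out) in the other, which matches the symptom in the report.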
[jira] [Commented] (HIVE-22476) Hive datediff function provided inconsistent results when hive.fetch.task.conversion is set to none
[ https://issues.apache.org/jira/browse/HIVE-22476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978822#comment-16978822 ] Hive QA commented on HIVE-22476: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 3s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 18s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19517/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19517/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Hive datediff function provided inconsistent results when > hive.fetch.task.conversion is set to none > --- > > Key: HIVE-22476 > URL: https://issues.apache.org/jira/browse/HIVE-22476 > Project: Hive > Issue Type: Bug >Reporter: Slim Bouguerra >Assignee: Slim Bouguerra >Priority: Major > Attachments: HIVE-22476.2.patch, HIVE-22476.3.patch, > HIVE-22476.5.patch, HIVE-22476.6.patch, HIVE-22476.7.patch, HIVE-22476.7.patch > > > The actual issue stems from the different date parsers used by various parts of > the engine. > Fetch task uses udfdatediff via {code} > org.apache.hadoop.hive.ql.udf.generic.GenericUDFToDate{code} while the > vectorized llap execution uses {code}VectorUDFDateDiffScalarCol{code}. > This fix is meant to be not very intrusive and will add more support to the > GenericUDFToDate by enhancing the parser. > For the longer term it will be better to use one parser for all the operators. > Thanks [~Rajkumar Singh] for the repro example > {code} > create external table testdatediff(datetimecol string) stored as orc; > insert into testdatediff
[jira] [Commented] (HIVE-22511) Fix case of Month token in datetime to string conversion
[ https://issues.apache.org/jira/browse/HIVE-22511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978807#comment-16978807 ] Hive QA commented on HIVE-22511: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986347/HIVE-22511.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 17709 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19516/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19516/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19516/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12986347 - PreCommit-HIVE-Build > Fix case of Month token in datetime to string conversion > > > Key: HIVE-22511 > URL: https://issues.apache.org/jira/browse/HIVE-22511 > Project: Hive > Issue Type: Bug >Reporter: Gabor Kaszab >Assignee: Karen Coppage >Priority: Major > Attachments: HIVE-22511.01.patch > > > Currently Hive doesn't allow month tokens with weird spelling like 'MONth', > 'mONTH' etc. However, Oracle does and Hive should follow that approach. > The rules: > - If the first letter is lowercase then the output is lowercase: 'mONTH' -> > 'may' > - If the first two letters are uppercase then the output is uppercase: > 'MOnth' -> 'MAY' > - If the first letter is uppercase and the second is lowercase then the > output is capitalized: 'Month' -> 'May'. 
> Oracle: > {code:java} > select to_char(to_timestamp('2019-05-10', '-MM-DD'), 'MOnth') from > DUAL; > MAY 2019 > select to_char(to_timestamp('2019-05-10', '-MM-DD'), 'mONTH') from > DUAL; > may 2019 > select to_char(to_timestamp('2019-05-10', '-MM-DD'), 'MoNTH') from > DUAL; > May 2019 > {code} > Please check the same for 'Name of the day' tokens. -- This message was sent by Atlassian Jira (v8.3.4#803005)
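The three rules quoted in the issue can be captured in a few lines. This is a hypothetical sketch (the method name and shape are assumptions, not Hive's implementation): look at the case of the first two letters of the format token and fold the month name accordingly.

```java
public class MonthCase {
    // Oracle-style case rule: the case of the first two letters of the
    // format token decides the case of the output month name.
    static String applyCase(String token, String monthName) {
        if (Character.isLowerCase(token.charAt(0))) {
            return monthName.toLowerCase();        // 'mONTH' -> 'may'
        }
        if (Character.isUpperCase(token.charAt(1))) {
            return monthName.toUpperCase();        // 'MOnth' -> 'MAY'
        }
        // First letter uppercase, second lowercase -> capitalized
        return monthName.substring(0, 1).toUpperCase()
             + monthName.substring(1).toLowerCase(); // 'Month' -> 'May'
    }

    public static void main(String[] args) {
        System.out.println(applyCase("mONTH", "May")); // may
        System.out.println(applyCase("MOnth", "May")); // MAY
        System.out.println(applyCase("Month", "May")); // May
    }
}
```

Note that only the first two letters matter, which is why spellings like 'MoNTH' and 'Month' produce the same capitalized output; the same rule would presumably extend to day-name tokens as the issue suggests.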
[jira] [Commented] (HIVE-22511) Fix case of Month token in datetime to string conversion
[ https://issues.apache.org/jira/browse/HIVE-22511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978771#comment-16978771 ] Hive QA commented on HIVE-22511: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19516/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | modules | C: common U: common | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19516/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Fix case of Month token in datetime to string conversion > > > Key: HIVE-22511 > URL: https://issues.apache.org/jira/browse/HIVE-22511 > Project: Hive > Issue Type: Bug >Reporter: Gabor Kaszab >Assignee: Karen Coppage >Priority: Major > Attachments: HIVE-22511.01.patch > > > Currently Hive doesn't allow month tokens with weird spelling like 'MONth', > 'mONTH' etc. However, Oracle does and Hive should follow that approach. > The rules: > - If the first letter is lowercase then the output is lowercase: 'mONTH' -> > 'may' > - If the first two letters are uppercase then the output is uppercase: > 'MOnth' -> 'MAY' > - If the first letter is uppercase and the second is lowercase then the > output is capitalized: 'Month' -> 'May'. 
> Oracle: > {code:java} > select to_char(to_timestamp('2019-05-10', 'YYYY-MM-DD'), 'MOnth YYYY') from > DUAL; > MAY 2019 > select to_char(to_timestamp('2019-05-10', 'YYYY-MM-DD'), 'mONTH YYYY') from > DUAL; > may 2019 > select to_char(to_timestamp('2019-05-10', 'YYYY-MM-DD'), 'MoNTH YYYY') from > DUAL; > May 2019 > {code} > Please check the same for 'Name of the day' tokens. -- This message was sent by Atlassian Jira (v8.3.4#803005)
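The three casing rules above can be sketched as a small Java helper. This is a hypothetical illustration, not Hive's actual implementation; the class and method names (`MonthCase`, `applyTokenCase`) are mine:

```java
public class MonthCase {
    // Sketch of the casing rules above (hypothetical helper, not Hive's API):
    // the case of the output month name follows the case of the token's
    // first one or two letters.
    public static String applyTokenCase(String token, String monthName) {
        boolean firstUpper = Character.isUpperCase(token.charAt(0));
        boolean secondUpper = token.length() > 1 && Character.isUpperCase(token.charAt(1));
        if (!firstUpper) {
            return monthName.toLowerCase();        // 'mONTH' -> "may"
        }
        if (secondUpper) {
            return monthName.toUpperCase();        // 'MOnth' -> "MAY"
        }
        String lower = monthName.toLowerCase();    // 'Month' -> "May" (capitalized)
        return Character.toUpperCase(lower.charAt(0)) + lower.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(applyTokenCase("mONTH", "May")); // may
        System.out.println(applyTokenCase("MOnth", "May")); // MAY
        System.out.println(applyTokenCase("Month", "May")); // May
    }
}
```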
[jira] [Commented] (HIVE-21378) Support TeraData in JDBC StorageHandler
[ https://issues.apache.org/jira/browse/HIVE-21378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978769#comment-16978769 ] Jesus Camacho Rodriguez commented on HIVE-21378: [~justinleet], thanks for your contribution! Can you create a PR so I can review the patch and we can move this forward? Thanks > Support TeraData in JDBC StorageHandler > --- > > Key: HIVE-21378 > URL: https://issues.apache.org/jira/browse/HIVE-21378 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.0, 4.0.0 >Reporter: Donald FOSSOUO >Assignee: Justin Leet >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21378.1.patch > > > Make TeraData a first-class member of JdbcStorageHandler. It doesn't work > even when using POSTGRES, MSSQL or any other existing storage handler. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?focusedWorklogId=347020=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347020 ] ASF GitHub Bot logged work on HIVE-22369: - Author: ASF GitHub Bot Created on: 20/Nov/19 21:50 Start Date: 20/Nov/19 21:50 Worklog Time Spent: 10m Work Description: miklosgergely commented on issue #845: HIVE-22369 Handle HiveTableFunctionScan at return path URL: https://github.com/apache/hive/pull/845#issuecomment-556444626 @jcamachor, I think I understood what you asked for, please check out the new patch, and let me know if this is how you wanted it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 347020) Time Spent: 1h (was: 50m) > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831] > or a > 
[HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776]. > When HiveCalciteUtil.getTopLevelSelect is invoked on this it looks for > a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633] > node in the tree, which it won't find if a HiveTableFunctionScan was > returned. This is why TestNewGetSplitsFormat is failing with return path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
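The failure mode described above can be reduced to a toy model. The code below is illustrative only (the nested types merely borrow Hive's class names; this is not the actual Calcite utility code): a lookup that only recognizes one node type comes back empty when the plan root is of the other type.

```java
public class TopLevelLookup {
    interface RelNode {}                      // stand-in for Calcite's RelNode
    static class HiveProject implements RelNode {}
    static class HiveTableFunctionScan implements RelNode {}

    // Mirrors a getTopLevelSelect-style search that only matches HiveProject.
    static HiveProject findTopLevelProject(RelNode root) {
        return (root instanceof HiveProject) ? (HiveProject) root : null;
    }

    public static void main(String[] args) {
        System.out.println(findTopLevelProject(new HiveProject()) != null);           // true
        // A HiveTableFunctionScan root yields null, which callers then trip over.
        System.out.println(findTopLevelProject(new HiveTableFunctionScan()) != null); // false
    }
}
```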
[jira] [Updated] (HIVE-22369) Handle HiveTableFunctionScan at return path
[ https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-22369: -- Attachment: HIVE-22369.02.patch > Handle HiveTableFunctionScan at return path > --- > > Key: HIVE-22369 > URL: https://issues.apache.org/jira/browse/HIVE-22369 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch > > Time Spent: 1h > Remaining Estimate: 0h > > The > [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573] > at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by > CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831] > or a > [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776]. > When HiveCalciteUtil.getTopLevelSelect is invoked on this it looks for > a > [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633] > node in the tree, which it won't find if a HiveTableFunctionScan was > returned. This is why TestNewGetSplitsFormat is failing with return path. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22506) Read-only transactions feature flag
[ https://issues.apache.org/jira/browse/HIVE-22506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978763#comment-16978763 ] Hive QA commented on HIVE-22506: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986343/HIVE-22506.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19515/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19515/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19515/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2019-11-20 21:36:13.036 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-19515/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2019-11-20 21:36:13.040 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 7534f82..d377783 branch-1 -> origin/branch-1 9bcdb54..6002c51 branch-1.2 -> origin/branch-1.2 292a98f..0359921 branch-2.1 -> origin/branch-2.1 b148507..67f9139 branch-2.2 -> origin/branch-2.2 + git reset --hard HEAD HEAD is now at df8e185 HIVE-22513: Constant propagation of casted column in filter ops can cause incorrect results (Adam Szita, reviewed by Zoltan Haindrich, Peter Vary) + git clean -f -d Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at df8e185 HIVE-22513: Constant propagation of casted column in filter ops can cause incorrect results (Adam Szita, reviewed by Zoltan Haindrich, Peter Vary) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2019-11-20 21:36:15.255 + rm -rf ../yetus_PreCommit-HIVE-Build-19515 + mkdir ../yetus_PreCommit-HIVE-Build-19515 + git gc + cp -R . 
../yetus_PreCommit-HIVE-Build-19515 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-19515/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils.java: does not exist in index error: a/ql/src/test/org/apache/hadoop/hive/ql/parse/TestParseUtils.java: does not exist in index error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/Driver.java:473 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/Driver.java' with conflicts. Going to apply patch with: git apply -p1 error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/Driver.java:473 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/Driver.java' with conflicts. U ql/src/java/org/apache/hadoop/hive/ql/Driver.java + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-19515 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12986343 - PreCommit-HIVE-Build > Read-only transactions feature flag > --- > > Key: HIVE-22506 > URL: https://issues.apache.org/jira/browse/HIVE-22506 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Denys Kuzmenko >Assignee: Denys Kuzmenko >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22506.1.patch, HIVE-22506.2.patch > > > Introduce a feature flag, so that read-only transaction functionality could >
[jira] [Commented] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978760#comment-16978760 ] Hive QA commented on HIVE-22514: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986336/HIVE-22514.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 17710 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19514/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19514/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19514/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12986336 - PreCommit-HIVE-Build > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The built-in > ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced > from the outside. If log events are generated at a very fast rate this queue > can grow large. > !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
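Since ScheduledThreadPoolExecutor is hard-wired to its unbounded work queue, one common mitigation is to route the writer tasks through a plain ThreadPoolExecutor with a bounded queue and an explicit saturation policy. The sketch below is a generic illustration under my own assumptions (class name, pool size, and queue capacity are mine), not the actual HIVE-22514 patch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedEventWriter {
    public static void main(String[] args) throws InterruptedException {
        // Single writer thread, at most 64 queued events; once the queue is
        // full, further submissions are dropped instead of growing memory.
        ThreadPoolExecutor writer = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(64),
                new ThreadPoolExecutor.DiscardPolicy());
        writer.execute(() -> System.out.println("event written"));
        writer.shutdown();
        writer.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

DiscardPolicy trades completeness of the event log for bounded memory; a CallerRunsPolicy would instead apply back-pressure to the query thread, which may or may not be acceptable for a logging hook.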
[jira] [Updated] (HIVE-22480) IndexOutOfBounds exception while reading ORC files written with empty positions list in first row index entry
[ https://issues.apache.org/jira/browse/HIVE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-22480: --- Fix Version/s: 2.2.1 > IndexOutOfBounds exception while reading ORC files written with empty > positions list in first row index entry > - > > Key: HIVE-22480 > URL: https://issues.apache.org/jira/browse/HIVE-22480 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.2, 2.3.6 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 1.3.0, 1.2.3, 2.1.2, 2.2.1 > > Attachments: HIVE-22480.branch-1.patch, HIVE-22480.branch-2.patch > > > Although this should not happen, we may end up with empty positions list in > first row index entry due to some bug (see ORC-569). Since positions in first > row index are always zero, it would be good if the reader could still read > these files instead of fail. > The error stack looks like this: > {code} > ERROR : FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, vertexName=Map 1, > vertexId=vertex_1566395485735_11359_2_00, diagnostics=[Task failed, > taskId=task_1566395485735_11359_2_00_00, diagnostics=[TaskAttempt 0 > failed, info=[Error: Error while running task ( failure ) : > attempt_1566395485735_11359_2_00_00_0:java.lang.RuntimeException: > java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:218) > at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:377) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206) > at > 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145) > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83) > at > org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:694) > at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:653) > at > org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145) > at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:525) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:171) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:188) > ... 14 more > Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:380) > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203) >
[jira] [Updated] (HIVE-22480) IndexOutOfBounds exception while reading ORC files written with empty positions list in first row index entry
[ https://issues.apache.org/jira/browse/HIVE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-22480: --- Fix Version/s: 1.2.3 > IndexOutOfBounds exception while reading ORC files written with empty > positions list in first row index entry > - > > Key: HIVE-22480 > URL: https://issues.apache.org/jira/browse/HIVE-22480 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.2, 2.3.6 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 1.3.0, 1.2.3, 2.1.2 > > Attachments: HIVE-22480.branch-1.patch, HIVE-22480.branch-2.patch > > > Although this should not happen, we may end up with empty positions list in > first row index entry due to some bug (see ORC-569). Since positions in first > row index are always zero, it would be good if the reader could still read > these files instead of fail. > The error stack looks like this: > {code} > ERROR : FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, vertexName=Map 1, > vertexId=vertex_1566395485735_11359_2_00, diagnostics=[Task failed, > taskId=task_1566395485735_11359_2_00_00, diagnostics=[TaskAttempt 0 > failed, info=[Error: Error while running task ( failure ) : > attempt_1566395485735_11359_2_00_00_0:java.lang.RuntimeException: > java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:218) > at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:377) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206) > at > 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145) > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83) > at > org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:694) > at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:653) > at > org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145) > at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:525) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:171) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:188) > ... 14 more > Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:380) > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203) > ... 25
[jira] [Resolved] (HIVE-22480) IndexOutOfBounds exception while reading ORC files written with empty positions list in first row index entry
[ https://issues.apache.org/jira/browse/HIVE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez resolved HIVE-22480. Fix Version/s: (was: 2.3.7) (was: 2.4.0) (was: 1.2.3) 2.1.2 Resolution: Fixed > IndexOutOfBounds exception while reading ORC files written with empty > positions list in first row index entry > - > > Key: HIVE-22480 > URL: https://issues.apache.org/jira/browse/HIVE-22480 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.2, 2.3.6 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 1.3.0, 2.1.2 > > Attachments: HIVE-22480.branch-1.patch, HIVE-22480.branch-2.patch > > > Although this should not happen, we may end up with empty positions list in > first row index entry due to some bug (see ORC-569). Since positions in first > row index are always zero, it would be good if the reader could still read > these files instead of fail. > The error stack looks like this: > {code} > ERROR : FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, vertexName=Map 1, > vertexId=vertex_1566395485735_11359_2_00, diagnostics=[Task failed, > taskId=task_1566395485735_11359_2_00_00, diagnostics=[TaskAttempt 0 > failed, info=[Error: Error while running task ( failure ) : > attempt_1566395485735_11359_2_00_00_0:java.lang.RuntimeException: > java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:218) > at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:377) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206) > at > 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:145) > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83) > at > org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:694) > at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:653) > at > org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145) > at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:525) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:171) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:188) > ... 14 more > Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:380) > at >
[jira] [Updated] (HIVE-22486) Send only accessed columns for masking policies request
[ https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-22486: --- Attachment: HIVE-22486.03.patch > Send only accessed columns for masking policies request > --- > > Key: HIVE-22486 > URL: https://issues.apache.org/jira/browse/HIVE-22486 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, > HIVE-22486.03.patch, HIVE-22486.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we send all columns for masking request, even if they are not > accessed by the given query. We could send only those columns for which the > masking policy will be necessary. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22486) Send only accessed columns for masking policies request
[ https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978720#comment-16978720 ] Jesus Camacho Rodriguez commented on HIVE-22486: [~kgyrtkirk], could you review this patch? https://github.com/apache/hive/pull/848 Thanks > Send only accessed columns for masking policies request > --- > > Key: HIVE-22486 > URL: https://issues.apache.org/jira/browse/HIVE-22486 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, > HIVE-22486.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we send all columns for masking request, even if they are not > accessed by the given query. We could send only those columns for which the > masking policy will be necessary. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22486) Send only accessed columns for masking policies request
[ https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-22486: -- Labels: pull-request-available (was: ) > Send only accessed columns for masking policies request > --- > > Key: HIVE-22486 > URL: https://issues.apache.org/jira/browse/HIVE-22486 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, > HIVE-22486.patch > > > Currently, we send all columns for masking request, even if they are not > accessed by the given query. We could send only those columns for which the > masking policy will be necessary. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (HIVE-22486) Send only accessed columns for masking policies request
[ https://issues.apache.org/jira/browse/HIVE-22486?focusedWorklogId=346996=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346996 ] ASF GitHub Bot logged work on HIVE-22486: - Author: ASF GitHub Bot Created on: 20/Nov/19 20:46 Start Date: 20/Nov/19 20:46 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #848: HIVE-22486 URL: https://github.com/apache/hive/pull/848 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 346996) Remaining Estimate: 0h Time Spent: 10m > Send only accessed columns for masking policies request > --- > > Key: HIVE-22486 > URL: https://issues.apache.org/jira/browse/HIVE-22486 > Project: Hive > Issue Type: Improvement > Components: CBO >Affects Versions: 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, > HIVE-22486.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently, we send all columns for masking request, even if they are not > accessed by the given query. We could send only those columns for which the > masking policy will be necessary. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978721#comment-16978721 ] Hive QA commented on HIVE-22514: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 50s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 22s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19514/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-19514/yetus/diff-checkstyle-ql.txt | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19514/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The built-in > ScheduledThreadPoolExecutor uses an unbounded queue that cannot be replaced > from the outside. If log events are generated at a very fast rate, this queue > can grow large. > !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
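One general mitigation for the unbounded-queue problem described in HIVE-22514 above is sketched below; this is not the actual patch, and it assumes the scheduled rollover work can be kept on a separate executor. `ScheduledThreadPoolExecutor` hard-codes its internal `DelayedWorkQueue`, but the writer tasks themselves can go through a plain `ThreadPoolExecutor` whose bounded queue plus `CallerRunsPolicy` applies back-pressure to the event producer instead of growing the heap.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedEventQueueSketch {
    static int runWithBoundedQueue(int events) throws InterruptedException {
        // Single writer thread, at most 8 queued tasks. When the queue is
        // full, CallerRunsPolicy makes the submitting thread run the task
        // itself: bursts slow the producer down rather than consume memory.
        ThreadPoolExecutor writer = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(8),
                new ThreadPoolExecutor.CallerRunsPolicy());
        CountDownLatch done = new CountDownLatch(events);
        for (int i = 0; i < events; i++) {
            writer.execute(done::countDown); // stand-in for a log writer task
        }
        done.await();   // every event was handled; none were dropped
        writer.shutdown();
        return events;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("handled " + runWithBoundedQueue(10_000) + " events");
    }
}
```

The trade-off is that a saturated writer briefly blocks the hook's caller, which is usually preferable to an out-of-memory condition in the server.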
[jira] [Commented] (HIVE-22480) IndexOutOfBounds exception while reading ORC files written with empty positions list in first row index entry
[ https://issues.apache.org/jira/browse/HIVE-22480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978717#comment-16978717 ] Prasanth Jayachandran commented on HIVE-22480: -- lgtm, +1 > IndexOutOfBounds exception while reading ORC files written with empty > positions list in first row index entry > - > > Key: HIVE-22480 > URL: https://issues.apache.org/jira/browse/HIVE-22480 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 1.2.2, 2.3.6 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 1.3.0, 1.2.3, 2.4.0, 2.3.7 > > Attachments: HIVE-22480.branch-1.patch, HIVE-22480.branch-2.patch > > > Although this should not happen, we may end up with an empty positions list in > the first row index entry due to some bug (see ORC-569). Since positions in the first > row index are always zero, it would be good if the reader could still read > these files instead of failing. > The error stack looks like this: > {code} > ERROR : FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, vertexName=Map 1, > vertexId=vertex_1566395485735_11359_2_00, diagnostics=[Task failed, > taskId=task_1566395485735_11359_2_00_00, diagnostics=[TaskAttempt 0 > failed, info=[Error: Error while running task ( failure ) : > attempt_1566395485735_11359_2_00_00_0:java.lang.RuntimeException: > java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:218) > at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:377) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.RuntimeException: java.io.IOException: > java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206) > at > 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:145) > at > org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157) > at > org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83) > at > org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:694) > at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:653) > at > org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145) > at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:525) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:171) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:188) > ... 14 more > Caused by: java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0 > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:380) > at >
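The fix direction described for HIVE-22480 above amounts to a bounds check before indexing: an empty positions list in the first row index entry can be treated as "all positions are zero", since that is what a correct writer would have emitted anyway. A minimal sketch with hypothetical names follows; the real change lives in the ORC reader, not in this helper.

```java
import java.util.Collections;
import java.util.List;

public class FirstRowIndexPositions {
    // Hypothetical helper: return the i-th position of a row index entry,
    // tolerating the empty list that the ORC-569 writer bug can produce
    // for the first entry, where every position is zero by definition.
    static long positionAt(List<Long> positions, int i) {
        if (positions.isEmpty()) {
            return 0L;
        }
        // A bare get() here is what raised IndexOutOfBoundsException: Index: 0.
        return positions.get(i);
    }

    public static void main(String[] args) {
        System.out.println(positionAt(Collections.emptyList(), 0)); // prints 0
        System.out.println(positionAt(List.of(5L, 7L), 1));         // prints 7
    }
}
```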
[jira] [Commented] (HIVE-22461) NPE Metastore Transformer
[ https://issues.apache.org/jira/browse/HIVE-22461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978710#comment-16978710 ] Hive QA commented on HIVE-22461: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986335/HIVE-22461.6.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17709 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.common.metrics.metrics2.TestCodahaleMetrics.testFileReporting (batchId=310) org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation (batchId=279) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19513/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19513/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19513/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12986335 - PreCommit-HIVE-Build > NPE Metastore Transformer > - > > Key: HIVE-22461 > URL: https://issues.apache.org/jira/browse/HIVE-22461 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.1.2 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen >Priority: Major > Attachments: HIVE-22461.1.patch, HIVE-22461.5.patch, > HIVE-22461.6.patch > > > The stack looks as following: > {noformat} > 2019-10-08 18:09:12,198 INFO > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: > [pool-6-thread-328]: Starting translation for processor > Hiveserver2#3.1.2000.7.0.2.0...@vc0732.halxg.cloudera.com on list 1 > 2019-10-08 18:09:12,198 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-328]: > java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transform(MetastoreDefaultTransformer.java:99) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3391) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3352) > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy28.get_table_req(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16633) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16617) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > 
org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2019-10-08 18:09:12,199 ERROR org.apache.thrift.server.TThreadPoolServer: > [pool-6-thread-328]: Error occurred during processing of message. > java.lang.NullPointerException: null > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transform(MetastoreDefaultTransformer.java:99) >
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Status: Open (was: Patch Available) > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Attachment: HIVE-22505.5.patch > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Status: Patch Available (was: Open) > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22461) NPE Metastore Transformer
[ https://issues.apache.org/jira/browse/HIVE-22461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978677#comment-16978677 ] Hive QA commented on HIVE-22461: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 21s{color} | {color:blue} standalone-metastore/metastore-server in master has 178 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} standalone-metastore/metastore-server: The patch generated 0 new + 372 unchanged - 35 fixed = 372 total (was 407) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} standalone-metastore/metastore-server generated 0 new + 177 unchanged - 1 fixed = 177 total (was 178) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19513/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19513/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > NPE Metastore Transformer > - > > Key: HIVE-22461 > URL: https://issues.apache.org/jira/browse/HIVE-22461 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.1.2 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen >Priority: Major > Attachments: HIVE-22461.1.patch, HIVE-22461.5.patch, > HIVE-22461.6.patch > > > The stack looks as following: > {noformat} > 2019-10-08 18:09:12,198 INFO > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: > [pool-6-thread-328]: Starting translation for processor > Hiveserver2#3.1.2000.7.0.2.0...@vc0732.halxg.cloudera.com on list 1 > 2019-10-08 18:09:12,198 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-328]: > java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transform(MetastoreDefaultTransformer.java:99) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3391) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3352) > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at
[jira] [Commented] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978668#comment-16978668 ] Naresh P R commented on HIVE-22478: --- [~szita] I am able to repro this issue with the testcase in the attached 3.patch. > Import command fails from lower version to higher version when > hive.strict.managed.tables enabled > - > > Key: HIVE-22478 > URL: https://issues.apache.org/jira/browse/HIVE-22478 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22478.1.patch, HIVE-22478.2.patch, > HIVE-22478.3.patch > > > Created a non-ACID managed ORC table in a lower version; after inserting some > records, exported the table. > In the higher version where hive.strict.managed.tables=true, > 1) On the first attempt, the ACID table is created, but LoadTable fails > with the exception below > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MoveTask : Write > id is not set in the config by open txn task for migration > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:400) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2712) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2383) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747) > at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226){code} > 2) On the second attempt, as the table already exists as ACID, > ImportSemanticAnalyzer creates a writeId for the ACID table & the LoadTable > command is successful. 
[jira] [Updated] (HIVE-22478) Import command fails from lower version to higher version when hive.strict.managed.tables enabled
[ https://issues.apache.org/jira/browse/HIVE-22478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naresh P R updated HIVE-22478: -- Attachment: HIVE-22478.3.patch > Import command fails from lower version to higher version when > hive.strict.managed.tables enabled > - > > Key: HIVE-22478 > URL: https://issues.apache.org/jira/browse/HIVE-22478 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Naresh P R >Assignee: Naresh P R >Priority: Major > Attachments: HIVE-22478.1.patch, HIVE-22478.2.patch, > HIVE-22478.3.patch > > > Created a non-ACID managed ORC table in a lower version; after inserting some > records, exported the table. > In the higher version where hive.strict.managed.tables=true, > 1) On the first attempt, the ACID table is created, but LoadTable fails > with the exception below > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MoveTask : Write > id is not set in the config by open txn task for migration > at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:400) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2712) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2383) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2055) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1753) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1747) > at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226){code} > 2) On the second attempt, as the table already exists as ACID, > ImportSemanticAnalyzer creates a writeId for the ACID table & the LoadTable > command is successful. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar
[ https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978662#comment-16978662 ] Hive QA commented on HIVE-22483: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986334/HIVE-22483.04.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17711 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb_schq] (batchId=177) org.apache.hadoop.hive.metastore.TestGetPartitionsUsingProjectionAndFilterSpecs.testGetPartitionsUsingValuesWithJDO (batchId=228) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19512/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19512/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19512/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12986334 - PreCommit-HIVE-Build > Vectorize UDF datetime_legacy_hybrid_calendar > - > > Key: HIVE-22483 > URL: https://issues.apache.org/jira/browse/HIVE-22483 > Project: Hive > Issue Type: Improvement >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, > HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, > HIVE-22483.04.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Status: Open (was: Patch Available) > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection
[ https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Panagiotis Garefalakis updated HIVE-22505: -- Status: Patch Available (was: Open) > ClassCastException caused by wrong Vectorized operator selection > > > Key: HIVE-22505 > URL: https://issues.apache.org/jira/browse/HIVE-22505 > Project: Hive > Issue Type: Bug > Components: Vectorization >Reporter: Panagiotis Garefalakis >Assignee: Panagiotis Garefalakis >Priority: Critical > Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, > HIVE-22505.4.patch, HIVE-22505.patch, query_error.out, > query_vector_explain.out, vectorized_join.q > > > VectorMapJoinOuterFilteredOperator does not currently support full outer > joins but using the current Vectorizer logic it can be selected when there > is a filter involved. This can make queries fail with ClassCastException when > their data and metadata in the VectorMapJoinOuterFilteredOperator do not > match. > The query attached demonstrates the issue and the log attached shows the > java.lang.ClassCastException -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22476) Hive datediff function provided inconsistent results when hive.fetch.task.conversion is set to none
[ https://issues.apache.org/jira/browse/HIVE-22476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Slim Bouguerra updated HIVE-22476: -- Attachment: HIVE-22476.7.patch > Hive datediff function provided inconsistent results when > hive.fetch.task.conversion is set to none > --- > > Key: HIVE-22476 > URL: https://issues.apache.org/jira/browse/HIVE-22476 > Project: Hive > Issue Type: Bug >Reporter: Slim Bouguerra >Assignee: Slim Bouguerra >Priority: Major > Attachments: HIVE-22476.2.patch, HIVE-22476.3.patch, > HIVE-22476.5.patch, HIVE-22476.6.patch, HIVE-22476.7.patch, HIVE-22476.7.patch > > > The actual issue stems from the different date parsers used by various parts of > the engine. > The fetch task uses udfdatediff via {code} > org.apache.hadoop.hive.ql.udf.generic.GenericUDFToDate{code} while the > vectorized LLAP execution uses {code}VectorUDFDateDiffScalarCol{code}. > This fix is meant to be minimally intrusive and adds more support to > GenericUDFToDate by enhancing the parser. > In the longer term it would be better to use one parser for all the operators. > Thanks [~Rajkumar Singh] for the repro example > {code} > create external table testdatediff(datetimecol string) stored as orc; > insert into testdatediff values ('2019-09-09T10:45:49+02:00'),('2019-07-24'); > select datetimecol from testdatediff where datediff(cast(current_timestamp as > string), datetimecol)<183; > set hive.fetch.task.conversion=none; > select datetimecol from testdatediff where datediff(cast(current_timestamp as > string), datetimecol)<183; > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
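The parser mismatch behind HIVE-22476 is easy to demonstrate outside Hive: a strict `yyyy-MM-dd` parser rejects the first inserted value (`2019-09-09T10:45:49+02:00`) while a lenient ISO-8601 parser accepts it, so the two code paths compute different datediff results. The sketch below uses plain `java.time` stand-ins, not the actual `GenericUDFToDate` or `VectorUDFDateDiffScalarCol` internals.

```java
import java.time.LocalDate;
import java.time.OffsetDateTime;
import java.time.format.DateTimeParseException;

public class DateParseSketch {
    // Strict parser: only accepts yyyy-MM-dd; anything else becomes null,
    // which would make a datediff over that row come out NULL.
    static LocalDate strictParse(String s) {
        try {
            return LocalDate.parse(s);
        } catch (DateTimeParseException e) {
            return null;
        }
    }

    // Lenient parser: also understands an ISO timestamp with a zone offset.
    static LocalDate lenientParse(String s) {
        try {
            return OffsetDateTime.parse(s).toLocalDate();
        } catch (DateTimeParseException e) {
            return strictParse(s);
        }
    }

    public static void main(String[] args) {
        String ts = "2019-09-09T10:45:49+02:00";
        System.out.println("strict=" + strictParse(ts));   // strict=null
        System.out.println("lenient=" + lenientParse(ts)); // lenient=2019-09-09
    }
}
```

Unifying on one parser for all operators, as the issue suggests, removes this class of row-by-row disagreement.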
[jira] [Updated] (HIVE-20776) Run HMS filterHooks on server-side in addition to client-side
[ https://issues.apache.org/jira/browse/HIVE-20776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas Nair updated HIVE-20776: --- Fix Version/s: 4.0.0 Assignee: Na Li Resolution: Fixed Status: Resolved (was: Patch Available) Marking as resolved since this is committed to master > Run HMS filterHooks on server-side in addition to client-side > - > > Key: HIVE-20776 > URL: https://issues.apache.org/jira/browse/HIVE-20776 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Karthik Manamcheri >Assignee: Na Li >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20776.001.patch, HIVE-20776.003.patch, > HIVE-20776.004.patch, HIVE-20776.005.patch, HIVE-20776.006.patch, > HIVE-20776.007.patch, HIVE-20776.007.patch, HIVE-20776.008.patch, > HIVE-20776.009.patch, HIVE-20776.009.patch, HIVE-20776.010.patch, > HIVE-20776.011.patch, HIVE-20776.011.patch, HIVE-20776.012.patch, > HIVE-20776.013.patch, HIVE-20776.014.patch, HIVE-20776.015-branch-3.patch, > HIVE-20776.015.branch-3.patch, HIVE-20776.015.patch, > HIVE-20776.015_a.branch-3.patch, HIVE-20776.016-branch-3.patch, > HIVE-20776.017-branch-3.patch > > > In HMS, I noticed that all the filter hooks are applied on the client side > (in HiveMetaStoreClient.java). Is there any reason why we can't apply the > filters on the server-side? > Motivation: Some newer Apache projects such as Kudu use HMS for metadata > storage. Kudu is not completely Java-based and there are interaction points > where they have C++ clients. In such cases, it would be ideal to have > consistent behavior from the HMS side as far as filters, etc. are concerned. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar
[ https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978629#comment-16978629 ] Hive QA commented on HIVE-22483: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 19s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} ql: The patch generated 0 new + 33 unchanged - 4 fixed = 33 total (was 37) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19512/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19512/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Vectorize UDF datetime_legacy_hybrid_calendar > - > > Key: HIVE-22483 > URL: https://issues.apache.org/jira/browse/HIVE-22483 > Project: Hive > Issue Type: Improvement >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, > HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, > HIVE-22483.04.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22517) Sysdb related qtests also output the sysdb sql commands to q.out
[ https://issues.apache.org/jira/browse/HIVE-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978591#comment-16978591 ] Hive QA commented on HIVE-22517: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 17s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 42s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 55s{color} | {color:blue} itests/util in master has 53 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 37s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 4s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 78m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19511/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-19511/yetus/whitespace-eol.txt | | modules | C: ql . itests/util U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19511/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Sysdb related qtests also output the sysdb sql commands to q.out > > > Key: HIVE-22517 > URL: https://issues.apache.org/jira/browse/HIVE-22517 > Project: Hive > Issue Type: Improvement > Components: Test >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-22517.01.patch > > > it would be better to not have it on the outputs -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22327) Repl: Ignore read-only transactions in notification log
[ https://issues.apache.org/jira/browse/HIVE-22327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978574#comment-16978574 ] mahesh kumar behera commented on HIVE-22327: [^HIVE-22327.10.patch] looks fine to me. +1 > Repl: Ignore read-only transactions in notification log > --- > > Key: HIVE-22327 > URL: https://issues.apache.org/jira/browse/HIVE-22327 > Project: Hive > Issue Type: Improvement > Components: repl >Reporter: Gopal Vijayaraghavan >Assignee: Denys Kuzmenko >Priority: Major > Attachments: HIVE-22327.1.patch, HIVE-22327.10.patch, > HIVE-22327.2.patch, HIVE-22327.3.patch, HIVE-22327.4.patch, > HIVE-22327.5.patch, HIVE-22327.6.patch, HIVE-22327.7.patch, > HIVE-22327.8.patch, HIVE-22327.9.patch > > > Read txns need not be replicated. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22521) Both Driver and SessionState has a userName
[ https://issues.apache.org/jira/browse/HIVE-22521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978569#comment-16978569 ] Zoltan Haindrich commented on HIVE-22521: - I think at any time we should have only 1 userName thing...I think we should rely on SessionState and retire the one at the Driver level - or at least short-circuit it back to the SessionState based value > Both Driver and SessionState has a userName > --- > > Key: HIVE-22521 > URL: https://issues.apache.org/jira/browse/HIVE-22521 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > This caused some confusing behaviour to me...especially when the 2 values > were different. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-22521) Both Driver and SessionState has a userName
[ https://issues.apache.org/jira/browse/HIVE-22521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reassigned HIVE-22521: --- > Both Driver and SessionState has a userName > --- > > Key: HIVE-22521 > URL: https://issues.apache.org/jira/browse/HIVE-22521 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > > This caused some confusing behaviour to me...especially when the 2 values > were different. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22511) Fix case of Month token in datetime to string conversion
[ https://issues.apache.org/jira/browse/HIVE-22511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage updated HIVE-22511: - Attachment: HIVE-22511.01.patch Status: Patch Available (was: Open) > Fix case of Month token in datetime to string conversion > > > Key: HIVE-22511 > URL: https://issues.apache.org/jira/browse/HIVE-22511 > Project: Hive > Issue Type: Bug >Reporter: Gabor Kaszab >Assignee: Karen Coppage >Priority: Major > Attachments: HIVE-22511.01.patch > > > Currently Hive doesn't allow month tokens with weird spelling like 'MONth', > 'mONTH' etc. However, Oracle does and Hive should follow that approach. > The rules: > - If the first letter is lowercase then the output is lowercase: 'mONTH' -> > 'may' > - If the first two letters are uppercase then the output is uppercase: > 'MOnth' -> 'MAY' > - If the first letter is uppercase and the second is lowercase then the > output is capitalized: 'Month' -> 'May'. > Oracle: > {code:java} > select to_char(to_timestamp('2019-05-10', 'YYYY-MM-DD'), 'MOnth YYYY') from > DUAL; > MAY 2019 > select to_char(to_timestamp('2019-05-10', 'YYYY-MM-DD'), 'mONTH YYYY') from > DUAL; > may 2019 > select to_char(to_timestamp('2019-05-10', 'YYYY-MM-DD'), 'MoNTH YYYY') from > DUAL; > May 2019 > {code} > Please check the same for 'Name of the day' tokens. -- This message was sent by Atlassian Jira (v8.3.4#803005)
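The three casing rules listed in the description can be sketched as a small function. This is a hypothetical Python illustration of the rules only, not Hive's actual formatter code:

```python
def apply_token_case(token, name):
    # Oracle-style case-following for spelled-out datetime tokens:
    #   first letter lowercase           -> all lowercase   ('mONTH' -> 'may')
    #   first two letters uppercase      -> all uppercase   ('MOnth' -> 'MAY')
    #   first upper, second lower        -> capitalized     ('Month' -> 'May')
    if token[0].islower():
        return name.lower()
    if len(token) > 1 and token[1].isupper():
        return name.upper()
    return name.capitalize()

print(apply_token_case("mONTH", "May"))  # may
print(apply_token_case("MOnth", "May"))  # MAY
print(apply_token_case("MoNTH", "May"))  # May
print(apply_token_case("Month", "May"))  # May
```

Note that only the first two characters of the token matter, which is exactly why spellings like 'MONth' and 'MOnth' both map to uppercase output.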
[jira] [Commented] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges
[ https://issues.apache.org/jira/browse/HIVE-22512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978510#comment-16978510 ] Hive QA commented on HIVE-22512: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986330/HIVE-22512.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17709 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation (batchId=279) org.apache.hive.minikdc.TestJdbcWithMiniKdc.testTokenAuth (batchId=301) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19510/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19510/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19510/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12986330 - PreCommit-HIVE-Build > Use direct SQL to fetch column privileges in refreshPrivileges > -- > > Key: HIVE-22512 > URL: https://issues.apache.org/jira/browse/HIVE-22512 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 4.0.0 >Reporter: Ashutosh Bapat >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch > > Time Spent: 10m > Remaining Estimate: 0h > > refreshPrivileges() calls listTableAllColumnGrants() to fetch the column > level privileges. 
The latter function retrieves > the individual column objects by firing one query per column privilege object, thus causing the backend db > to be swamped by these queries when PrivilegeSynchronizer is run. > PrivilegeSynchronizer synchronizes privileges of all the databases, tables > and columns, so the backend db can get swamped really badly when there are > thousands of tables with hundreds of columns. > The output of listTableAllColumnGrants() is not used completely, so all the > column objects the PM retrieves are wasted anyway. > Fix this by using direct SQL to fetch column privileges. -- This message was sent by Atlassian Jira (v8.3.4#803005)
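The one-query-per-privilege-object pattern described above is the classic N+1 query problem that "direct SQL" avoids. A hedged sketch using sqlite3 with a made-up schema (the table and column names are hypothetical, not the real HMS metastore schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE col_privs (tbl TEXT, col TEXT, principal TEXT, priv TEXT);
INSERT INTO col_privs VALUES
  ('t1', 'a', 'alice', 'SELECT'),
  ('t1', 'b', 'bob',   'SELECT'),
  ('t2', 'a', 'alice', 'SELECT');
""")

def n_plus_one(table, cols):
    # ORM-style access pattern: one round trip per column privilege,
    # which swamps the backend db as table/column counts grow.
    out = []
    for c in cols:
        out += conn.execute(
            "SELECT principal, priv FROM col_privs WHERE tbl=? AND col=?",
            (table, c)).fetchall()
    return out

def direct_sql(table):
    # Direct-SQL pattern: a single query fetches every column privilege.
    return conn.execute(
        "SELECT principal, priv FROM col_privs WHERE tbl=?",
        (table,)).fetchall()

# Same result set, one query instead of one per column.
assert sorted(n_plus_one('t1', ['a', 'b'])) == sorted(direct_sql('t1'))
```

With thousands of tables and hundreds of columns each, the difference is millions of round trips versus a handful, which matches the motivation given in the issue.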
[jira] [Commented] (HIVE-22506) Read-only transactions feature flag
[ https://issues.apache.org/jira/browse/HIVE-22506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978503#comment-16978503 ] Denys Kuzmenko commented on HIVE-22506: --- Thanks [~pvary]. Added test when feature is turned off. > Read-only transactions feature flag > --- > > Key: HIVE-22506 > URL: https://issues.apache.org/jira/browse/HIVE-22506 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Denys Kuzmenko >Assignee: Denys Kuzmenko >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22506.1.patch, HIVE-22506.2.patch > > > Introduce a feature flag, so that read-only transaction functionality could > be conditionally turned on/off. -- This message was sent by Atlassian Jira (v8.3.4#803005)
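The flag's intended behavior — gate the new read-only-transaction path, falling back to a regular read/write transaction when the feature is off — can be sketched as follows. This is an illustrative sketch; the config key `hive.txn.readonly.enabled` and the dict-based plumbing are assumptions standing in for the real HiveConf wiring:

```python
# Hypothetical stand-in for HiveConf; the patch presumably defaults the
# feature flag to off so existing behavior is unchanged.
FLAGS = {"hive.txn.readonly.enabled": False}

def open_txn(statement):
    # A read-only txn is only opened when the statement is a read AND
    # the feature flag is enabled; otherwise use a regular txn.
    read_only = statement.strip().lower().startswith("select")
    if read_only and not FLAGS["hive.txn.readonly.enabled"]:
        read_only = False
    return {"readOnly": read_only}

assert open_txn("SELECT * FROM t")["readOnly"] is False   # flag off: old behavior
FLAGS["hive.txn.readonly.enabled"] = True
assert open_txn("SELECT * FROM t")["readOnly"] is True    # flag on: read-only txn
assert open_txn("INSERT INTO t VALUES (1)")["readOnly"] is False
```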
[jira] [Updated] (HIVE-22506) Read-only transactions feature flag
[ https://issues.apache.org/jira/browse/HIVE-22506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denys Kuzmenko updated HIVE-22506: -- Attachment: HIVE-22506.2.patch > Read-only transactions feature flag > --- > > Key: HIVE-22506 > URL: https://issues.apache.org/jira/browse/HIVE-22506 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Denys Kuzmenko >Assignee: Denys Kuzmenko >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22506.1.patch, HIVE-22506.2.patch > > > Introduce a feature flag, so that read-only transaction functionality could > be conditionally turned on/off. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges
[ https://issues.apache.org/jira/browse/HIVE-22512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978466#comment-16978466 ] Hive QA commented on HIVE-22512: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 20s{color} | {color:blue} standalone-metastore/metastore-server in master has 178 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 7 new + 615 unchanged - 6 fixed = 622 total (was 621) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 16m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19510/dev-support/hive-personality.sh | | git revision | master / df8e185 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-19510/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt | | modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19510/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Use direct SQL to fetch column privileges in refreshPrivileges > -- > > Key: HIVE-22512 > URL: https://issues.apache.org/jira/browse/HIVE-22512 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 4.0.0 >Reporter: Ashutosh Bapat >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch > > Time Spent: 10m > Remaining Estimate: 0h > > refreshPrivileges() calls listTableAllColumnGrants() to fetch the column > level privileges. The latter function retrieves the individual column objects > by firing one query per column privilege object, thus causing the backend db > to be swamped by these queries when PrivilegeSynchronizer is run. > PrivilegeSynchronizer synchronizes privileges of all the databases, tables > and columns, so the backend db can get swamped really badly when there are > thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the > column objects the PM retrieves are wasted anyway. > Fix this by using direct SQL to fetch column privileges. -- This message was sent by Atlassian Jira
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Status: Patch Available (was: Open) > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The builtin > ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced > from the outside. If log events are generated at a very fast rate this queue > can grow large. > !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
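The memory growth comes from producers outrunning the writer while the executor's work queue has no capacity limit. A generic bounded-queue sketch of the fix direction — not the actual Hive patch, and the drop policy here is just one of several options (blocking or synchronous flushing are others):

```python
import queue
import threading

MAX_PENDING = 3            # bounded capacity, unlike the builtin executor's queue
events = queue.Queue(maxsize=MAX_PENDING)
dropped = 0

def submit(event):
    # Instead of letting the queue grow without bound, drop the event
    # when the writer cannot keep up, and count the drop.
    global dropped
    try:
        events.put_nowait(event)
    except queue.Full:
        dropped += 1

def writer():
    # Single writer thread draining the queue, like the hook's writer task.
    while True:
        ev = events.get()
        if ev is None:          # sentinel: shut down
            break
        # ... serialize ev to the event log here ...

# Producer runs before the writer starts: 5 events, room for only 3.
for i in range(5):
    submit(i)
print(dropped)  # 2 -> bounded memory at the cost of dropped events

t = threading.Thread(target=writer)
t.start()
events.put(None)   # blocks until the writer makes room, then shuts it down
t.join()
```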
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Attachment: (was: HIVE-22514.1.patch) > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The builtin > ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced > from the outside. If log events are generated at a very fast rate this queue > can grow large. > !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Attachment: HIVE-22514.1.patch > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The builtin > ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced > from the outside. If log events are generated at a very fast rate this queue > can grow large. > !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Status: Open (was: Patch Available) > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The builtin > ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced > from the outside. If log events are generated at a very fast rate this queue > can grow large. > !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22461) NPE Metastore Transformer
[ https://issues.apache.org/jira/browse/HIVE-22461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978453#comment-16978453 ] Yongzhi Chen commented on HIVE-22461: - Attach the patch again (patch 6) to re-run the tests > NPE Metastore Transformer > - > > Key: HIVE-22461 > URL: https://issues.apache.org/jira/browse/HIVE-22461 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.1.2 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen >Priority: Major > Attachments: HIVE-22461.1.patch, HIVE-22461.5.patch, > HIVE-22461.6.patch > > > The stack looks as following: > {noformat} > 2019-10-08 18:09:12,198 INFO > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: > [pool-6-thread-328]: Starting translation for processor > Hiveserver2#3.1.2000.7.0.2.0...@vc0732.halxg.cloudera.com on list 1 > 2019-10-08 18:09:12,198 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-328]: > java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transform(MetastoreDefaultTransformer.java:99) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3391) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3352) > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy28.get_table_req(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16633) > at > 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16617) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2019-10-08 18:09:12,199 ERROR org.apache.thrift.server.TThreadPoolServer: > [pool-6-thread-328]: Error occurred during processing of message. > java.lang.NullPointerException: null > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transform(MetastoreDefaultTransformer.java:99) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3391) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3352) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) ~[?:?] 
> at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_141] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_141] > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at com.sun.proxy.$Proxy28.get_table_req(Unknown Source) ~[?:?] > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16633) >
[jira] [Updated] (HIVE-22461) NPE Metastore Transformer
[ https://issues.apache.org/jira/browse/HIVE-22461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-22461: Attachment: HIVE-22461.6.patch > NPE Metastore Transformer > - > > Key: HIVE-22461 > URL: https://issues.apache.org/jira/browse/HIVE-22461 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.1.2 >Reporter: Yongzhi Chen >Assignee: Yongzhi Chen >Priority: Major > Attachments: HIVE-22461.1.patch, HIVE-22461.5.patch, > HIVE-22461.6.patch > > > The stack looks as following: > {noformat} > 2019-10-08 18:09:12,198 INFO > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer: > [pool-6-thread-328]: Starting translation for processor > Hiveserver2#3.1.2000.7.0.2.0...@vc0732.halxg.cloudera.com on list 1 > 2019-10-08 18:09:12,198 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-6-thread-328]: > java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transform(MetastoreDefaultTransformer.java:99) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3391) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3352) > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy28.get_table_req(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16633) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16617) > at 
org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876) > at > org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > 2019-10-08 18:09:12,199 ERROR org.apache.thrift.server.TThreadPoolServer: > [pool-6-thread-328]: Error occurred during processing of message. > java.lang.NullPointerException: null > at > org.apache.hadoop.hive.metastore.MetastoreDefaultTransformer.transform(MetastoreDefaultTransformer.java:99) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getTableInternal(HiveMetaStore.java:3391) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table_req(HiveMetaStore.java:3352) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source) ~[?:?] 
> at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_141] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_141] > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] > at com.sun.proxy.$Proxy28.get_table_req(Unknown Source) ~[?:?] > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table_req.getResult(ThriftHiveMetastore.java:16633) > ~[hive-exec-3.1.2000.7.0.2.0-59.jar:3.1.2000.7.0.2.0-59] >
[jira] [Updated] (HIVE-22461) NPE Metastore Transformer
[ https://issues.apache.org/jira/browse/HIVE-22461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-22461: Attachment: (was: HIVE-22461.6.patch)
[jira] [Updated] (HIVE-22461) NPE Metastore Transformer
[ https://issues.apache.org/jira/browse/HIVE-22461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongzhi Chen updated HIVE-22461: Attachment: (was: HIVE-22461.7.patch)
[jira] [Commented] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978445#comment-16978445 ] Hive QA commented on HIVE-22514: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12986326/HIVE-22514.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17709 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=112) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/19509/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19509/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19509/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12986326 - PreCommit-HIVE-Build > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The built-in > ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced > from the outside. If log events are generated at a very fast rate, this queue > can grow large. 
> !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
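The unbounded-queue behavior described in the issue can be sketched outside Hive: ScheduledThreadPoolExecutor is hard-wired to its internal DelayedWorkQueue, which has no capacity limit, while a plain ThreadPoolExecutor accepts a bounded queue plus a rejection policy that applies backpressure to producers. This is a minimal illustration, not Hive's actual hook code; the class name and capacity numbers are invented for the demo.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueGrowthDemo {
    public static void main(String[] args) throws InterruptedException {
        // ScheduledThreadPoolExecutor: every schedule() call is accepted, so
        // the internal queue grows without bound when producers outpace the
        // single writer thread.
        ScheduledThreadPoolExecutor scheduled = new ScheduledThreadPoolExecutor(1);
        for (int i = 0; i < 10_000; i++) {
            scheduled.schedule(() -> { }, 1, TimeUnit.HOURS); // never rejected
        }
        // Prints 10000: all tasks are buffered in memory.
        System.out.println("pending in scheduled executor: " + scheduled.getQueue().size());
        scheduled.shutdownNow();

        // Plain ThreadPoolExecutor with a bounded queue: once 4 tasks are
        // pending, CallerRunsPolicy makes the submitting thread execute the
        // task itself, throttling event production instead of buffering it.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(4),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < 10_000; i++) {
            bounded.execute(() -> { }); // bounded memory, backpressure
        }
        bounded.shutdown();
        bounded.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

The trade-off is that CallerRunsPolicy slows the query thread down when the writer falls behind; dropping events under a size cap would be the alternative if logging latency must never leak into query latency.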
[jira] [Updated] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar
[ https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage updated HIVE-22483: - Attachment: HIVE-22483.04.patch Status: Patch Available (was: Open) > Vectorize UDF datetime_legacy_hybrid_calendar > - > > Key: HIVE-22483 > URL: https://issues.apache.org/jira/browse/HIVE-22483 > Project: Hive > Issue Type: Improvement >Reporter: Karen Coppage >Assignee: Karen Coppage >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, > HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, > HIVE-22483.04.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar
[ https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage updated HIVE-22483: - Status: Open (was: Patch Available)
[jira] [Commented] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978408#comment-16978408 ] Hive QA commented on HIVE-22514: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 37s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 20s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-19509/dev-support/hive-personality.sh | | git revision | master / ad2bb41 | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-19509/yetus/diff-checkstyle-ql.txt | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-19509/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HIVE-22517) Sysdb related qtests also output the sysdb sql commands to q.out
[ https://issues.apache.org/jira/browse/HIVE-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-22517: Attachment: HIVE-22517.01.patch > Sysdb related qtests also output the sysdb sql commands to q.out > > > Key: HIVE-22517 > URL: https://issues.apache.org/jira/browse/HIVE-22517 > Project: Hive > Issue Type: Improvement > Components: Test >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-22517.01.patch > > > it would be better to not have it on the outputs -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22517) Sysdb related qtests also output the sysdb sql commands to q.out
[ https://issues.apache.org/jira/browse/HIVE-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-22517: Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges
[ https://issues.apache.org/jira/browse/HIVE-22512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Bapat updated HIVE-22512: -- Attachment: HIVE-22512.02.patch Status: Patch Available (was: In Progress) Checkstyle comments fixed. Some checkstyle comments are pre-existing ones, which I haven't fixed. > Use direct SQL to fetch column privileges in refreshPrivileges > -- > > Key: HIVE-22512 > URL: https://issues.apache.org/jira/browse/HIVE-22512 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 4.0.0 >Reporter: Ashutosh Bapat >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch > > Time Spent: 10m > Remaining Estimate: 0h > > refreshPrivileges() calls listTableAllColumnGrants() to fetch the column > level privileges. The latter function retrieves the individual column objects > by firing one query per column privilege object, thus causing the backend db > to be swamped by these queries when PrivilegeSynchronizer is run. > PrivilegeSynchronizer synchronizes privileges of all the databases, tables > and columns, and thus the backend db can get swamped really badly when there are > thousands of tables with hundreds of columns. > The output of listTableAllColumnGrants() is not used completely, so all the > columns the PM has retrieved go to waste. > Fix this by using direct SQL to fetch column privileges. -- This message was sent by Atlassian Jira (v8.3.4#803005)
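The fix pattern described in the issue, one set-based statement instead of one statement per column, can be sketched with a stand-in DAO. This is an illustration of the access pattern only, not Hive's ObjectStore code; the method names, the query counter, and the SQL in the comment are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DirectSqlSketch {
    static int queriesIssued = 0; // stand-in for round trips to the backend db

    // The N+1 pattern: one query per column privilege object, as the
    // JDO-based listTableAllColumnGrants() path effectively does.
    static List<String> fetchPerColumn(List<String> cols) {
        List<String> out = new ArrayList<>();
        for (String c : cols) {
            queriesIssued++;           // one round trip per column
            out.add("priv(" + c + ")");
        }
        return out;
    }

    // The direct-SQL pattern: a single set-based statement, e.g. (hypothetical)
    // SELECT "COLUMN_NAME", "PRINCIPAL_NAME" FROM "TBL_COL_PRIVS" WHERE "TBL_ID" = ?
    static List<String> fetchDirect(List<String> cols) {
        queriesIssued++;               // one round trip total
        List<String> out = new ArrayList<>();
        for (String c : cols) out.add("priv(" + c + ")");
        return out;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("a", "b", "c", "d");
        queriesIssued = 0;
        fetchPerColumn(cols);
        int perColumn = queriesIssued;
        queriesIssued = 0;
        fetchDirect(cols);
        int direct = queriesIssued;
        System.out.println(perColumn + " vs " + direct); // prints "4 vs 1"
    }
}
```

With thousands of tables times hundreds of columns, the per-column variant multiplies round trips by the column count, which is exactly the swamping PrivilegeSynchronizer triggers.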
[jira] [Updated] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges
[ https://issues.apache.org/jira/browse/HIVE-22512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Bapat updated HIVE-22512: Status: In Progress (was: Patch Available)
[jira] [Reopened] (HIVE-17395) HiveServer2 parsing a command with a lot of "("
[ https://issues.apache.org/jira/browse/HIVE-17395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich reopened HIVE-17395: - > HiveServer2 parsing a command with a lot of "(" > --- > > Key: HIVE-17395 > URL: https://issues.apache.org/jira/browse/HIVE-17395 > Project: Hive > Issue Type: Bug > Components: Beeline, HiveServer2 >Affects Versions: 2.3.0 >Reporter: dan young >Priority: Major > > Hello, > We're seeing what appears to be the same issue that was outlined in > HIVE-15388 where the query parser spends a lot of time (never returns and I > need to kill the beeline process) parsing a command with a lot of "(" . I > tried this in both 2.2 and now 2.3. > Here's an example query (this is auto generated SQL BTW) in beeline that > never completes/parses, I end up just killing the beeline process. > It looks like something similar was addressed as part of HIVE-15388. Any > ideas on how to address this? write better SQL? patch? > Regards, > Dano > {noformat} > Connected to: Apache Hive (version 2.3.0) > Driver: Hive JDBC (version 2.3.0) > Transaction isolation: TRANSACTION_REPEATABLE_READ > Beeline version 2.3.0 by Apache Hive > 0: jdbc:hive2://localhost:1/test_db> SELECT > ((UNIX_TIMESTAMP(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP(CONCAT(ADD_MONTHS(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP), > 1),SUBSTRING(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > 
-1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP),11))), 'MM'))), > -3),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP(CONCAT(ADD_MONTHS(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP), > 1),SUBSTRING(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP),11))), 'MM'))),11)); > When I did a jstack on the HiveServer2, it appears the be stuck/running in > the HiveParser/antlr. 
> "e62658bd-5ea9-43c4-898f-3048d913f192 HiveServer2-Handler-Pool: Thread-96" > #96 prio=5 os_prio=0 tid=0x7fb78c366000 nid=0x4476 runnable > [0x7fb77d7bb000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser$DFA36.specialStateTransition(HiveParser_IdentifiersParser.java:31502) > at org.antlr.runtime.DFA.predict(DFA.java:80) > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.atomExpression(HiveParser_IdentifiersParser.java:6746) > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceFieldExpression(HiveParser_IdentifiersParser.java:6988) > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnaryPrefixExpression(HiveParser_IdentifiersParser.java:7324) > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnarySuffixExpression(HiveParser_IdentifiersParser.java:7380) > at >
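The jstack above points at ANTLR's DFA prediction; the underlying failure mode is exponential backtracking, where each nesting level multiplies the alternatives the parser re-tries before giving up. A toy backtracking matcher (unrelated to Hive's grammar, purely to show the shape of the blow-up) makes the growth visible by counting recursive attempts on the ambiguous pattern `(a|aa)*`:

```java
// Each 'a' can be consumed by either alternative, so on an input the pattern
// can never match, every combination of choices is retried before failure,
// and the number of attempts grows exponentially with input length.
public class BacktrackDemo {
    static long attempts = 0;

    static boolean match(String s, int pos) {
        attempts++;
        if (pos == s.length()) return true;
        // alternative 1: consume one 'a'
        if (s.charAt(pos) == 'a' && match(s, pos + 1)) return true;
        // alternative 2: consume two 'a's
        if (pos + 1 < s.length() && s.charAt(pos) == 'a'
                && s.charAt(pos + 1) == 'a' && match(s, pos + 2)) return true;
        return false; // backtrack
    }

    static long attemptsFor(int n) {
        attempts = 0;
        // n 'a's followed by a 'b' that the pattern can never match
        match("a".repeat(n) + "b", 0);
        return attempts;
    }

    public static void main(String[] args) {
        System.out.println(attemptsFor(10)); // hundreds of attempts
        System.out.println(attemptsFor(20)); // tens of thousands
        System.out.println(attemptsFor(40)); // hundreds of millions
    }
}
```

An auto-generated query with dozens of nested "(" hits the same wall inside a syntactic predicate, which is why the thread is runnable in `DFA.predict` yet never returns.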
[jira] [Resolved] (HIVE-17395) HiveServer2 parsing a command with a lot of "("
[ https://issues.apache.org/jira/browse/HIVE-17395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich resolved HIVE-17395. - Resolution: Fixed yes this was a duplicate of HIVE-18624 - as its caused by a synpred which is evaluating a function > HiveServer2 parsing a command with a lot of "(" > --- > > Key: HIVE-17395 > URL: https://issues.apache.org/jira/browse/HIVE-17395 > Project: Hive > Issue Type: Bug > Components: Beeline, HiveServer2 >Affects Versions: 2.3.0 >Reporter: dan young >Priority: Major > > Hello, > We're seeing what appears to be the same issue that was outlined in > HIVE-15388 where the query parser spends a lot of time (never returns and I > need to kill the beeline process) parsing a command with a lot of "(" . I > tried this in both 2.2 and now 2.3. > Here's an example query (this is auto generated SQL BTW) in beeline that > never completes/parses, I end up just killing the beeline process. > It looks like something similar was addressed as part of HIVE-15388. Any > ideas on how to address this? write better SQL? patch? 
> Regards, > Dano > {noformat} > Connected to: Apache Hive (version 2.3.0) > Driver: Hive JDBC (version 2.3.0) > Transaction isolation: TRANSACTION_REPEATABLE_READ > Beeline version 2.3.0 by Apache Hive > 0: jdbc:hive2://localhost:1/test_db> SELECT > ((UNIX_TIMESTAMP(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP(CONCAT(ADD_MONTHS(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP), > 1),SUBSTRING(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP),11))), 'MM'))), > -3),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP(CONCAT(ADD_MONTHS(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP), > 
1),SUBSTRING(CAST(CONCAT(CAST(YEAR(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 AS STRING), '-', > LPAD(CAST(((CAST(CEIL(MONTH(TIMESTAMP(CONCAT(ADD_MONTHS(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))), > -1),SUBSTRING(TIMESTAMP(DATE(TRUNC(TIMESTAMP('2012-04-20 > 00:00:00.0'), 'MM'))),11 / 3) AS INT) - 1) * 3) + 1 AS STRING), > 2, '0'), '-01 00:00:00') AS TIMESTAMP),11))), 'MM'))),11)); > When I did a jstack on the HiveServer2, it appears to be stuck/running in > the HiveParser/antlr. > "e62658bd-5ea9-43c4-898f-3048d913f192 HiveServer2-Handler-Pool: Thread-96" > #96 prio=5 os_prio=0 tid=0x7fb78c366000 nid=0x4476 runnable > [0x7fb77d7bb000] >java.lang.Thread.State: RUNNABLE > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser$DFA36.specialStateTransition(HiveParser_IdentifiersParser.java:31502) > at org.antlr.runtime.DFA.predict(DFA.java:80) > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.atomExpression(HiveParser_IdentifiersParser.java:6746) > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceFieldExpression(HiveParser_IdentifiersParser.java:6988) > at > org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceUnaryPrefixExpression(HiveParser_IdentifiersParser.java:7324) > at >
[jira] [Resolved] (HIVE-17395) HiveServer2 parsing a command with a lot of "("
[ https://issues.apache.org/jira/browse/HIVE-17395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich resolved HIVE-17395. - Resolution: Duplicate > HiveServer2 parsing a command with a lot of "(" > --- > > Key: HIVE-17395 > URL: https://issues.apache.org/jira/browse/HIVE-17395
[jira] [Updated] (HIVE-22513) Constant propagation of casted column in filter ops can cause incorrect results
[ https://issues.apache.org/jira/browse/HIVE-22513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ádám Szita updated HIVE-22513: -- Fix Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Constant propagation of casted column in filter ops can cause incorrect > results > --- > > Key: HIVE-22513 > URL: https://issues.apache.org/jira/browse/HIVE-22513 > Project: Hive > Issue Type: Bug > Components: Query Planning >Reporter: Ádám Szita >Assignee: Ádám Szita >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22513.0.patch, HIVE-22513.1.patch > > > This issue happens if CBO is disabled. > We should not be propagating constants if the corresponding > ExprNodeColumnDesc instance is wrapped inside a CAST operator as casting > might truncate information from the original column. > This can happen if we're using CAST in a WHERE clause, which will cause the > projected columns to be replaced in a SELECT operator. Their new value will > be the result of casting which could be a different value compared to that in > the original column: > {code:java} > set hive.cbo.enable=false; > set hive.fetch.task.conversion=more; --just for testing convenience > create table testtb (id string); > insert into testtb values('2019-11-05 01:01:11'); > select id, CAST(id AS VARCHAR(10)) from testtb where CAST(id AS VARCHAR(9)) = > '2019-11-0'; > +++ > | id | _c1 | > +++ > | 2019-11-0 | 2019-11-0 | > +++ > 1 row selected (0.168 seconds) > -- VS expected: 2019-11-05 01:01:11 | 2019-11-05 {code} > Which types of casting (from and to which types) cause information loss > is hard to properly keep track of, and I don't think it should be taken > into consideration when deciding whether or not to propagate a constant. > Rather than adding a big and potentially convoluted and fragile check for > this, I propose to prevent constant mappings from being spawned out of CASTed > columns. -- This message was sent by Atlassian Jira (v8.3.4#803005)
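The failure mode described in HIVE-22513 can be illustrated outside Hive with a few lines of plain Java. This is only a sketch: the `castVarchar` helper below is a hypothetical stand-in for Hive's `CAST(x AS VARCHAR(n))` semantics, not actual Hive code.

```java
// Why folding the filter constant back into projections of the column is
// unsafe: CAST(id AS VARCHAR(9)) truncates, so equality on the cast result
// does not imply equality on the original column value.
public class CastTruncation {
    // Hypothetical stand-in for CAST(x AS VARCHAR(n)): truncate to n chars.
    static String castVarchar(String x, int n) {
        return x.length() <= n ? x : x.substring(0, n);
    }

    public static void main(String[] args) {
        String id = "2019-11-05 01:01:11";
        String constant = "2019-11-0";
        // The WHERE predicate matches the row...
        System.out.println(castVarchar(id, 9).equals(constant)); // true
        // ...but substituting the constant for the column changes the
        // projected value, which is the incorrect result seen above:
        System.out.println(id.equals(constant));                 // false
    }
}
```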
[jira] [Commented] (HIVE-22513) Constant propagation of casted column in filter ops can cause incorrect results
[ https://issues.apache.org/jira/browse/HIVE-22513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978391#comment-16978391 ] Ádám Szita commented on HIVE-22513: --- Committed to master. Thanks for reviewing Peter and Zoltan. Note: Before committing I've made the small amend of omitting {{--! qt:dataset:src}} as per Peter's comment. (I successfully re-ran the test locally.) > Constant propagation of casted column in filter ops can cause incorrect > results > --- > > Key: HIVE-22513 > URL: https://issues.apache.org/jira/browse/HIVE-22513 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22428) Remove superfluous "Failed to get database" WARN Logging in ObjectStore
[ https://issues.apache.org/jira/browse/HIVE-22428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978389#comment-16978389 ] Ashutosh Bapat commented on HIVE-22428: --- Looks ok. A slight suggestion "Thread Stack Trace for debugging (Not an Error)". Somehow indicate that this is a debug output. > Remove superfluous "Failed to get database" WARN Logging in ObjectStore > --- > > Key: HIVE-22428 > URL: https://issues.apache.org/jira/browse/HIVE-22428 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22428.1.patch > > > In my testing, I get lots of logs like this: > {code:none} > Line 26319: 2019-10-28T21:09:52,134 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.compdb, returning > NoSuchObjectException > Line 26327: 2019-10-28T21:09:52,135 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.compdb, returning > NoSuchObjectException > Line 26504: 2019-10-28T21:09:52,600 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.tstatsfast, returning > NoSuchObjectException > Line 26519: 2019-10-28T21:09:52,606 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.tstatsfast, returning > NoSuchObjectException > Line 26695: 2019-10-28T21:09:52,922 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.createDb, returning > NoSuchObjectException > Line 26703: 2019-10-28T21:09:52,923 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.createDb, returning > NoSuchObjectException > Line 26763: 2019-10-28T21:09:52,936 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.compdb, returning > NoSuchObjectException > Line 26778: 2019-10-28T21:09:52,939 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.compdb, 
returning > NoSuchObjectException > Line 26963: 2019-10-28T21:09:53,273 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.db1, returning > NoSuchObjectException > Line 26978: 2019-10-28T21:09:53,276 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.db2, returning > NoSuchObjectException > Line 26986: 2019-10-28T21:09:53,277 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.db1, returning > NoSuchObjectException > Line 27018: 2019-10-28T21:09:53,300 WARN [pool-6-thread-5] > metastore.ObjectStore: Failed to get database hive.db2, returning > NoSuchObjectException > {code} > This is a superfluous log message. It might be pretty common for a database > to not exist if, for example, a user fat-fingers the name of the database. > The code also has the bad habit of log-and-throw. Just log or throw, not > both. > Since I'm looking at this class, touch up some of the other logging as well. -- This message was sent by Atlassian Jira (v8.3.4#803005)
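The "just log or throw, not both" point can be sketched as follows. The names here are illustrative, not the actual ObjectStore code: the method that detects the miss only throws, and the caller decides whether the miss deserves a WARN, a DEBUG line, or no log at all.

```java
import java.util.Map;

// Log-or-throw instead of log-and-throw: the lookup method throws on a
// miss but does not also emit a WARN line, so an expected condition (e.g.
// a fat-fingered database name) produces no log noise by itself.
public class LogOrThrow {
    static class NoSuchObjectException extends Exception {
        NoSuchObjectException(String m) { super(m); }
    }

    // Hypothetical in-memory stand-in for the metastore's database catalog.
    static final Map<String, String> DATABASES = Map.of("default", "hive.default");

    static String getDatabase(String name) throws NoSuchObjectException {
        String db = DATABASES.get(name);
        if (db == null) {
            // No LOG.warn here; the caller chooses how (or whether) to log.
            throw new NoSuchObjectException("database " + name + " not found");
        }
        return db;
    }
}
```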
[jira] [Updated] (HIVE-22513) Constant propagation of casted column in filter ops can cause incorrect results
[ https://issues.apache.org/jira/browse/HIVE-22513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ádám Szita updated HIVE-22513: -- Component/s: Query Planning > Constant propagation of casted column in filter ops can cause incorrect > results > --- > > Key: HIVE-22513 > URL: https://issues.apache.org/jira/browse/HIVE-22513 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges
[ https://issues.apache.org/jira/browse/HIVE-22512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-22512: -- Labels: pull-request-available (was: ) > Use direct SQL to fetch column privileges in refreshPrivileges > -- > > Key: HIVE-22512 > URL: https://issues.apache.org/jira/browse/HIVE-22512 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 4.0.0 >Reporter: Ashutosh Bapat >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-22512.01.patch > > > refreshPrivileges() calls listTableAllColumnGrants() to fetch the column > level privileges. The latter function retrieves the individual column objects > by firing one query per column privilege object, thus causing the backend db > to be swamped by these queries when PrivilegeSynchronizer is run. > PrivilegeSynchronizer synchronizes privileges of all the databases, tables > and columns, so the backend db can get swamped really badly when there are > thousands of tables with hundreds of columns. > The output of listTableAllColumnGrants() is not used completely, so all the > column objects the PM has retrieved go to waste anyway. > Fix this by using direct SQL to fetch column privileges. -- This message was sent by Atlassian Jira (v8.3.4#803005)
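The general shape of the fix is to replace the one-query-per-privilege (N+1) pattern with a single batched statement. The sketch below uses assumed table and column names, not the real metastore schema or the actual direct-SQL code in the patch:

```java
import java.util.Collections;

// N+1 queries vs. one batched query: instead of issuing one SELECT per
// column privilege object, build a single IN-list query (with '?'
// placeholders for a PreparedStatement) fetching every row in one round trip.
public class BatchedPrivilegeQuery {
    static String batchedQuery(int rowCount) {
        String placeholders = String.join(", ", Collections.nCopies(rowCount, "?"));
        // Table/column names here are illustrative only.
        return "SELECT * FROM TBL_COL_PRIVS WHERE TBL_COLUMN_GRANT_ID IN ("
                + placeholders + ")";
    }

    public static void main(String[] args) {
        System.out.println(batchedQuery(3));
        // SELECT * FROM TBL_COL_PRIVS WHERE TBL_COLUMN_GRANT_ID IN (?, ?, ?)
    }
}
```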
[jira] [Work logged] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges
[ https://issues.apache.org/jira/browse/HIVE-22512?focusedWorklogId=346671=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346671 ] ASF GitHub Bot logged work on HIVE-22512: - Author: ASF GitHub Bot Created on: 20/Nov/19 12:42 Start Date: 20/Nov/19 12:42 Worklog Time Spent: 10m Work Description: ashutosh-bapat commented on pull request #847: HIVE-22512 : Use direct SQL to fetch column privileges in refreshPrivileges. URL: https://github.com/apache/hive/pull/847 @maheshk114 can you please review it? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 346671) Remaining Estimate: 0h Time Spent: 10m > Use direct SQL to fetch column privileges in refreshPrivileges > -- > > Key: HIVE-22512 > URL: https://issues.apache.org/jira/browse/HIVE-22512
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Attachment: (was: HIVE-22514.1.patch) > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Attila Magyar >Assignee: Attila Magyar >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-22514.1.patch, Screen Shot 2019-11-18 at 2.19.24 > PM.png > > > HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer > tasks and to periodically handle rollover. The builtin > ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced > from the outside. If log events are generated at a very fast rate this queue > can grow large. > !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101! -- This message was sent by Atlassian Jira (v8.3.4#803005)
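One way to bound that memory, sketched here as an assumption rather than the actual HIVE-22514 patch, is to submit the writer tasks through a plain `ThreadPoolExecutor` with a bounded queue and an explicit saturation policy, since `ScheduledThreadPoolExecutor` is hard-wired to its unbounded `DelayedWorkQueue`:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Bounded alternative to ScheduledThreadPoolExecutor's unbounded work queue:
// at most `capacity` pending tasks; beyond that, new tasks are dropped
// (DiscardPolicy) instead of growing the heap without limit.
public class BoundedEventExecutor {
    public static ThreadPoolExecutor create(int capacity) {
        return new ThreadPoolExecutor(
                1, 1,                                      // single writer thread
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(capacity),       // bounded queue
                new ThreadPoolExecutor.DiscardPolicy());   // drop when full
    }
}
```

Dropping events is one possible policy; `CallerRunsPolicy` (back-pressure on the producer) is another, depending on whether losing log events or slowing the query is preferable.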
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Status: Open (was: Patch Available) > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Attachment: HIVE-22514.1.patch > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory
[ https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Magyar updated HIVE-22514: - Status: Patch Available (was: Open) > HiveProtoLoggingHook might consume lots of memory > - > > Key: HIVE-22514 > URL: https://issues.apache.org/jira/browse/HIVE-22514 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-21226) Exclude read-only transactions from ValidTxnList
[ https://issues.apache.org/jira/browse/HIVE-21226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21226: -- Fix Version/s: 4.0.0 Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to master. Thanks for the patch [~dkuzmenko]! > Exclude read-only transactions from ValidTxnList > > > Key: HIVE-21226 > URL: https://issues.apache.org/jira/browse/HIVE-21226 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Denys Kuzmenko >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21226.1.patch, HIVE-21226.2.patch, > HIVE-21226.3.patch, HIVE-21226.4.patch, HIVE-21226.5.patch > > > Once HIVE-21114 is done, we should make sure that ValidTxnList doesn't > contain any read-only txns in the exceptions list since by definition there > is no data tagged with such txnid. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-22428) Remove superfluous "Failed to get database" WARN Logging in ObjectStore
[ https://issues.apache.org/jira/browse/HIVE-22428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978344#comment-16978344 ] David Mollitor edited comment on HIVE-22428 at 11/20/19 12:10 PM: -- Hey [~ashutosh.bapat], I'm sorry you found this confusing. This is the best mechanism in place to do stack traces in log messages. However, I understand your confusion and I would like to propose the following: {code} java.lang.Exception: null {code} The 'null' there is the exception message and it comes from this line: {code} LOG.debug("{}", message, new Exception()); {code} I can update to something like: {code} LOG.debug("{}", message, new Exception("Thread Stack Trace (Not an Error)")); {code} Does that clarify? was (Author: belugabehr): Hey [~ashutosh.bapat], I'm sorry you found this confusing. This is the best mechanism in place to do stack traces in log messages. However, I understand your confusion and I would like to propose the following: {code} java.lang.Exception: null {code} The 'null' there is the exception message and it comes from this line: {code} LOG.debug("{}", message, new Exception()); {code} I can update to something like: {code} LOG.debug("{}", message, new Exception("DEBUG - Dumping Stacktrace (Not an Error)")); {code} Does that clarify? 
> Remove superfluous "Failed to get database" WARN Logging in ObjectStore > --- > > Key: HIVE-22428 > URL: https://issues.apache.org/jira/browse/HIVE-22428 -- This message was sent by Atlassian Jira (v8.3.4#803005)
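The pattern under discussion can be sketched in isolation (illustrative only, not the patch itself): passing a fresh `Exception` as the last argument of a logging call is the standard SLF4J way to capture the current stack trace, and giving that exception an explicit message is exactly what removes the confusing `java.lang.Exception: null` header.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Capturing the current stack trace for a DEBUG log line: a new Exception
// records the stack at its construction site; an explicit message replaces
// the "null" that a message-less Exception would print as its header.
public class DebugTrace {
    public static String stackTraceOf(String label) {
        StringWriter sw = new StringWriter();
        new Exception(label).printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }

    public static void main(String[] args) {
        // First line: "java.lang.Exception: Thread Stack Trace (Not an Error)"
        System.out.print(stackTraceOf("Thread Stack Trace (Not an Error)"));
    }
}
```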