[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527568#comment-16527568 ] Hive QA commented on HIVE-16674: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12869451/HIVE-16674.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/12243/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/12243/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-12243/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-29 12:36:40.243 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-12243/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-29 12:36:40.246 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 2b0cb07 HIVE-20011: Move away from append mode in proto logging hook (Harish JP, reviewd by Anishek Agarwal) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 2b0cb07 HIVE-20011: Move away from append mode in proto logging hook (Harish JP, reviewd by Anishek Agarwal) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-29 12:36:40.830 + rm -rf ../yetus_PreCommit-HIVE-Build-12243 + mkdir ../yetus_PreCommit-HIVE-Build-12243 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-12243 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-12243/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: does not exist in index error: a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index error: metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: does not exist in index error: metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index error: src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: does not exist in index error: src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index The patch does not appear to apply with p0, p1, or p2 + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-12243 + exit 1 ' {noformat} This message is automatically 
generated. ATTACHMENT ID: 12869451 - PreCommit-HIVE-Build
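For context, "The patch does not appear to apply with p0, p1, or p2" means the apply script tried stripping zero, one, or two leading path components from the patch's file headers (as patch -p0/-p1/-p2 would) and none of the resulting paths matched the checked-out tree, likely because the metastore sources had since been reorganized on master. A hypothetical illustration of the -p<n> stripping (not the actual smart-apply-patch.sh logic; PatchStrip is an illustrative name):

```java
// Hypothetical illustration of `patch -p<n>` path stripping: drop the
// first n slash-separated components from a patch header path.
public class PatchStrip {
    static String strip(String path, int n) {
        String[] parts = path.split("/");
        StringBuilder sb = new StringBuilder();
        for (int i = n; i < parts.length; i++) {
            if (sb.length() > 0) sb.append('/');
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String header = "a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java";
        // -p1 strips the "a/" prefix that git diff adds; the apply script
        // gives up once no strip level yields a path present in the tree.
        System.out.println(strip(header, 1));
    }
}
```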
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525433#comment-16525433 ] Vineet Garg commented on HIVE-16674: Deferring this to 3.1.0 since the branch for 3.0.0 has been cut off. > Hive metastore JVM dumps core > - > > Key: HIVE-16674 > URL: https://issues.apache.org/jira/browse/HIVE-16674 > Project: Hive > Issue Type: Bug >Affects Versions: 1.2.1 > Environment: Hive-1.2.1 > Kerberos enabled cluster >Reporter: Vlad Gudikov >Assignee: Vlad Gudikov >Priority: Blocker > Fix For: 1.2.1, 3.2.0 > > Attachments: HIVE-16674.1.patch, HIVE-16674.patch > > > While trying to run a Hive query on 24 partitions executed on an external > table with large amount of partitions (4K+). I get an error > {code} > - org.apache.thrift.transport.TSaslTransport$SaslParticipant.wrap(byte[], > int, int) @bci=27, line=568 (Compiled frame) > - org.apache.thrift.transport.TSaslTransport.flush() @bci=52, line=492 > (Compiled frame) > - org.apache.thrift.transport.TSaslServerTransport.flush() @bci=1, line=41 > (Compiled frame) > - org.apache.thrift.ProcessFunction.process(int, > org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol, > java.lang.Object) @bci=236, line=55 (Compiled frame) > - > org.apache.thrift.TBaseProcessor.process(org.apache.thrift.protocol.TProtocol, > org.apache.thrift.protocol.TProtocol) @bci=126, line=39 (Compiled frame) > - > org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run() > @bci=15, line=690 (Compiled frame) > - > org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run() > @bci=1, line=685 (Compiled frame) > - > java.security.AccessController.doPrivileged(java.security.PrivilegedExceptionAction, > java.security.AccessControlContext) @bci=0 (Compiled frame) > - javax.security.auth.Subject.doAs(javax.security.auth.Subject, > java.security.PrivilegedExceptionAction) @bci=42, line=422 (Compiled frame) > - > 
org.apache.hadoop.security.UserGroupInformation.doAs(java.security.PrivilegedExceptionAction) > @bci=14, line=1595 (Compiled frame) > - > org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(org.apache.thrift.protocol.TProtocol, > org.apache.thrift.protocol.TProtocol) @bci=273, line=685 (Compiled frame) > - org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() @bci=151, > line=285 (Interpreted frame) > - > java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) > @bci=95, line=1142 (Interpreted frame) > - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=617 > (Interpreted frame) > - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16432195#comment-16432195 ] Hive QA commented on HIVE-16674: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12869451/HIVE-16674.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10117/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10117/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10117/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-04-10 12:36:57.794 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10117/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-04-10 12:36:57.797 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at be42009 HIVE-18839: Implement incremental rebuild for materialized views (only insert operations in source tables) (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at be42009 HIVE-18839: Implement incremental rebuild for materialized views (only insert operations in source tables) (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-04-10 12:36:58.310 + rm -rf ../yetus_PreCommit-HIVE-Build-10117 + mkdir ../yetus_PreCommit-HIVE-Build-10117 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10117 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10117/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: does not exist in index error: a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index error: metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: does not exist in index error: metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index error: src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: does not exist in index error: src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index The patch does not appear to apply with p0, p1, or p2 
+ exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12869451 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16431274#comment-16431274 ] Vineet Garg commented on HIVE-16674: Deferring this to 3.1.0 since the branch for 3.0.0 has been cut off. Please update the JIRA if you would like to get your patch in 3.0.0. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190791#comment-16190791 ] Hive QA commented on HIVE-16674: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12869451/HIVE-16674.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7106/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7106/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7106/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-10-04 04:30:22.219 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-7106/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-10-04 04:30:22.222 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 073e847..d376157 master -> origin/master 6401597..14c9482 hive-14535 -> origin/hive-14535 + git reset --hard HEAD HEAD is now at 073e847 HIVE-17432: Enable join and aggregate materialized view rewriting (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan) + git clean -f -d Removing standalone-metastore/src/gen/org/ + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at d376157 HIVE-17682: Vectorization: IF stmt produces wrong results (Matt McCline, reviewed by Gopal Vijayaraghavan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-10-04 04:30:23.794 + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: No such file or directory error: a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: No such file or directory The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12869451 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021278#comment-16021278 ] Vlad Gudikov commented on HIVE-16674: Are these failures related to the fix? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021227#comment-16021227 ] Hive QA commented on HIVE-16674: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12869451/HIVE-16674.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 245 failed/errored test(s), 6687 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite] (batchId=236) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.org.apache.hadoop.hive.cli.TestBlobstoreCliDriver (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_like] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_local] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_warehouse] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_local_to_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore_nonpart] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_local] (batchId=239) 
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse_nonpart] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_local_to_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_directory] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_table] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join2] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join_on_filter] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[nested_outer_join] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_buckets] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_nonpart] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_part] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_nonstd_partitions_loc] (batchId=239) 
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_general_queries] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_matchpath] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_orcfile] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_persistence] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_rcfile] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_seqfile] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_buckets] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_format_nonpart] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_format_part] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_nonstd_partitions_loc] (batchId=239) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[write_final_output_blobstore]
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16020972#comment-16020972 ] Hive QA commented on HIVE-16674:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12869413/HIVE-16674.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 244 failed/errored test(s), 6687 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.org.apache.hadoop.hive.cli.TestBlobstoreCliDriver (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_like] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_local] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_warehouse] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_local_to_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore_nonpart] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_local] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse_nonpart] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_local_to_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_directory] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_table] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join2] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join_on_filter] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[nested_outer_join] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_buckets] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_nonpart] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_part] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_nonstd_partitions_loc] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_general_queries] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_matchpath] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_orcfile] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_persistence] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_rcfile] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_seqfile] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_buckets] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_format_nonpart] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_format_part] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_nonstd_partitions_loc] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[write_final_output_blobstore] (batchId=239)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[zero_rows_hdfs] (batchId=239)
{noformat}
[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core
[ https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16015781#comment-16015781 ] Vlad Gudikov commented on HIVE-16674:

Most of the RPC calls in the MetaStore carry fairly small payloads, but in this case a call to the get_partitions method returns more than 256 MB of data. This happens because we fetch all partition information, including column-level comments, and those comments are duplicated for every partition. Here is the code where we fetch the column comments - do we actually need them when listing partitions?

{code}
// Get FieldSchema stuff if any.
if (!colss.isEmpty()) {
  // We are skipping the CDS table here, as it seems to be totally useless.
  queryText = "select \"CD_ID\", {color:red}\"COMMENT\"{color}, \"COLUMN_NAME\", \"TYPE_NAME\""
      + " from \"COLUMNS_V2\" where \"CD_ID\" in (" + colIds + ") and \"INTEGER_IDX\" >= 0"
      + " order by \"CD_ID\" asc, \"INTEGER_IDX\" asc";
  loopJoinOrderedResult(colss, queryText, 0, new ApplyFunc<List<FieldSchema>>() {
    @Override
    public void apply(List<FieldSchema> t, Object[] fields) {
      t.add(new FieldSchema((String) fields[2], (String) fields[3], (String) fields[1]));
    }});
}
{code}

> Hive metastore JVM dumps core
> -
>
> Key: HIVE-16674
> URL: https://issues.apache.org/jira/browse/HIVE-16674
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.2.1
> Environment: Hive-1.2.1
> Kerberos enabled cluster
> Reporter: Vlad Gudikov
> Priority: Blocker
> Fix For: 1.2.1, 2.3.0
>
> While running a Hive query over 24 partitions of an external table with a large number of partitions (4K+),
> I get an error:
> {code}
> - org.apache.thrift.transport.TSaslTransport$SaslParticipant.wrap(byte[], int, int) @bci=27, line=568 (Compiled frame)
> - org.apache.thrift.transport.TSaslTransport.flush() @bci=52, line=492 (Compiled frame)
> - org.apache.thrift.transport.TSaslServerTransport.flush() @bci=1, line=41 (Compiled frame)
> - org.apache.thrift.ProcessFunction.process(int, org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol, java.lang.Object) @bci=236, line=55 (Compiled frame)
> - org.apache.thrift.TBaseProcessor.process(org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol) @bci=126, line=39 (Compiled frame)
> - org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run() @bci=15, line=690 (Compiled frame)
> - org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run() @bci=1, line=685 (Compiled frame)
> - java.security.AccessController.doPrivileged(java.security.PrivilegedExceptionAction, java.security.AccessControlContext) @bci=0 (Compiled frame)
> - javax.security.auth.Subject.doAs(javax.security.auth.Subject, java.security.PrivilegedExceptionAction) @bci=42, line=422 (Compiled frame)
> - org.apache.hadoop.security.UserGroupInformation.doAs(java.security.PrivilegedExceptionAction) @bci=14, line=1595 (Compiled frame)
> - org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(org.apache.thrift.protocol.TProtocol, org.apache.thrift.protocol.TProtocol) @bci=273, line=685 (Compiled frame)
> - org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() @bci=151, line=285 (Interpreted frame)
> - java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=95, line=1142 (Interpreted frame)
> - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=617 (Interpreted frame)
> - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
> {code}
-- This message was sent by Atlassian JIRA (v6.3.15#6346)
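The duplication described in the comment above can be sketched in isolation. Since all partitions of a table normally point at the same column descriptor (CD_ID), their FieldSchema lists are identical, so interning them by CD_ID materializes one list instead of one copy per partition. This is an illustrative sketch only, not Hive's actual fix: the FieldSchema class, the column names, and the internSchemas helper below are hypothetical stand-ins, not the real org.apache.hadoop.hive.metastore API.

{code}
import java.util.*;

public class SchemaIntern {

    // Stand-in for org.apache.hadoop.hive.metastore.api.FieldSchema.
    static final class FieldSchema {
        final String name, type, comment;
        FieldSchema(String name, String type, String comment) {
            this.name = name; this.type = type; this.comment = comment;
        }
    }

    // Build one FieldSchema list per distinct CD_ID; every partition with the
    // same CD_ID receives a reference to the same list object instead of its
    // own copy (including its own copy of every column comment).
    static Map<Long, List<FieldSchema>> internSchemas(long[] partitionCdIds) {
        Map<Long, List<FieldSchema>> byCdId = new HashMap<>();
        for (long cdId : partitionCdIds) {
            byCdId.computeIfAbsent(cdId, id -> Arrays.asList(
                new FieldSchema("ds", "string", "partition date"),        // hypothetical columns
                new FieldSchema("user_id", "bigint", "user identifier")));
        }
        return byCdId;
    }

    public static void main(String[] args) {
        long[] cdIds = new long[4096];    // 4K+ partitions, all sharing CD_ID 42
        Arrays.fill(cdIds, 42L);
        Map<Long, List<FieldSchema>> shared = internSchemas(cdIds);
        // 4096 partitions, but only one materialized column list.
        System.out.println(shared.size());   // prints 1
    }
}
{code}

Note that interning only shrinks the in-memory result: Thrift would still serialize one schema copy per partition on the wire, which is why the comment above asks whether the column comments are needed at all when listing partitions.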