[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id
[ https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469928#comment-16469928 ] Sankar Hariappan commented on HIVE-18193: - Thanks for the review [~thejas], [~ekoifman] and [~maheshk114]! Patch committed to master. Will submit the branch-3 patch shortly. > Migrate existing ACID tables to use write id per table rather than global > transaction id > > > Key: HIVE-18193 > URL: https://issues.apache.org/jira/browse/HIVE-18193 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: Sankar Hariappan >Priority: Blocker > Labels: ACID, Upgrade > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch > > > dependent upon HIVE-18192 > For existing ACID Tables we need to update the table level write id > metatables/sequences so any new operations on these tables works seamlessly > without any conflicting data in existing base/delta files. > 1. Need to create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID. > 2. Add entries for each ACID/MM tables into NEXT_WRITE_ID where NWI_NEXT is > set to current value of NEXT_TXN_ID.NTXN_NEXT. > 3. All current open/abort transactions to have an entry in TXN_TO_WRITE_ID > such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId. > 4. Added new column TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in > COMPLETED_TXN_COMPONENTS to store the write id which should be set as > respective values of TC_TXNID and CTC_TXNID from the same row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
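The four migration steps described in HIVE-18193 can be sketched against an embedded database. This is a hedged illustration of the metadata transformation only, not the actual Hive metastore upgrade script: the table and column names (NEXT_WRITE_ID, TXN_TO_WRITE_ID, NWI_NEXT, NEXT_TXN_ID.NTXN_NEXT, T2W_TXNID, T2W_WRITEID) come from the description above, while the sqlite backend, the simplified TXNS table, and the sample transaction ids are assumptions for demonstration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Pre-existing global txn state (simplified): the next txn id sequence
# and two currently open/aborted transactions.
cur.execute("CREATE TABLE NEXT_TXN_ID (NTXN_NEXT INTEGER)")
cur.execute("INSERT INTO NEXT_TXN_ID VALUES (101)")
cur.execute("CREATE TABLE TXNS (TXN_ID INTEGER, TXN_STATE TEXT)")
cur.executemany("INSERT INTO TXNS VALUES (?, ?)", [(97, "o"), (99, "a")])

# Step 1: create the new per-table write-id metadata tables.
cur.execute("CREATE TABLE NEXT_WRITE_ID ("
            "NWI_DATABASE TEXT, NWI_TABLE TEXT, NWI_NEXT INTEGER)")
cur.execute("CREATE TABLE TXN_TO_WRITE_ID ("
            "T2W_TXNID INTEGER, T2W_DATABASE TEXT, "
            "T2W_TABLE TEXT, T2W_WRITEID INTEGER)")

acid_tables = [("default", "t_acid"), ("default", "t_mm")]  # hypothetical tables

# Step 2: seed NWI_NEXT for every ACID/MM table from the current NTXN_NEXT,
# so new write ids start above every txn id already baked into delta dirs.
(ntxn_next,) = cur.execute("SELECT NTXN_NEXT FROM NEXT_TXN_ID").fetchone()
for db, tbl in acid_tables:
    cur.execute("INSERT INTO NEXT_WRITE_ID VALUES (?, ?, ?)",
                (db, tbl, ntxn_next))

# Step 3: map every open/aborted txn to a write id equal to its txn id
# (T2W_TXNID = T2W_WRITEID), keeping existing base/delta names valid.
for (txnid, _state) in cur.execute("SELECT TXN_ID, TXN_STATE FROM TXNS").fetchall():
    for db, tbl in acid_tables:
        cur.execute("INSERT INTO TXN_TO_WRITE_ID VALUES (?, ?, ?, ?)",
                    (txnid, db, tbl, txnid))

# Step 4 (not shown): add TC_WRITEID / CTC_WRITEID columns to
# TXN_COMPONENTS / COMPLETED_TXN_COMPONENTS and backfill them from
# TC_TXNID / CTC_TXNID on the same row.

assert cur.execute("SELECT NWI_NEXT FROM NEXT_WRITE_ID "
                   "WHERE NWI_TABLE='t_acid'").fetchone()[0] == 101
assert cur.execute("SELECT T2W_WRITEID FROM TXN_TO_WRITE_ID "
                   "WHERE T2W_TXNID=97").fetchone()[0] == 97
```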
[jira] [Commented] (HIVE-19484) 'IN' & '=' do not behave the same way for Date/Timestamp comparison.
[ https://issues.apache.org/jira/browse/HIVE-19484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469913#comment-16469913 ] Venu Yanamandra commented on HIVE-19484: It is - hive-1.1.0 > 'IN' & '=' do not behave the same way for Date/Timestamp comparison. > > > Key: HIVE-19484 > URL: https://issues.apache.org/jira/browse/HIVE-19484 > Project: Hive > Issue Type: Bug >Reporter: Venu Yanamandra >Priority: Major > > We find that there is a difference in the way '=' operator and 'IN' behave > when operating on timestamps. > The issue could be demonstrated using below - >i) create table test_table (test_date timestamp); > ii) insert into test_table values('2018-01-01'); > iii) select * from test_table where test_date='2018-01-01'; -- Works > iv) select * from test_table where test_date in ('2018-01-01'); -- Fails > with error [1] > v) However, casting works - >select * from test_table where test_date in (cast ('2018-01-01' as > timestamp)); > As per url [2], we find no references to limitations when '=' or 'IN' are > used. > As per the url [3], we find that there are implicit type conversions defined. > However, '=' operates in a different way than the 'IN' operator. > We would like to see if 'IN' could be made to behave the same way as '='. > [1]: > Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: The > arguments for IN should be the same type! Types are: {timestamp IN (string)} > [2]: > > https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators > [3]: > > https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-AllowedImplicitConversions -- This message was sent by Atlassian JIRA (v7.6.3#76005)
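The asymmetry reported above can be modelled in miniature: a toy evaluator whose `=` implicitly converts a string operand to a timestamp while its `IN` insists all operands already share the left-hand type. This is an illustrative sketch of the reported behaviour, not Hive's actual UDF code; the function names are invented for the example.

```python
from datetime import datetime

def to_ts(v):
    # Implicit string -> timestamp conversion, as '=' is reported to perform.
    return datetime.fromisoformat(v) if isinstance(v, str) else v

def eq(ts, other):
    return ts == to_ts(other)

def in_strict(ts, candidates):
    # Mimics the reported 'IN' behaviour: no implicit conversion, all
    # candidates must already be the same type as the column value.
    for c in candidates:
        if type(c) is not type(ts):
            raise TypeError(
                "The arguments for IN should be the same type! "
                "Types are: {timestamp IN (%s)}" % type(c).__name__)
    return ts in candidates

row = datetime(2018, 1, 1)
assert eq(row, "2018-01-01")                  # '=' coerces the string and matches
try:
    in_strict(row, ["2018-01-01"])            # 'IN' rejects the uncast string
    raised = False
except TypeError:
    raised = True
assert raised
assert in_strict(row, [to_ts("2018-01-01")])  # explicit cast works, as in (v)
```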
[jira] [Commented] (HIVE-19484) 'IN' & '=' do not behave the same way for Date/Timestamp comparison.
[ https://issues.apache.org/jira/browse/HIVE-19484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469911#comment-16469911 ] Vineet Garg commented on HIVE-19484: [~venu.yanaman...@live.com] Which version of hive are you seeing this issue on? I tried it on master and I am getting expected result in both cases. > 'IN' & '=' do not behave the same way for Date/Timestamp comparison. > > > Key: HIVE-19484 > URL: https://issues.apache.org/jira/browse/HIVE-19484 > Project: Hive > Issue Type: Bug >Reporter: Venu Yanamandra >Priority: Major > > We find that there is a difference in the way '=' operator and 'IN' behave > when operating on timestamps. > The issue could be demonstrated using below - >i) create table test_table (test_date timestamp); > ii) insert into test_table values('2018-01-01'); > iii) select * from test_table where test_date='2018-01-01'; -- Works > iv) select * from test_table where test_date in ('2018-01-01'); -- Fails > with error [1] > v) However, casting works - >select * from test_table where test_date in (cast ('2018-01-01' as > timestamp)); > As per url [2], we find no references to limitations when '=' or 'IN' are > used. > As per the url [3], we find that there are implicit type conversions defined. > However, '=' operates in a different way than the 'IN' operator. > We would like to see if 'IN' could be made to behave the same way as '='. > [1]: > Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: The > arguments for IN should be the same type! Types are: {timestamp IN (string)} > [2]: > > https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators > [3]: > > https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-AllowedImplicitConversions -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19465) Upgrade ORC to 1.5.0
[ https://issues.apache.org/jira/browse/HIVE-19465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469902#comment-16469902 ] Gopal V commented on HIVE-19465: The build needs the ORC shims change from HIVE-17463 to work; otherwise, queries fail with this change-set. > Upgrade ORC to 1.5.0 > > > Key: HIVE-19465 > URL: https://issues.apache.org/jira/browse/HIVE-19465 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19465.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)

[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Attachment: HIVE-19384.04.patch > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch, > HIVE-19384.04.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
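The {noformat} snippet quoted in this issue (outputColVector.noNulls = false) is about column-vector NULL bookkeeping: once any branch of a vectorized IF can contribute a NULL, the output vector must stop advertising noNulls, or readers will treat garbage values in NULL slots as real data. A minimal Python analog, with simplified class and field names standing in for Hive's ColumnVector (this is not the actual IfExprTimestamp* code):

```python
class ColumnVector:
    """Simplified analog of Hive's ColumnVector NULL bookkeeping."""
    def __init__(self, values, is_null):
        self.vector = values
        self.is_null = is_null
        # no_nulls is an optimization flag: readers skip is_null checks when True.
        self.no_nulls = not any(is_null)

def if_expr(cond, then_col, else_col):
    n = len(cond)
    out = ColumnVector([None] * n, [False] * n)
    # Carefully handle NULLs: the output may inherit a NULL from either
    # branch, so it must not claim no_nulls.
    out.no_nulls = False
    for i in range(n):
        src = then_col if cond[i] else else_col
        out.is_null[i] = src.is_null[i]
        out.vector[i] = None if src.is_null[i] else src.vector[i]
    return out

t = ColumnVector([1, 2, 3], [False, True, False])
e = ColumnVector([9, 9, 9], [False, False, False])
r = if_expr([True, True, False], t, e)
assert r.is_null == [False, True, False]  # NULL propagated from the then-branch
assert r.no_nulls is False
```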
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Attachment: (was: HIVE-19384.04.patch) > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch, > HIVE-19384.04.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19178) TestMiniTezCliDriver.testCliDriver[explainanalyze_5] failure
[ https://issues.apache.org/jira/browse/HIVE-19178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469887#comment-16469887 ] Hive QA commented on HIVE-19178: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 50s{color} | {color:blue} ql in master has 2321 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s{color} | {color:red} ql: The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 25s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10795/dev-support/hive-personality.sh | | git revision | master / 1cd5274 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10795/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10795/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10795/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > TestMiniTezCliDriver.testCliDriver[explainanalyze_5] failure > > > Key: HIVE-19178 > URL: https://issues.apache.org/jira/browse/HIVE-19178 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19178.patch > > > I have verified that this failure is due to HIVE-18825. 
> Error stack: > {code} > java.lang.IllegalStateException: calling recordValidTxn() more than once in > the same txnid:5 > at org.apache.hadoop.hive.ql.Driver.acquireLocks(Driver.java:1439) > at org.apache.hadoop.hive.ql.Driver.lockAndRespond(Driver.java:1624) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1794) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1538) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1527) > at > org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:137) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:287) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:635) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1655) > at
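The IllegalStateException in the stack above is the signature of a one-shot guard: the driver records its valid-transaction snapshot once per transaction and refuses a second call under the same txnid. A minimal Python analog of such a guard (class and method names are loose translations, not Hive's Driver code):

```python
class TxnCtx:
    """One-shot guard analogous to the recordValidTxn() check above."""
    def __init__(self, txnid):
        self.txnid = txnid
        self._valid_txn_recorded = False

    def record_valid_txn(self):
        if self._valid_txn_recorded:
            raise RuntimeError(
                "calling recordValidTxn() more than once in the same "
                "txnid:%d" % self.txnid)
        self._valid_txn_recorded = True

ctx = TxnCtx(5)
ctx.record_valid_txn()           # first call succeeds
try:
    ctx.record_valid_txn()       # re-entrant compile path trips the guard
    tripped = False
except RuntimeError:
    tripped = True
assert tripped
```

The EXPLAIN ANALYZE path quoted above re-enters Driver.run from ExplainSemanticAnalyzer, which is how the guard ends up being hit twice within one transaction.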
[jira] [Commented] (HIVE-19381) Function replication in cloud fail when download resource from AWS
[ https://issues.apache.org/jira/browse/HIVE-19381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469879#comment-16469879 ] Hive QA commented on HIVE-19381: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922542/HIVE-19381.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10794/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10794/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10794/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-05-10 03:35:16.214 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10794/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-05-10 03:35:16.216 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 1cd5274 HIVE-19467: Make storage format configurable for temp tables created using LLAP external client (Jason Dere, reviewed by Deepak Jaiswal) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 1cd5274 HIVE-19467: Make storage format configurable for temp tables created using LLAP external client (Jason Dere, reviewed by Deepak Jaiswal) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-05-10 03:35:16.757 + rm -rf ../yetus_PreCommit-HIVE-Build-10794 + mkdir ../yetus_PreCommit-HIVE-Build-10794 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10794 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10794/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionTask.java: does not exist in index Going to apply patch with: git apply -p1 + [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: executing: [/tmp/protoc2289875772426516247.exe, --version] libprotoc 2.5.0 protoc-jar: executing: [/tmp/protoc2289875772426516247.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g log4j:WARN No appenders could be found for logger (DataNucleus.Persistence). log4j:WARN Please initialize the log4j system properly. DataNucleus Enhancer (version 4.1.17) for API "JDO" DataNucleus Enhancer completed with success for 40 classes. ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/ql/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveLexer.java does not exist: must build /data/hiveptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g org/apache/hadoop/hive/ql/parse/HiveLexer.g Output file
[jira] [Commented] (HIVE-19465) Upgrade ORC to 1.5.0
[ https://issues.apache.org/jira/browse/HIVE-19465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469878#comment-16469878 ] Hive QA commented on HIVE-19465: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922541/HIVE-19465.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10793/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10793/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10793/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-05-10 03:32:09.084 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10793/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-05-10 03:32:09.087 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 1cd5274 HIVE-19467: Make storage format configurable for temp tables created using LLAP external client (Jason Dere, reviewed by Deepak Jaiswal) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 1cd5274 HIVE-19467: Make storage format configurable for temp tables created using LLAP external client (Jason Dere, reviewed by Deepak Jaiswal) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-05-10 03:32:13.685 + rm -rf ../yetus_PreCommit-HIVE-Build-10793 + mkdir ../yetus_PreCommit-HIVE-Build-10793 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10793 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10793/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/OrcEncodedDataReader.java: does not exist in index error: a/llap-server/src/java/org/apache/hadoop/hive/llap/io/encoded/SerDeEncodedDataReader.java: does not exist in index error: a/llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/ConsumerFileMetadata.java: does not exist in index error: a/llap-server/src/java/org/apache/hadoop/hive/llap/io/metadata/OrcFileMetadata.java: does not exist in index error: a/pom.xml: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/encoded/EncodedTreeReaderFactory.java: 
does not exist in index Going to apply patch with: git apply -p1 + [[ maven == \m\a\v\e\n ]] + rm -rf /data/hiveptest/working/maven/org/apache/hive + mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven protoc-jar: executing: [/tmp/protoc214795556400053030.exe, --version] libprotoc 2.5.0 protoc-jar: executing: [/tmp/protoc214795556400053030.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto] ANTLR Parser Generator Version 3.5.2 Output file /data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java does not exist: must build /data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g org/apache/hadoop/hive/metastore/parser/Filter.g [ERROR] Failed
[jira] [Commented] (HIVE-19468) Add Apache license to TestTxnConcatenate
[ https://issues.apache.org/jira/browse/HIVE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469875#comment-16469875 ] Hive QA commented on HIVE-19468: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922540/HIVE-19468.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 42 failed/errored test(s), 13544 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95)
[jira] [Updated] (HIVE-19455) Create JDBC External Table NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-19455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] gonglinglei updated HIVE-19455: --- Attachment: HIVE-19455.2.patch Status: Patch Available (was: Open) > Create JDBC External Table NullPointerException > --- > > Key: HIVE-19455 > URL: https://issues.apache.org/jira/browse/HIVE-19455 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 3.0.0, 2.3.3 >Reporter: gonglinglei >Priority: Major > Attachments: HIVE-19455.1.patch, HIVE-19455.2.patch > > > {{JdbcSerDe.initialize}} uses > {{tbl.containsKey(JdbcStorageConfig.DATABASE_TYPE.getPropertyName())}} to > decide whether the properties are empty and whether to initialize the serde. But when > creating an external table > without {{hive.sql.database.type}} provided, it throws a > NullPointerException. > {quote} > 2018-05-08T11:21:03,745 ERROR [88c8bc6c-cd5b-4b74-b6d6-242e3cc12165 main] > metadata.Table: Unable to get field from serde: > org.apache.hive.storage.jdbc.JdbcSerDe > java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getFieldsFromDeserializer(MetaStoreUtils.java:1426) > ~[hive-exec-2.3.3.jar:2.3.3] > at > org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:641) > ~[hive-exec-2.3.3.jar:2.3.3] > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
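The failure mode described in this issue — silently skipping initialization when a key is absent, then dereferencing state that was never set — can be sketched alongside the obvious fix of failing fast. This is a toy Python analog, not Hive's JdbcSerDe; the property key is the one from the report, everything else is invented for illustration:

```python
class JdbcSerDeSketch:
    """Toy analog of the reported JdbcSerDe behaviour (not Hive's code)."""
    DATABASE_TYPE_KEY = "hive.sql.database.type"

    def __init__(self):
        self.column_names = None  # only populated by initialize()

    def initialize(self, tbl_props):
        # Buggy pattern: silently skip setup when the key is missing...
        if self.DATABASE_TYPE_KEY in tbl_props:
            self.column_names = ["id", "name"]

    def get_fields(self):
        # ...so this dereference blows up later (Python's analog of the NPE
        # in MetaStoreUtils.getFieldsFromDeserializer).
        return list(self.column_names)

    def initialize_fixed(self, tbl_props):
        # Fail fast with an actionable message instead.
        if self.DATABASE_TYPE_KEY not in tbl_props:
            raise ValueError("missing required table property: "
                             + self.DATABASE_TYPE_KEY)
        self.column_names = ["id", "name"]

serde = JdbcSerDeSketch()
serde.initialize({})             # no hive.sql.database.type provided
try:
    serde.get_fields()
    crashed = False
except TypeError:                # list(None) -> TypeError, the NPE analog
    crashed = True
assert crashed
```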
[jira] [Updated] (HIVE-19455) Create JDBC External Table NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-19455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] gonglinglei updated HIVE-19455: --- Status: Open (was: Patch Available) > Create JDBC External Table NullPointerException > --- > > Key: HIVE-19455 > URL: https://issues.apache.org/jira/browse/HIVE-19455 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 3.0.0, 2.3.3 >Reporter: gonglinglei >Priority: Major > Attachments: HIVE-19455.1.patch > > > {{JdbcSerDe.initialize}} use > {{tbl.containsKey(JdbcStorageConfig.DATABASE_TYPE.getPropertyName())}} to > decide whether properties is empty and whether to initialize serde. But when > creating a external table > without {{hive.sql.database.type}} provided, it will throw a > NullPointerException. > {quote} > 2018-05-08T11:21:03,745 ERROR [88c8bc6c-cd5b-4b74-b6d6-242e3cc12165 main] > metadata.Table: Unable to get field from serde: > org.apache.hive.storage.jdbc.JdbcSerDe > java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getFieldsFromDeserializer(MetaStoreUtils.java:1426) > ~[hive-exec-2.3.3.jar:2.3.3] > at > org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:641) > ~[hive-exec-2.3.3.jar:2.3.3] > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Attachment: HIVE-19384.04.patch > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch, > HIVE-19384.04.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Attachment: (was: HIVE-19384.04.patch) > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19468) Add Apache license to TestTxnConcatenate
[ https://issues.apache.org/jira/browse/HIVE-19468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469836#comment-16469836 ] Hive QA commented on HIVE-19468: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 51s{color} | {color:blue} ql in master has 2321 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} ql: The patch generated 0 new + 2 unchanged - 1 fixed = 2 total (was 3) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10792/dev-support/hive-personality.sh | | git revision | master / 1cd5274 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10792/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add Apache license to TestTxnConcatenate > > > Key: HIVE-19468 > URL: https://issues.apache.org/jira/browse/HIVE-19468 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Igor Kryvenko >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-19468.01.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19108) Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q causes Wrong Query Results
[ https://issues.apache.org/jira/browse/HIVE-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469831#comment-16469831 ] Matt McCline commented on HIVE-19108: - [~jerrychenhf] thank you -- I think it is good to go! Did you want me to commit it on your behalf? > Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q > causes Wrong Query Results > --- > > Key: HIVE-19108 > URL: https://issues.apache.org/jira/browse/HIVE-19108 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Haifeng Chen >Priority: Critical > Attachments: HIVE-19108.01.patch, HIVE-19108.02.patch, > HIVE-19108.03.patch, HIVE-19108.04.patch > > > Found in vectorization enable by default experiment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19016) Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces RuntimeException: Unsupported type used
[ https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469822#comment-16469822 ] Haifeng Chen commented on HIVE-19016: - [~vihangk1] I will assign it to myself if you have not yet started the work. > Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces > RuntimeException: Unsupported type used > - > > Key: HIVE-19016 > URL: https://issues.apache.org/jira/browse/HIVE-19016 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Vihang Karajgaonkar >Priority: Critical > > Adding "SET hive.vectorized.execution.enabled=true;" to > parquet_nested_complex.q triggers this call stack: > {noformat} > Caused by: java.lang.RuntimeException: Unsupported type used in > list:array> at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > {noformat} > FYI: [~vihangk1] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18576) Support to read nested complex type with Parquet in vectorization mode
[ https://issues.apache.org/jira/browse/HIVE-18576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haifeng Chen reassigned HIVE-18576: --- Assignee: Haifeng Chen (was: Colin Ma) > Support to read nested complex type with Parquet in vectorization mode > -- > > Key: HIVE-18576 > URL: https://issues.apache.org/jira/browse/HIVE-18576 > Project: Hive > Issue Type: Sub-task >Reporter: Colin Ma >Assignee: Haifeng Chen >Priority: Major > > Nested complex types are commonly used, eg: Struct, s2 > List>. Currently, nested complex types can't be parsed in vectorization > mode; this ticket targets supporting them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19015) Vectorization and Parquet: When vectorized, parquet_map_of_arrays_of_ints.q gets a ClassCastException
[ https://issues.apache.org/jira/browse/HIVE-19015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469821#comment-16469821 ] Haifeng Chen commented on HIVE-19015: - [~vihangk1] This is the same nested complex type problem: it is not yet implemented in the Parquet vectorized reader. I will get this done together with HIVE-19016. Nested complex type handling will be much more complex than for primitives, but with fewer cases. My current thought is that for root columns which are primitive, or are List, Struct or Map of primitives, we will go with the current implementation as a fast path. When we find a root column with nested complex types, we will go with a tree reader which can handle the definition levels and repetition levels properly. > Vectorization and Parquet: When vectorized, parquet_map_of_arrays_of_ints.q > gets a ClassCastException > - > > Key: HIVE-19015 > URL: https://issues.apache.org/jira/browse/HIVE-19015 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Vihang Karajgaonkar >Priority: Critical > > Adding "SET hive.vectorized.execution.enabled=true;" to > parquet_map_of_arrays_of_ints.q triggers this call stack: > {noformat} > Caused by: java.lang.ClassCastException: > org.apache.hadoop.hive.serde2.typeinfo.ListTypeInfo cannot be cast to > org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedListColumnReader.readBatch(VectorizedListColumnReader.java:67) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedMapColumnReader.readBatch(VectorizedMapColumnReader.java:57) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:410) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > {noformat} > FYI: [~vihangk1] > Adding parquet_map_of_maps.q, too. Stack trace seems related. > {noformat} > Caused by: java.lang.ClassCastException: optional group value (MAP) { > repeated group key_value { > optional binary key (UTF8); > required int32 value; > } > } is not primitive > at org.apache.parquet.schema.Type.asPrimitiveType(Type.java:213) > ~[parquet-hadoop-bundle-1.9.0.jar:1.9.0] > at > org.apache.hadoop.hive.ql.io.parquet.vector.BaseVectorizedColumnReader.(BaseVectorizedColumnReader.java:130) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedListColumnReader.(VectorizedListColumnReader.java:52) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:568) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
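The fast-path/tree-reader split proposed in the comment above can be sketched as follows. This is a minimal, library-free illustration under assumed names (`Category`, `TypeNode`, `useFastPath` are all hypothetical), not the actual Hive or Parquet reader classes:

```java
import java.util.List;

// Sketch of the proposed reader selection: flat root columns
// (primitives, or one level of List/Struct/Map over primitives)
// keep the existing fast-path columnar reader; deeper nesting
// would go to a tree reader that tracks Parquet definition and
// repetition levels per leaf column.
public class ReaderSelector {

    enum Category { PRIMITIVE, LIST, STRUCT, MAP }

    static class TypeNode {
        final Category category;
        final List<TypeNode> children; // empty for primitives

        TypeNode(Category category, List<TypeNode> children) {
            this.category = category;
            this.children = children;
        }

        boolean isPrimitive() { return category == Category.PRIMITIVE; }
    }

    /** True when the fast-path columnar reader can handle the root column. */
    static boolean useFastPath(TypeNode root) {
        if (root.isPrimitive()) {
            return true;
        }
        // One level of List/Struct/Map is fine as long as every
        // child is a primitive; any nested complex child forces
        // the level-aware tree reader.
        for (TypeNode child : root.children) {
            if (!child.isPrimitive()) {
                return false;
            }
        }
        return true;
    }
}
```

The actual implementation would make this decision per root column when building the row-group readers, so flat schemas pay no overhead for the nested-type support.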
[jira] [Updated] (HIVE-19483) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19483: --- Attachment: HIVE-19483.patch > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19483 > URL: https://issues.apache.org/jira/browse/HIVE-19483 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19483.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19483) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19483: --- Status: Patch Available (was: In Progress) > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19483 > URL: https://issues.apache.org/jira/browse/HIVE-19483 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19483.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-19483) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-19483 started by Jesus Camacho Rodriguez. -- > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19483 > URL: https://issues.apache.org/jira/browse/HIVE-19483 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19483) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-19483: -- > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19483 > URL: https://issues.apache.org/jira/browse/HIVE-19483 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19462) Fix mapping for char_length function to enable pushdown to Druid.
[ https://issues.apache.org/jira/browse/HIVE-19462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469817#comment-16469817 ] Hive QA commented on HIVE-19462: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922525/HIVE-19462.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 44 failed/errored test(s), 13543 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95)
[jira] [Commented] (HIVE-19466) Update constraint violation error message
[ https://issues.apache.org/jira/browse/HIVE-19466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469811#comment-16469811 ] Jesus Camacho Rodriguez commented on HIVE-19466: +1 > Update constraint violation error message > - > > Key: HIVE-19466 > URL: https://issues.apache.org/jira/browse/HIVE-19466 > Project: Hive > Issue Type: Improvement >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19466.1.patch > > > Currently for both CHECK and NOT NULL constraint violation hive throws {{NOT > NULL Constraint violated}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19466) Update constraint violation error message
[ https://issues.apache.org/jira/browse/HIVE-19466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469809#comment-16469809 ] Vineet Garg commented on HIVE-19466: Test failures are unrelated. [~jcamachorodriguez] Can you take a look? This is a small fix to update the error message plus a rename of a UDF. > Update constraint violation error message > - > > Key: HIVE-19466 > URL: https://issues.apache.org/jira/browse/HIVE-19466 > Project: Hive > Issue Type: Improvement >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19466.1.patch > > > Currently for both CHECK and NOT NULL constraint violation hive throws {{NOT > NULL Constraint violated}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19016) Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces RuntimeException: Unsupported type used
[ https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469808#comment-16469808 ] Haifeng Chen commented on HIVE-19016: - [~vihangk1] I have done some research on this. The nested complex types (nested struct, map and list) are not yet implemented in the Parquet vectorized reader. I have studied the details and am trying to figure out an implementation. I will try to work out a patch. > Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces > RuntimeException: Unsupported type used > - > > Key: HIVE-19016 > URL: https://issues.apache.org/jira/browse/HIVE-19016 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Vihang Karajgaonkar >Priority: Critical > > Adding "SET hive.vectorized.execution.enabled=true;" to > parquet_nested_complex.q triggers this call stack: > {noformat} > Caused by: java.lang.RuntimeException: Unsupported type used in > list:array> at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > {noformat} > FYI: [~vihangk1] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19118) Vectorization: Turning on vectorization in escape_crlf produces wrong results
[ https://issues.apache.org/jira/browse/HIVE-19118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469804#comment-16469804 ] Haifeng Chen commented on HIVE-19118: - [~mmccline] The test results are out. Would you please review the patch and commit it if there is no further feedback? Thanks! > Vectorization: Turning on vectorization in escape_crlf produces wrong results > - > > Key: HIVE-19118 > URL: https://issues.apache.org/jira/browse/HIVE-19118 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Haifeng Chen >Priority: Critical > Attachments: HIVE-19118.01.patch, HIVE-19118.02.patch, > HIVE-19118.03.patch, HIVE-19118.04.patch, HIVE-19118.05.patch > > > Found in vectorization enable by default experiment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19110) Vectorization: Enabling vectorization causes TestContribCliDriver udf_example_arraymapstruct.q to produce Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-19110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469803#comment-16469803 ] Haifeng Chen commented on HIVE-19110: - [~mmccline] The test results are out. Would you please review the patch and commit it if there is no further feedback? Thanks! > Vectorization: Enabling vectorization causes TestContribCliDriver > udf_example_arraymapstruct.q to produce Wrong Results > --- > > Key: HIVE-19110 > URL: https://issues.apache.org/jira/browse/HIVE-19110 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Haifeng Chen >Priority: Critical > Attachments: HIVE-19110.01.patch, HIVE-19110.02.patch, > HIVE-19110.03.patch, HIVE-19110.04.patch > > > Found in vectorization enable by default experiment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19108) Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q causes Wrong Query Results
[ https://issues.apache.org/jira/browse/HIVE-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469802#comment-16469802 ] Haifeng Chen commented on HIVE-19108: - [~mmccline] The test results are out. Would you please review the patch and commit it if there is no further feedback? Thanks! > Vectorization and Parquet: Turning on vectorization in parquet_ppd_decimal.q > causes Wrong Query Results > --- > > Key: HIVE-19108 > URL: https://issues.apache.org/jira/browse/HIVE-19108 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Haifeng Chen >Priority: Critical > Attachments: HIVE-19108.01.patch, HIVE-19108.02.patch, > HIVE-19108.03.patch, HIVE-19108.04.patch > > > Found in vectorization enable by default experiment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19467) Make storage format configurable for temp tables created using LLAP external client
[ https://issues.apache.org/jira/browse/HIVE-19467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19467: -- Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Committed to master > Make storage format configurable for temp tables created using LLAP external > client > --- > > Key: HIVE-19467 > URL: https://issues.apache.org/jira/browse/HIVE-19467 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19467.1.patch > > > Temp tables created for complex queries when using the LLAP external client > are created using the default storage format. Default to orc, and make > configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18963) JDBC: Provide an option to simplify beeline usage by supporting default and named URL for beeline
[ https://issues.apache.org/jira/browse/HIVE-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18963: Release Note: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Usingbeeline-site.xmltoautomaticallyconnecttoHiveServer2 > JDBC: Provide an option to simplify beeline usage by supporting default and > named URL for beeline > - > > Key: HIVE-18963 > URL: https://issues.apache.org/jira/browse/HIVE-18963 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-18963.1.patch, HIVE-18963.2.patch, > HIVE-18963.3.patch > > > Currently, after opening Beeline CLI, the user needs to supply a connection > string to use the HS2 instance and set up the jdbc driver. Since we plan to > replace Hive CLI with Beeline in future (HIVE-10511), it will help the > usability if the user can simply type {{beeline}} and get start the hive > session. The jdbc url can be specified in a beeline-site.xml (which can > contain other named jdbc urls as well, and they can be accessed by something > like: {{beeline -c namedUrl}}. The use of beeline-site.xml can also be > potentially expanded later if needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
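The named-URL mechanism referenced in the release note above stores JDBC URLs in a beeline-site.xml file. A minimal sketch might look like the following; the alias name and host are placeholders, and the exact property-name convention should be checked against the linked wiki page:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Alias used when the user types just "beeline" -->
  <property>
    <name>beeline.hs2.jdbc.url.default</name>
    <value>prodCluster</value>
  </property>
  <!-- Named URL, selectable with: beeline -c prodCluster -->
  <property>
    <name>beeline.hs2.jdbc.url.prodCluster</name>
    <value>jdbc:hive2://hs2-host.example.com:10000/default</value>
  </property>
</configuration>
```

With such a file in place, `beeline` alone connects using the default alias, and `beeline -c <alias>` selects any other named URL.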
[jira] [Commented] (HIVE-19462) Fix mapping for char_length function to enable pushdown to Druid.
[ https://issues.apache.org/jira/browse/HIVE-19462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469776#comment-16469776 ] Hive QA commented on HIVE-19462: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 33s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2321 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10791/dev-support/hive-personality.sh | | git revision | master / 8ac6257 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10791/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10791/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Fix mapping for char_length function to enable pushdown to Druid. > -- > > Key: HIVE-19462 > URL: https://issues.apache.org/jira/browse/HIVE-19462 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19462.patch > > > currently char_length is not push down to Druid because of missing mapping > form/to calcite > This patch will add this mapping. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Deleted] (HIVE-19482) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez deleted HIVE-19482: --- > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19482 > URL: https://issues.apache.org/jira/browse/HIVE-19482 > Project: Hive > Issue Type: Bug >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > This can lead to a large number of cleaner objects depending on the number of > metastore clients. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19482) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19482: --- Attachment: HIVE-19482.patch > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19482 > URL: https://issues.apache.org/jira/browse/HIVE-19482 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19482.patch > > > This can lead to a large number of cleaner objects depending on the number of > metastore clients. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-19482) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-19482 started by Jesus Camacho Rodriguez. -- > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19482 > URL: https://issues.apache.org/jira/browse/HIVE-19482 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19482.patch > > > This can lead to a large number of cleaner objects depending on the number of > metastore clients. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19482) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19482: --- Status: Patch Available (was: In Progress) > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19482 > URL: https://issues.apache.org/jira/browse/HIVE-19482 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19482.patch > > > This can lead to a large number of cleaner objects depending on the number of > metastore clients. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19482) Metastore cleaner tasks that run periodically are created more than once
[ https://issues.apache.org/jira/browse/HIVE-19482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-19482: -- > Metastore cleaner tasks that run periodically are created more than once > > > Key: HIVE-19482 > URL: https://issues.apache.org/jira/browse/HIVE-19482 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19482.patch > > > This can lead to a large number of cleaner objects depending on the number of > metastore clients. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-12342) Set default value of hive.optimize.index.filter to true
[ https://issues.apache.org/jira/browse/HIVE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469760#comment-16469760 ] Hive QA commented on HIVE-12342: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922530/HIVE-12342.10.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10790/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10790/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10790/ Messages: {noformat} This message was trimmed, see log for full details error: a/ql/src/test/results/clientpositive/spark/union_remove_12.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/union_remove_13.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/union_remove_14.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/union_remove_19.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/union_remove_23.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/union_remove_25.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/union_top_level.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/union_view.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/vector_between_in.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/vector_decimal_mapjoin.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/vector_elt.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/vector_inner_join.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/spark/vector_mapjoin_reduce.q.out: does not exist in index error: 
[jira] [Commented] (HIVE-19467) Make storage format configurable for temp tables created using LLAP external client
[ https://issues.apache.org/jira/browse/HIVE-19467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469748#comment-16469748 ] Hive QA commented on HIVE-19467: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922536/HIVE-19467.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 41 failed/errored test(s), 13544 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95)
[jira] [Commented] (HIVE-19041) Thrift deserialization of Partition objects should intern fields
[ https://issues.apache.org/jira/browse/HIVE-19041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469733#comment-16469733 ] Misha Dmitriev commented on HIVE-19041: --- Agree - in our internal heap dump analysis, the above {{*Request}} classes don't show up at all as sources of duplicate strings. However, several class.field combinations that are not in this patch do show up. Namely: {{org.apache.hadoop.hive.metastore.model.MStorageDescriptor.inputFormat,outputFormat}} {{org.apache.hadoop.hive.metastore.model.MSerDeInfo.serializationLib}} {{org.apache.hadoop.hive.metastore.model.MPartition.partitionName,values}} They contribute relatively little overhead (about 3% together), but it's probably still worth interning them to be on the safe side. > Thrift deserialization of Partition objects should intern fields > > > Key: HIVE-19041 > URL: https://issues.apache.org/jira/browse/HIVE-19041 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-19041.01.patch, HIVE-19041.02.patch, > HIVE-19041.03.patch, HIVE-19041.04.patch > > > When a client is creating large number of partitions, the thrift objects are > deserialized into Partition objects. The read method of these objects does > not intern the inputformat, location, outputformat which cause large number > of duplicate Strings in the HMS memory. We should intern these objects while > deserialization to reduce memory pressure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
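[Editor's note] The interning being discussed can be illustrated with plain JDK strings. This is a minimal sketch, not Hive's actual metastore code: {{String.intern()}} collapses equal-but-distinct String objects (such as repeated input/output format names across thousands of partitions) into one canonical copy.

```java
public class InternDemo {
    // Build the same logical string twice at runtime; without intern()
    // these are distinct heap objects even though they are equal.
    public static String makeFormat() {
        return new StringBuilder("org.apache.hadoop.mapred.")
                .append("TextInputFormat").toString();
    }

    public static void main(String[] args) {
        String a = makeFormat();
        String b = makeFormat();
        System.out.println(a == b);                   // false: two heap copies
        System.out.println(a.intern() == b.intern()); // true: one canonical copy
    }
}
```

Calling {{intern()}} on the handful of highly repetitive fields (inputFormat, outputFormat, serializationLib, partition values) during deserialization is what keeps the HMS heap from filling with duplicates.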
[jira] [Updated] (HIVE-19382) Acquire locks before generating valid transaction list for some operations
[ https://issues.apache.org/jira/browse/HIVE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19382: --- Attachment: (was: HIVE-19382.01.patch) > Acquire locks before generating valid transaction list for some operations > -- > > Key: HIVE-19382 > URL: https://issues.apache.org/jira/browse/HIVE-19382 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19382.01.patch, HIVE-19382.02.patch, > HIVE-19382.patch > > > To ensure correctness, in particular for operations that require exclusive > ({{INSERT OVERWRITE}}) and semishared ({{UPDATE}}/{{DELETE}}) locks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19382) Acquire locks before generating valid transaction list for some operations
[ https://issues.apache.org/jira/browse/HIVE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19382: --- Attachment: HIVE-19382.01.patch > Acquire locks before generating valid transaction list for some operations > -- > > Key: HIVE-19382 > URL: https://issues.apache.org/jira/browse/HIVE-19382 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19382.01.patch, HIVE-19382.02.patch, > HIVE-19382.patch > > > To ensure correctness, in particular for operations that require exclusive > ({{INSERT OVERWRITE}}) and semishared ({{UPDATE}}/{{DELETE}}) locks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19382) Acquire locks before generating valid transaction list for some operations
[ https://issues.apache.org/jira/browse/HIVE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19382: --- Attachment: HIVE-19382.02.patch > Acquire locks before generating valid transaction list for some operations > -- > > Key: HIVE-19382 > URL: https://issues.apache.org/jira/browse/HIVE-19382 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19382.01.patch, HIVE-19382.02.patch, > HIVE-19382.patch > > > To ensure correctness, in particular for operations that require exclusive > ({{INSERT OVERWRITE}}) and semishared ({{UPDATE}}/{{DELETE}}) locks. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19481) sample10.q returns possibly wrong results for insert-only transactional table
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom updated HIVE-19481: -- Description: Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after changing the table to be insert-only transactional. The following queries return a couple of rows, whereas no rows are returned for a non-ACID table. query: select ds, count(1) from srcpartbucket tablesample (bucket 2 out of 4 on key) where ds is not null group by ds ORDER BY ds ASC 2008-04-08 14 2008-04-09 14 .. query: select ds, count(1) from srcpartbucket tablesample (bucket 1 out of 2 on key) where ds is not null group by ds ORDER BY ds ASC 2008-04-08 4 2008-04-09 4 was: Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after changing the table to be insert-only transactional. The following queries return a couple of rows, whereas no rows are returned for a non-ACID table. POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 2 out of 4 on key) where ds is not null group by ds ORDER BY ds ASC A masked pattern was here 2008-04-08 14 2008-04-09 14 POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 1 out of 2 on key) where ds is not null group by ds ORDER BY ds ASC A masked pattern was here 2008-04-08 4 2008-04-09 4 > sample10.q returns possibly wrong results for insert-only transactional table > - > > Key: HIVE-19481 > URL: https://issues.apache.org/jira/browse/HIVE-19481 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > > Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after > changing the table to be > insert-only transactional. > The following queries return a couple of rows, whereas no rows are returned > for a non-ACID table. 
> query: select ds, count(1) from srcpartbucket tablesample (bucket 2 out of 4 > on key) where ds is not null group by ds ORDER BY ds ASC > 2008-04-08 14 > 2008-04-09 14 > .. > query: select ds, count(1) from srcpartbucket tablesample (bucket 1 out of 2 > on key) where ds is not null group by ds ORDER BY ds ASC > 2008-04-08 4 > 2008-04-09 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
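[Editor's note] For context, the bucket-pruning arithmetic behind {{TABLESAMPLE (BUCKET x OUT OF y ON key)}} can be sketched as follows. This is a simplified illustration, not Hive's implementation: real Hive hashes the column via its ObjectInspector machinery and consults table bucketing metadata, whereas here a plain {{hashCode()}} stands in as an assumption.

```java
public class BucketSampleSketch {
    // Simplified: TABLESAMPLE (BUCKET x OUT OF y ON key) keeps a row when
    // its key hashes into bucket x-1 of y buckets. Plain hashCode() is an
    // assumption standing in for Hive's ObjectInspector-based hashing.
    public static boolean inSample(Object key, int x, int y) {
        return (key.hashCode() & Integer.MAX_VALUE) % y == x - 1;
    }

    public static void main(String[] args) {
        // Integer keys 0..7 with BUCKET 2 OUT OF 4: keys 1 and 5 survive,
        // since Integer.hashCode(k) == k and k % 4 == 1 only for those keys.
        for (int k = 0; k < 8; k++) {
            if (inSample(k, 2, 4)) {
                System.out.println(k);
            }
        }
    }
}
```

The bug under discussion is that the vectorized insert-only (MM) read path returns rows that this pruning should have filtered out, while the non-ACID table and the non-vectorized path return none.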
[jira] [Comment Edited] (HIVE-19481) sample10.q returns possibly wrong results for insert-only transactional table
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469683#comment-16469683 ] Steve Yeom edited comment on HIVE-19481 at 5/9/18 11:39 PM: The query on an insert-only transactional table returns no rows (i.e., the correct result) when vectorization is off. was (Author: steveyeom2017): The query on insert-only transactional table returns no rows when vectorization is off. > sample10.q returns possibly wrong results for insert-only transactional table > - > > Key: HIVE-19481 > URL: https://issues.apache.org/jira/browse/HIVE-19481 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > > Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after > changing the table to be > insert-only transactional. > The following queries return a couple of rows, whereas no rows are returned > for a non-ACID table. > POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 2 > out of 4 on key) where ds is not null group by ds ORDER BY ds ASC > A masked pattern was here > 2008-04-08 14 > 2008-04-09 14 > POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 1 > out of 2 on key) where ds is not null group by ds ORDER BY ds ASC > A masked pattern was here > 2008-04-08 4 > 2008-04-09 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19481) sample10.q returns possibly wrong results for insert-only transactional table
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469683#comment-16469683 ] Steve Yeom commented on HIVE-19481: --- The query on an insert-only transactional table returns no rows when vectorization is off. > sample10.q returns possibly wrong results for insert-only transactional table > - > > Key: HIVE-19481 > URL: https://issues.apache.org/jira/browse/HIVE-19481 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > > Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after > changing the table to be > insert-only transactional. > The following queries return a couple of rows, whereas no rows are returned > for a non-ACID table. > POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 2 > out of 4 on key) where ds is not null group by ds ORDER BY ds ASC > A masked pattern was here > 2008-04-08 14 > 2008-04-09 14 > POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 1 > out of 2 on key) where ds is not null group by ds ORDER BY ds ASC > A masked pattern was here > 2008-04-08 4 > 2008-04-09 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19467) Make storage format configurable for temp tables created using LLAP external client
[ https://issues.apache.org/jira/browse/HIVE-19467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469682#comment-16469682 ] Hive QA commented on HIVE-19467: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 37s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 16s{color} | {color:blue} ql in master has 2321 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s{color} | {color:red} ql: The patch generated 2 new + 12 unchanged - 0 fixed = 14 total (was 12) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 14s{color} | {color:red} The patch generated 1 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10789/dev-support/hive-personality.sh | | git revision | master / 8ac6257 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10789/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10789/yetus/patch-asflicense-problems.txt | | modules | C: common itests/hive-unit ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10789/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Make storage format configurable for temp tables created using LLAP external > client > --- > > Key: HIVE-19467 > URL: https://issues.apache.org/jira/browse/HIVE-19467 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19467.1.patch > > > Temp tables created for complex queries when using the LLAP external client > are created using the default storage format. Default to orc, and make > configurable. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19481) sample10.q returns possibly wrong results for insert-only transactional table
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom updated HIVE-19481: -- Description: Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after changing the table to be insert-only transactional. The following queries return a couple of rows, whereas no rows are returned for a non-ACID table. POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 2 out of 4 on key) where ds is not null group by ds ORDER BY ds ASC A masked pattern was here 2008-04-08 14 2008-04-09 14 POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 1 out of 2 on key) where ds is not null group by ds ORDER BY ds ASC A masked pattern was here 2008-04-08 4 2008-04-09 4 > sample10.q returns possibly wrong results for insert-only transactional table > - > > Key: HIVE-19481 > URL: https://issues.apache.org/jira/browse/HIVE-19481 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > > Ran "mvn test -Dtest=TestMiniLlapLocalCliDriver -Dqfile=sample10.q " after > changing the table to be > insert-only transactional. > The following queries return a couple of rows, whereas no rows are returned > for a non-ACID table. > POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 2 > out of 4 on key) where ds is not null group by ds ORDER BY ds ASC > A masked pattern was here > 2008-04-08 14 > 2008-04-09 14 > POSTHOOK: query: select ds, count(1) from srcpartbucket tablesample (bucket 1 > out of 2 on key) where ds is not null group by ds ORDER BY ds ASC > A masked pattern was here > 2008-04-08 4 > 2008-04-09 4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19041) Thrift deserialization of Partition objects should intern fields
[ https://issues.apache.org/jira/browse/HIVE-19041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469680#comment-16469680 ] Vihang Karajgaonkar commented on HIVE-19041: Talked with [~stakiar] offline. Removed {{AddPartitionsRequest}}, {{DropPartitionsRequest}}, {{AddDynamicPartitionsRequest}} and {{PartitionsStatsRequest}} from the patch since these are request objects and it is very unlikely that duplicate strings from these objects overwhelm the heap. The {{Partition}} list in these objects will, however, still be interned because of the changes to the Partition class in this patch. > Thrift deserialization of Partition objects should intern fields > > > Key: HIVE-19041 > URL: https://issues.apache.org/jira/browse/HIVE-19041 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-19041.01.patch, HIVE-19041.02.patch, > HIVE-19041.03.patch, HIVE-19041.04.patch > > > When a client is creating large number of partitions, the thrift objects are > deserialized into Partition objects. The read method of these objects does > not intern the inputformat, location, outputformat which cause large number > of duplicate Strings in the HMS memory. We should intern these objects while > deserialization to reduce memory pressure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19041) Thrift deserialization of Partition objects should intern fields
[ https://issues.apache.org/jira/browse/HIVE-19041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-19041: --- Attachment: HIVE-19041.04.patch > Thrift deserialization of Partition objects should intern fields > > > Key: HIVE-19041 > URL: https://issues.apache.org/jira/browse/HIVE-19041 > Project: Hive > Issue Type: Improvement > Components: Metastore >Affects Versions: 3.0.0, 2.3.2 >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-19041.01.patch, HIVE-19041.02.patch, > HIVE-19041.03.patch, HIVE-19041.04.patch > > > When a client is creating large number of partitions, the thrift objects are > deserialized into Partition objects. The read method of these objects does > not intern the inputformat, location, outputformat which cause large number > of duplicate Strings in the HMS memory. We should intern these objects while > deserialization to reduce memory pressure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)
[ https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469660#comment-16469660 ] Jason Dere commented on HIVE-19258: --- +1 pending tests > add originals support to MM tables (and make the conversion a metadata only > operation) > -- > > Key: HIVE-19258 > URL: https://issues.apache.org/jira/browse/HIVE-19258 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19258.01.patch, HIVE-19258.02.patch, > HIVE-19258.03.patch, HIVE-19258.04.patch, HIVE-19258.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Status: Patch Available (was: In Progress) > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch, > HIVE-19384.04.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
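[Editor's note] The "carefully handle NULLs" pattern quoted in the issue can be sketched outside Hive with a toy column vector. This is an illustrative stand-in for the real VectorizedRowBatch/ColumnVector classes, assuming only a values-array-plus-null-mask layout; it is not the actual fix in the patch.

```java
public class IfExprNullSketch {
    // Minimal stand-in for a vectorized column: values plus a null mask.
    public static class LongColVector {
        public long[] vector;
        public boolean[] isNull;
        public boolean noNulls = true;
        public LongColVector(int n) { vector = new long[n]; isNull = new boolean[n]; }
    }

    // IF(cond, thenCol, elseCol): copy the value and the null flag from
    // whichever branch is selected. Once any output entry can be NULL,
    // the batch must flip noNulls to false and keep isNull accurate per
    // row -- the pattern the quoted snippet alludes to.
    public static void ifExpr(boolean[] cond, LongColVector thenCol,
                              LongColVector elseCol, LongColVector out) {
        out.noNulls = false; // conservative: either branch may carry NULLs
        for (int i = 0; i < cond.length; i++) {
            LongColVector src = cond[i] ? thenCol : elseCol;
            out.isNull[i] = src.isNull[i];
            if (!src.isNull[i]) {
                out.vector[i] = src.vector[i];
            }
        }
    }
}
```

Failing to propagate the isNull flags (or leaving noNulls as true) is exactly the class of bug that makes downstream operators read stale values where a NULL belongs.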
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Status: In Progress (was: Patch Available) > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch, > HIVE-19384.04.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Attachment: HIVE-19384.04.patch > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch, > HIVE-19384.04.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19449) Create standalone jar for hive streaming module
[ https://issues.apache.org/jira/browse/HIVE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469639#comment-16469639 ] Prasanth Jayachandran commented on HIVE-19449: -- cc/ [~ewohlstadter] > Create standalone jar for hive streaming module > --- > > Key: HIVE-19449 > URL: https://issues.apache.org/jira/browse/HIVE-19449 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19449.1.patch > > > Hive streaming API depends on several hive modules (common, serde, ql, orc, > standalone-metastore etc). Users of the API has to include all the > dependencies in the classpath for it to work correctly. Provide a uber jar > with minimal set of dependencies that are required to make use of new > streaming API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19449) Create standalone jar for hive streaming module
[ https://issues.apache.org/jira/browse/HIVE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19449: - Summary: Create standalone jar for hive streaming module (was: Create minimized uber jar for hive streaming module) > Create standalone jar for hive streaming module > --- > > Key: HIVE-19449 > URL: https://issues.apache.org/jira/browse/HIVE-19449 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19449.1.patch > > > Hive streaming API depends on several hive modules (common, serde, ql, orc, > standalone-metastore etc). Users of the API has to include all the > dependencies in the classpath for it to work correctly. Provide a uber jar > with minimal set of dependencies that are required to make use of new > streaming API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19449) Create minimized uber jar for hive streaming module
[ https://issues.apache.org/jira/browse/HIVE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469636#comment-16469636 ] Prasanth Jayachandran commented on HIVE-19449: -- This will let clients use hive-streaming--standalone.jar without requiring any other jars. > Create minimized uber jar for hive streaming module > --- > > Key: HIVE-19449 > URL: https://issues.apache.org/jira/browse/HIVE-19449 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19449.1.patch > > > Hive streaming API depends on several hive modules (common, serde, ql, orc, > standalone-metastore etc). Users of the API has to include all the > dependencies in the classpath for it to work correctly. Provide a uber jar > with minimal set of dependencies that are required to make use of new > streaming API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
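[Editor's note] One common way to produce such a minimized, classifier-suffixed standalone jar is the maven-shade-plugin. The stanza below is a sketch under that assumption only; the actual HIVE-19449 patch may configure the build differently (relocations, explicit artifact filters, etc.).

```xml
<!-- Hypothetical sketch: attach a minimized "standalone" uber jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <!-- Drop classes no code path reaches, shrinking the jar. -->
        <minimizeJar>true</minimizeJar>
        <!-- Keep the regular jar; attach the uber jar with a classifier. -->
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>standalone</shadedClassifierName>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this shape, clients depend on the `standalone`-classified artifact and get the streaming API plus its transitive Hive dependencies in a single jar.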
[jira] [Commented] (HIVE-19449) Create minimized uber jar for hive streaming module
[ https://issues.apache.org/jira/browse/HIVE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469637#comment-16469637 ] Prasanth Jayachandran commented on HIVE-19449: -- [~ekoifman]/[~sershe] can someone please take a look? > Create minimized uber jar for hive streaming module > --- > > Key: HIVE-19449 > URL: https://issues.apache.org/jira/browse/HIVE-19449 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19449.1.patch > > > Hive streaming API depends on several hive modules (common, serde, ql, orc, > standalone-metastore etc). Users of the API has to include all the > dependencies in the classpath for it to work correctly. Provide a uber jar > with minimal set of dependencies that are required to make use of new > streaming API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19449) Create minimized uber jar for hive streaming module
[ https://issues.apache.org/jira/browse/HIVE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19449: - Status: Patch Available (was: Open) > Create minimized uber jar for hive streaming module > --- > > Key: HIVE-19449 > URL: https://issues.apache.org/jira/browse/HIVE-19449 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19449.1.patch > > > Hive streaming API depends on several hive modules (common, serde, ql, orc, > standalone-metastore etc). Users of the API has to include all the > dependencies in the classpath for it to work correctly. Provide a uber jar > with minimal set of dependencies that are required to make use of new > streaming API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19449) Create minimized uber jar for hive streaming module
[ https://issues.apache.org/jira/browse/HIVE-19449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19449: - Attachment: HIVE-19449.1.patch > Create minimized uber jar for hive streaming module > --- > > Key: HIVE-19449 > URL: https://issues.apache.org/jira/browse/HIVE-19449 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19449.1.patch > > > Hive streaming API depends on several hive modules (common, serde, ql, orc, > standalone-metastore etc). Users of the API has to include all the > dependencies in the classpath for it to work correctly. Provide a uber jar > with minimal set of dependencies that are required to make use of new > streaming API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19466) Update constraint violation error message
[ https://issues.apache.org/jira/browse/HIVE-19466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469635#comment-16469635 ] Hive QA commented on HIVE-19466: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922535/HIVE-19466.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 43 failed/errored test(s), 13543 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95)
[jira] [Commented] (HIVE-19466) Update constraint violation error message
[ https://issues.apache.org/jira/browse/HIVE-19466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469559#comment-16469559 ] Hive QA commented on HIVE-19466: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 55s{color} | {color:blue} ql in master has 2321 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10788/dev-support/hive-personality.sh | | git revision | master / 8ac6257 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10788/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10788/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Update constraint violation error message > - > > Key: HIVE-19466 > URL: https://issues.apache.org/jira/browse/HIVE-19466 > Project: Hive > Issue Type: Improvement >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19466.1.patch > > > Currently, for both CHECK and NOT NULL constraint violations, Hive throws {{NOT > NULL Constraint violated}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
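The distinction HIVE-19466 asks for can be illustrated with a short HiveQL sketch. This is a hypothetical example — the table and column names are invented, and the exact constraint DDL syntax (e.g. the ENABLE clause) may vary across Hive 3.x versions:

```sql
-- Hypothetical example; names and exact constraint syntax are assumptions.
CREATE TABLE orders (
  id  INT NOT NULL ENABLE,          -- NOT NULL constraint
  qty INT CHECK (qty > 0) ENABLE    -- CHECK constraint
);

-- Both inserts below fail, but Hive currently reports the same
-- "NOT NULL Constraint violated" message for each:
INSERT INTO orders VALUES (NULL, 5);  -- violates NOT NULL
INSERT INTO orders VALUES (1, -3);    -- violates CHECK; the message
                                      -- should name the CHECK constraint
```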
[jira] [Assigned] (HIVE-19481) sample10.q returns possibly wrong results for insert-only transactional table
[ https://issues.apache.org/jira/browse/HIVE-19481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom reassigned HIVE-19481: - Assignee: Steve Yeom > sample10.q returns possibly wrong results for insert-only transactional table > - > > Key: HIVE-19481 > URL: https://issues.apache.org/jira/browse/HIVE-19481 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.1.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19464) Upgrade Parquet to 1.10.0
[ https://issues.apache.org/jira/browse/HIVE-19464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469546#comment-16469546 ] Hive QA commented on HIVE-19464: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 19s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 0s{color} | {color:blue} ql in master has 2321 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 28s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 53s{color} | {color:red} ql in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 53s{color} | {color:red} ql in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s{color} | {color:red} ql in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 12m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10787/dev-support/hive-personality.sh | | git revision | master / 8ac6257 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-10787/yetus/patch-compile-ql.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-10787/yetus/patch-compile-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-10787/yetus/patch-findbugs-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10787/yetus/patch-asflicense-problems.txt | | modules | C: . ql U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10787/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Upgrade Parquet to 1.10.0 > - > > Key: HIVE-19464 > URL: https://issues.apache.org/jira/browse/HIVE-19464 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19464.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19480) Implement and Incorporate MAPREDUCE-207
[ https://issues.apache.org/jira/browse/HIVE-19480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-19480: --- Affects Version/s: (was: 3.0.0) 1.2.3 > Implement and Incorporate MAPREDUCE-207 > --- > > Key: HIVE-19480 > URL: https://issues.apache.org/jira/browse/HIVE-19480 > Project: Hive > Issue Type: New Feature > Components: HiveServer2 >Affects Versions: 1.2.3 >Reporter: BELUGA BEHR >Priority: Major > > * HiveServer2 has the ability to run many MapReduce jobs in parallel. > * Each MapReduce application calculates the job's file splits at the client > level > * = HiveServer2 loading many file splits at the same time, putting pressure > on memory > {quote}"The client running the job calculates the splits for the job by > calling getSplits(), then sends them to the application master, which uses > their storage locations to schedule map tasks that will process them on the > cluster." > - "Hadoop: The Definitive Guide"{quote} > MAPREDUCE-207 should address this memory pressure by moving split > calculations into ApplicationMaster. Spark and Tez already take this approach. > Once MAPREDUCE-207 is completed, leverage the capability in HiveServer2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19480) Implement and Incorporate MAPREDUCE-207
[ https://issues.apache.org/jira/browse/HIVE-19480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469539#comment-16469539 ] Gopal V commented on HIVE-19480: From the HiveConf in Hive2 and 3. {code} While MR remains the default engine for historical reasons, it is itself a historical engine and is deprecated in Hive 2 line. It may be removed without further warning. {code} > Implement and Incorporate MAPREDUCE-207 > --- > > Key: HIVE-19480 > URL: https://issues.apache.org/jira/browse/HIVE-19480 > Project: Hive > Issue Type: New Feature > Components: HiveServer2 >Affects Versions: 1.2.3 >Reporter: BELUGA BEHR >Priority: Major > > * HiveServer2 has the ability to run many MapReduce jobs in parallel. > * Each MapReduce application calculates the job's file splits at the client > level > * = HiveServer2 loading many file splits at the same time, putting pressure > on memory > {quote}"The client running the job calculates the splits for the job by > calling getSplits(), then sends them to the application master, which uses > their storage locations to schedule map tasks that will process them on the > cluster." > - "Hadoop: The Definitive Guide"{quote} > MAPREDUCE-207 should address this memory pressure by moving split > calculations into ApplicationMaster. Spark and Tez already take this approach. > Once MAPREDUCE-207 is completed, leverage the capability in HiveServer2. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19479) encoded stream seek is incorrect for 0-length RGs in LLAP IO
[ https://issues.apache.org/jira/browse/HIVE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469536#comment-16469536 ] Prasanth Jayachandran commented on HIVE-19479: -- Since we are removing positions only for the isPresent stream, advancing positions before seek for all other streams (introduced by this patch) should be OK, I guess. +1, pending tests. > encoded stream seek is incorrect for 0-length RGs in LLAP IO > > > Key: HIVE-19479 > URL: https://issues.apache.org/jira/browse/HIVE-19479 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19479.01.patch, HIVE-19479.patch > > > The PositionProvider offset is not updated correctly and an error like this > may happen: > {noformat} > Caused by: java.lang.IllegalArgumentException: Seek in LENGTH to 541 is > outside of the data > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:161) > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:123) > at > org.apache.orc.impl.RunLengthIntegerReaderV2.seek(RunLengthIntegerReaderV2.java:331) > at > org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:298) > at > org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:258) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.repositionInStreams(OrcEncodedDataConsumer.java:250) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:134) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:62) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19464) Upgrade Parquet to 1.10.0
[ https://issues.apache.org/jira/browse/HIVE-19464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469523#comment-16469523 ] Hive QA commented on HIVE-19464: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922522/HIVE-19464.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10787/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10787/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10787/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: java.io.IOException: Could not create /data/hiveptest/logs/PreCommit-HIVE-Build-10787/succeeded/223_UTBatch_service_6_tests {noformat} This message is automatically generated. ATTACHMENT ID: 12922522 - PreCommit-HIVE-Build > Upgrade Parquet to 1.10.0 > - > > Key: HIVE-19464 > URL: https://issues.apache.org/jira/browse/HIVE-19464 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19464.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19479) encoded stream seek is incorrect for 0-length RGs in LLAP IO
[ https://issues.apache.org/jira/browse/HIVE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19479: Attachment: HIVE-19479.01.patch > encoded stream seek is incorrect for 0-length RGs in LLAP IO > > > Key: HIVE-19479 > URL: https://issues.apache.org/jira/browse/HIVE-19479 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19479.01.patch, HIVE-19479.patch > > > The PositionProvider offset is not updated correctly and an error like this > may happen: > {noformat} > Caused by: java.lang.IllegalArgumentException: Seek in LENGTH to 541 is > outside of the data > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:161) > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:123) > at > org.apache.orc.impl.RunLengthIntegerReaderV2.seek(RunLengthIntegerReaderV2.java:331) > at > org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:298) > at > org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:258) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.repositionInStreams(OrcEncodedDataConsumer.java:250) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:134) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:62) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-14388) Add number of rows inserted message after insert command in Beeline
[ https://issues.apache.org/jira/browse/HIVE-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-14388: Attachment: HIVE-14388.09.patch > Add number of rows inserted message after insert command in Beeline > --- > > Key: HIVE-14388 > URL: https://issues.apache.org/jira/browse/HIVE-14388 > Project: Hive > Issue Type: Improvement > Components: Beeline >Reporter: Vihang Karajgaonkar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Minor > Attachments: HIVE-14388-WIP.patch, HIVE-14388.02.patch, > HIVE-14388.03.patch, HIVE-14388.05.patch, HIVE-14388.06.patch, > HIVE-14388.07.patch, HIVE-14388.08.patch, HIVE-14388.09.patch > > > Currently, when you run insert command on beeline, it returns a message > saying "No rows affected .." > A better and more intuitive msg would be "xxx rows inserted (26.068 seconds)" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19347) TestTriggersWorkloadManager tests are failing consistently
[ https://issues.apache.org/jira/browse/HIVE-19347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19347: Attachment: HIVE-19347.02.patch > TestTriggersWorkloadManager tests are failing consistently > -- > > Key: HIVE-19347 > URL: https://issues.apache.org/jira/browse/HIVE-19347 > Project: Hive > Issue Type: Sub-task >Reporter: Vineet Garg >Assignee: Matt McCline >Priority: Blocker > Attachments: HIVE-19347.01.patch, HIVE-19347.02.patch > > > Caused by the patch which turned on vectorization. Following tests are > failing due to the patch: > * org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData > * > org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] > * > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] > * org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite > 10 sec 14 > * org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps > 7.7 sec 14 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles > 15 sec 14 > * org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 > 17 sec 14 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime > 1.5 sec 14 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill > 20 sec 18 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes > 1.4 sec 18 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent > 2.6 sec 18 > * org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead > {noformat} > Error Message > Expected query to succeed expected null, but was: Error while processing statement: FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.tez.TezTask. 
Vertex failed, vertexName=Map 3, > vertexId=vertex_1524884047358_0001_21_01, diagnostics=[Task failed, > taskId=task_1524884047358_0001_21_01_00, diagnostics=[TaskAttempt 0 > failed, info=[Error: Error while running task ( failure ) : > attempt_1524884047358_0001_21_01_00_0:java.lang.RuntimeException: > org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: > java.io.IOException: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) > at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) > at > org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) > at > org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) > at > org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) > at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) > at > org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.io.IOException: java.io.IOException: java.lang.NullPointerException > at > 
org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:80) > at > org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:419) > at > org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267) > ... 15 more > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (HIVE-19477) Hiveserver2 in http mode not emitting metric default.General.open_connections
[ https://issues.apache.org/jira/browse/HIVE-19477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HIVE-19477: - Comment: was deleted (was: [~jcamachorodriguez] - Thanks for opening this jira.) > Hiveserver2 in http mode not emitting metric default.General.open_connections > - > > Key: HIVE-19477 > URL: https://issues.apache.org/jira/browse/HIVE-19477 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Dinesh Chitlangia >Assignee: Jesus Camacho Rodriguez >Priority: Minor > Attachments: HIVE-19477.01.patch, HIVE-19477.patch > > > Instances in binary mode are emitting the metric > _default.General.open_connections_ but the instances operating in http mode > are not emitting this metric. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19135: --- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, > HIVE-19135.4-branch-3.patch, HIVE-19135.4.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469444#comment-16469444 ] Vineet Garg commented on HIVE-19135: Pushed to branch-3. Thanks Alan > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, > HIVE-19135.4-branch-3.patch, HIVE-19135.4.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19135) Need tool to allow admins to create catalogs and move existing dbs to catalog during upgrade
[ https://issues.apache.org/jira/browse/HIVE-19135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469441#comment-16469441 ] Vineet Garg commented on HIVE-19135: Sorry I missed this before. +1 for branch-3 > Need tool to allow admins to create catalogs and move existing dbs to catalog > during upgrade > > > Key: HIVE-19135 > URL: https://issues.apache.org/jira/browse/HIVE-19135 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19135.2.patch, HIVE-19135.3.patch, > HIVE-19135.4-branch-3.patch, HIVE-19135.4.patch, HIVE19135.patch > > > As part of upgrading to Hive 3 admins may wish to create new catalogs and > move some existing databases into those catalogs. We can do this by adding > options to schematool. This guarantees that only admins can do these > operations. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id
[ https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469437#comment-16469437 ] Eugene Koifman commented on HIVE-18193: --- LGTM > Migrate existing ACID tables to use write id per table rather than global > transaction id > > > Key: HIVE-18193 > URL: https://issues.apache.org/jira/browse/HIVE-18193 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: Sankar Hariappan >Priority: Blocker > Labels: ACID, Upgrade > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch > > > dependent upon HIVE-18192 > For existing ACID Tables we need to update the table level write id > metatables/sequences so any new operations on these tables works seamlessly > without any conflicting data in existing base/delta files. > 1. Need to create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID. > 2. Add entries for each ACID/MM tables into NEXT_WRITE_ID where NWI_NEXT is > set to current value of NEXT_TXN_ID.NTXN_NEXT. > 3. All current open/abort transactions to have an entry in TXN_TO_WRITE_ID > such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId. > 4. Added new column TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in > COMPLETED_TXN_COMPONENTS to store the write id which should be set as > respective values of TC_TXNID and CTC_TXNID from the same row. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
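The four migration steps in the HIVE-18193 description can be sketched as metastore-schema SQL. This is an illustrative outline only — the column types, the logic for selecting ACID/MM tables, and the angle-bracket placeholders are assumptions, not the committed upgrade scripts:

```sql
-- Illustrative sketch of the migration steps above; the real upgrade
-- scripts may differ (column types here are assumptions).

-- 1. New metadata tables.
CREATE TABLE NEXT_WRITE_ID (
  NWI_DATABASE VARCHAR(128) NOT NULL,
  NWI_TABLE    VARCHAR(256) NOT NULL,
  NWI_NEXT     BIGINT       NOT NULL
);
CREATE TABLE TXN_TO_WRITE_ID (
  T2W_TXNID    BIGINT       NOT NULL,
  T2W_DATABASE VARCHAR(128) NOT NULL,
  T2W_TABLE    VARCHAR(256) NOT NULL,
  T2W_WRITEID  BIGINT       NOT NULL
);

-- 2. Seed NWI_NEXT for each ACID/MM table from the global txn counter.
--    (Identifying ACID/MM tables from the table metadata is elided.)
INSERT INTO NEXT_WRITE_ID (NWI_DATABASE, NWI_TABLE, NWI_NEXT)
SELECT <database>, <table>, (SELECT NTXN_NEXT FROM NEXT_TXN_ID)
FROM <acid_and_mm_tables>;

-- 3. Give every currently open/aborted txn an entry whose write id
--    equals its txn id (T2W_TXNID = T2W_WRITEID).
INSERT INTO TXN_TO_WRITE_ID (T2W_TXNID, T2W_DATABASE, T2W_TABLE, T2W_WRITEID)
SELECT <txn_id>, <database>, <table>, <txn_id>
FROM <open_and_aborted_txns>;

-- 4. New write-id columns, copied from the txn id in the same row.
ALTER TABLE TXN_COMPONENTS ADD TC_WRITEID BIGINT;
UPDATE TXN_COMPONENTS SET TC_WRITEID = TC_TXNID;
ALTER TABLE COMPLETED_TXN_COMPONENTS ADD CTC_WRITEID BIGINT;
UPDATE COMPLETED_TXN_COMPONENTS SET CTC_WRITEID = CTC_TXNID;
```

With this seeding, new per-table write ids start exactly where the old global txn counter left off, so existing base/delta directory names never conflict with new ones.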
[jira] [Commented] (HIVE-19347) TestTriggersWorkloadManager tests are failing consistently
[ https://issues.apache.org/jira/browse/HIVE-19347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469425#comment-16469425 ] Hive QA commented on HIVE-19347: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922514/HIVE-19347.01.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10785/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10785/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10785/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: org.apache.hive.ptest.execution.ssh.SSHExecutionException: RSyncResult [localFile=/data/hiveptest/logs/PreCommit-HIVE-Build-10785/succeeded/253_UTBatch_itests__qtest_5_tests, remoteFile=/home/hiveptest/35.188.142.251-hiveptest-0/logs/, getExitCode()=11, getException()=null, getUser()=hiveptest, getHost()=35.188.142.251, getInstance()=0]: 'Warning: Permanently added '35.188.142.251' (ECDSA) to the list of known hosts. 
receiving incremental file list ./ TEST-253_UTBatch_itests__qtest_5_tests-TEST-org.apache.hadoop.hive.cli.TestBeeLineDriver.xml 0 0%0.00kB/s0:00:00 118,479 100%3.23MB/s0:00:00 (xfr#1, to-chk=12/14) TEST-253_UTBatch_itests__qtest_5_tests-TEST-org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.xml 0 0%0.00kB/s0:00:00 117,310 100%1.58MB/s0:00:00 (xfr#2, to-chk=11/14) TEST-253_UTBatch_itests__qtest_5_tests-TEST-org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.xml 0 0%0.00kB/s0:00:00 116,259 100%1.56MB/s0:00:00 (xfr#3, to-chk=10/14) TEST-253_UTBatch_itests__qtest_5_tests-TEST-org.apache.hadoop.hive.cli.TestTezPerfCliDriver.xml 0 0%0.00kB/s0:00:00 127,447 100%1.16MB/s0:00:00 (xfr#4, to-chk=9/14) TEST-253_UTBatch_itests__qtest_5_tests-TEST-org.apache.hive.TestDummy.xml 0 0%0.00kB/s0:00:00 116,171 100%1.05MB/s0:00:00 (xfr#5, to-chk=8/14) maven-test.txt 0 0%0.00kB/s0:00:00 59,059 100% 544.10kB/s0:00:00 (xfr#6, to-chk=7/14) logs/ logs/derby.log 0 0%0.00kB/s0:00:00 39,604 100% 361.46kB/s0:00:00 (xfr#7, to-chk=4/14) logs/derby.log-55c94a8c-b830-4ac1-98d5-3821793deae1 0 0%0.00kB/s0:00:00 1,155 100% 10.54kB/s0:00:00 (xfr#8, to-chk=3/14) logs/hive.log 0 0%0.00kB/s0:00:00 45,154,304 25% 42.98MB/s0:00:02 101,711,872 57% 48.48MB/s0:00:01 rsync: write failed on "/data/hiveptest/logs/PreCommit-HIVE-Build-10785/succeeded/253_UTBatch_itests__qtest_5_tests/logs/hive.log": No space left on device (28) rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1] Warning: Permanently added '35.188.142.251' (ECDSA) to the list of known hosts. receiving incremental file list logs/ logs/hive.log 0 0%0.00kB/s0:00:00 rsync: write failed on "/data/hiveptest/logs/PreCommit-HIVE-Build-10785/succeeded/253_UTBatch_itests__qtest_5_tests/logs/hive.log": No space left on device (28) rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1] Warning: Permanently added '35.188.142.251' (ECDSA) to the list of known hosts. 
receiving incremental file list logs/ logs/hive.log 0 0%0.00kB/s0:00:00 rsync: write failed on "/data/hiveptest/logs/PreCommit-HIVE-Build-10785/succeeded/253_UTBatch_itests__qtest_5_tests/logs/hive.log": No space left on device (28) rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1] Warning: Permanently added '35.188.142.251' (ECDSA) to the list of known hosts. receiving incremental file list logs/ logs/hive.log 0 0%0.00kB/s0:00:00 rsync: write failed on "/data/hiveptest/logs/PreCommit-HIVE-Build-10785/succeeded/253_UTBatch_itests__qtest_5_tests/logs/hive.log": No space left on device (28) rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1] Warning: Permanently added '35.188.142.251' (ECDSA) to the list of known hosts. receiving incremental file list logs/ logs/hive.log 0 0%0.00kB/s0:00:00 rsync: write failed on "/data/hiveptest/logs/PreCommit-HIVE-Build-10785/succeeded/253_UTBatch_itests__qtest_5_tests/logs/hive.log": No space left on device (28) rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1] ' {noformat} This message is automatically
[jira] [Commented] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs
[ https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469407#comment-16469407 ] Sahil Takiar commented on HIVE-18533: - [~lirui] attached an updated patch that uses {{FutureTask}} instead of a custom {{Future}}. Agree that this makes the code simpler. > Add option to use InProcessLauncher to submit spark jobs > > > Key: HIVE-18533 > URL: https://issues.apache.org/jira/browse/HIVE-18533 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18533.1.patch, HIVE-18533.2.patch, > HIVE-18533.3.patch, HIVE-18533.4.patch, HIVE-18533.5.patch, > HIVE-18533.6.patch, HIVE-18533.7.patch, HIVE-18533.8.patch, > HIVE-18533.9.patch, HIVE-18533.91.patch, HIVE-18533.94.patch, > HIVE-18831.93.patch > > > See discussion in HIVE-16484 for details. > I think this will help with reducing the amount of time it takes to open a > HoS session + debuggability (no need launch a separate process to run a Spark > app). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19389) Schematool: For Hive's Information Schema, use embedded HS2 as default
[ https://issues.apache.org/jira/browse/HIVE-19389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-19389: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Committed. Thanks [~daijy] > Schematool: For Hive's Information Schema, use embedded HS2 as default > -- > > Key: HIVE-19389 > URL: https://issues.apache.org/jira/browse/HIVE-19389 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0, 3.1.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19389.1.patch, HIVE-19389.2.patch, > HIVE-19389.2.patch > > > Currently, for initializing/upgrading Hive's information schema, we require a > full jdbc url (for HS2). It will be good to have it connect using embedded > HS2 by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18533) Add option to use InProcessLauncher to submit spark jobs
[ https://issues.apache.org/jira/browse/HIVE-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18533: Attachment: HIVE-18533.94.patch > Add option to use InProcessLauncher to submit spark jobs > > > Key: HIVE-18533 > URL: https://issues.apache.org/jira/browse/HIVE-18533 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18533.1.patch, HIVE-18533.2.patch, > HIVE-18533.3.patch, HIVE-18533.4.patch, HIVE-18533.5.patch, > HIVE-18533.6.patch, HIVE-18533.7.patch, HIVE-18533.8.patch, > HIVE-18533.9.patch, HIVE-18533.91.patch, HIVE-18533.94.patch, > HIVE-18831.93.patch > > > See discussion in HIVE-16484 for details. > I think this will help with reducing the amount of time it takes to open a > HoS session + debuggability (no need to launch a separate process to run a Spark > app). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19477) Hiveserver2 in http mode not emitting metric default.General.open_connections
[ https://issues.apache.org/jira/browse/HIVE-19477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469393#comment-16469393 ] Vaibhav Gumashta commented on HIVE-19477: - +1 > Hiveserver2 in http mode not emitting metric default.General.open_connections > - > > Key: HIVE-19477 > URL: https://issues.apache.org/jira/browse/HIVE-19477 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Dinesh Chitlangia >Assignee: Jesus Camacho Rodriguez >Priority: Minor > Attachments: HIVE-19477.01.patch, HIVE-19477.patch > > > Instances in binary mode are emitting the metric > _default.General.open_connections_ but the instances operating in http mode > are not emitting this metric. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19433) HiveJoinPushTransitivePredicatesRule hangs
[ https://issues.apache.org/jira/browse/HIVE-19433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469392#comment-16469392 ] Jesus Camacho Rodriguez commented on HIVE-19433: +1 pending tests > HiveJoinPushTransitivePredicatesRule hangs > -- > > Key: HIVE-19433 > URL: https://issues.apache.org/jira/browse/HIVE-19433 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19433.1.patch, HIVE-19433.2.patch > > > *Reproducer* > {code:sql} > CREATE TABLE `table1`( >`idp_warehouse_id` bigint, >`idp_audit_id` bigint, >`idp_effective_date` date, >`idp_end_date` date, >`idp_delete_date` date, >`pruid` varchar(32), >`prid` bigint, >`prtimesheetid` bigint, >`prassignmentid` bigint, >`prchargecodeid` bigint, >`prtypecodeid` bigint, >`prsequence` bigint, >`prmodby` varchar(96), >`prmodtime` timestamp, >`prrmexported` bigint, >`prrmckdel` bigint, >`slice_status` int, >`role_id` bigint, >`user_lov1` varchar(30), >`user_lov2` varchar(30), >`incident_id` bigint, >`incident_investment_id` bigint, >`odf_ss_actuals` bigint, >`practsum` decimal(38,20)); > CREATE TABLE `table2`( >`idp_warehouse_id` bigint, >`idp_audit_id` bigint, >`idp_effective_date` date, >`idp_end_date` date, >`idp_delete_date` date, >`pruid` varchar(32), >`prid` bigint, >`prtimesheetid` bigint, >`prassignmentid` bigint, >`prchargecodeid` bigint, >`prtypecodeid` bigint, >`prsequence` bigint, >`prmodby` varchar(96), >`prmodtime` timestamp, >`prrmexported` bigint, >`prrmckdel` bigint, >`slice_status` int, >`role_id` bigint, >`user_lov1` varchar(30), >`user_lov2` varchar(30), >`incident_id` bigint, >`incident_investment_id` bigint, >`odf_ss_actuals` bigint, >`practsum` decimal(38,20)); > explain SELECT s.idp_warehouse_id AS source_warehouse_id > FROMtable1 s > JOIN >table2 d > ON ( > s.prid = d.prid ) > JOIN > table2 e > ON > s.prid = e.prid > WHERE > concat( > CASE > WHEN s.prid IS NULL THEN 1 > ELSE 
s.prid > END,',', > CASE > WHEN s.prtimesheetid IS NULL THEN 1 > ELSE s.prtimesheetid > END,',', > CASE > WHEN s.prassignmentid IS NULL THEN 1 > ELSE s.prassignmentid > END,',', > CASE > WHEN s.prchargecodeid IS NULL THEN 1 > ELSE s.prchargecodeid > END,',', > CASE > WHEN (s.prtypecodeid) IS NULL THEN '' > ELSE s.prtypecodeid > END,',', > CASE > WHEN s.practsum IS NULL THEN 1 > ELSE s.practsum > END,',', > CASE > WHEN s.prsequence IS NULL THEN 1 > ELSE s.prsequence > END,',', > CASE > WHEN length(s.prmodby) IS NULL THEN '' > ELSE s.prmodby > END,',', > CASE > WHEN s.prmodtime IS NULL THEN > cast(from_unixtime(unix_timestamp('2017-12-08','-MM-dd') ) AS timestamp) > ELSE s.prmodtime > END,',', > CASE > WHEN s.prrmexported IS NULL THEN 1 > ELSE s.prrmexported > END,',', > CASE > WHEN s.prrmckdel IS NULL THEN 1 > ELSE s.prrmckdel > END,',', > CASE > WHEN s.slice_status IS NULL THEN 1 >
[jira] [Updated] (HIVE-19477) Hiveserver2 in http mode not emitting metric default.General.open_connections
[ https://issues.apache.org/jira/browse/HIVE-19477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19477: --- Attachment: HIVE-19477.01.patch > Hiveserver2 in http mode not emitting metric default.General.open_connections > - > > Key: HIVE-19477 > URL: https://issues.apache.org/jira/browse/HIVE-19477 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Dinesh Chitlangia >Assignee: Jesus Camacho Rodriguez >Priority: Minor > Attachments: HIVE-19477.01.patch, HIVE-19477.patch > > > Instances in binary mode are emitting the metric > _default.General.open_connections_ but the instances operating in http mode > are not emitting this metric. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19374) Parse and process ALTER TABLE SET OWNER command syntax
[ https://issues.apache.org/jira/browse/HIVE-19374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469363#comment-16469363 ] Sergio Peña commented on HIVE-19374: Thanks, I will make sure to add the missing header file once the tests pass. > Parse and process ALTER TABLE SET OWNER command syntax > -- > > Key: HIVE-19374 > URL: https://issues.apache.org/jira/browse/HIVE-19374 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Sergio Peña >Assignee: Sergio Peña >Priority: Major > Attachments: HIVE-19374.1.patch, HIVE-19374.2.patch > > > Subtask that parses the new alter table set owner syntax and implements code > to call HMS to change the owner of a table to a user or a role. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19471) bucket_map_join_tez1 and bucket_map_join_tez2 are failing
[ https://issues.apache.org/jira/browse/HIVE-19471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469361#comment-16469361 ] Vineet Garg commented on HIVE-19471: +1 pending tests for branch-3. > bucket_map_join_tez1 and bucket_map_join_tez2 are failing > -- > > Key: HIVE-19471 > URL: https://issues.apache.org/jira/browse/HIVE-19471 > Project: Hive > Issue Type: Sub-task >Reporter: Vineet Garg >Assignee: Deepak Jaiswal >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19471.01-branch-3.patch, HIVE-19471.1.patch > > > https://builds.apache.org/job/PreCommit-HIVE-Build/10766/testReport/ > TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] > TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2] > Both are failing. Probably need golden file update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19347) TestTriggersWorkloadManager tests are failing consistently
[ https://issues.apache.org/jira/browse/HIVE-19347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469358#comment-16469358 ] Hive QA commented on HIVE-19347: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 56s{color} | {color:blue} ql in master has 2321 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s{color} | {color:red} ql: The patch generated 2 new + 20 unchanged - 1 fixed = 22 total (was 21) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10785/dev-support/hive-personality.sh | | git revision | master / 72eff12 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10785/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10785/yetus/patch-asflicense-problems.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10785/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > TestTriggersWorkloadManager tests are failing consistently > -- > > Key: HIVE-19347 > URL: https://issues.apache.org/jira/browse/HIVE-19347 > Project: Hive > Issue Type: Sub-task >Reporter: Vineet Garg >Assignee: Matt McCline >Priority: Blocker > Attachments: HIVE-19347.01.patch > > > Caused by the patch which turned on vectorization. 
Following tests are > failing due to the patch: > * org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData > * > org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1] > * > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] > * org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite > 10 sec 14 > * org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps > 7.7 sec 14 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles > 15 sec 14 > * org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 > 17 sec 14 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime > 1.5 sec 14 > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime > * > org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill >
[jira] [Commented] (HIVE-19477) Hiveserver2 in http mode not emitting metric default.General.open_connections
[ https://issues.apache.org/jira/browse/HIVE-19477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469359#comment-16469359 ] Vaibhav Gumashta commented on HIVE-19477: - [~jcamachorodriguez] Patch looks good, but the LOG message should be changed to reflect that we are capturing metrics for HS2 open/close connections and not JDO. > Hiveserver2 in http mode not emitting metric default.General.open_connections > - > > Key: HIVE-19477 > URL: https://issues.apache.org/jira/browse/HIVE-19477 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Dinesh Chitlangia >Assignee: Jesus Camacho Rodriguez >Priority: Minor > Attachments: HIVE-19477.patch > > > Instances in binary mode are emitting the metric > _default.General.open_connections_ but the instances operating in http mode > are not emitting this metric. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Priority: Blocker (was: Critical) > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19384) Vectorization: IfExprTimestamp* do not handle NULLs correctly
[ https://issues.apache.org/jira/browse/HIVE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19384: Fix Version/s: 3.1.0 3.0.0 > Vectorization: IfExprTimestamp* do not handle NULLs correctly > - > > Key: HIVE-19384 > URL: https://issues.apache.org/jira/browse/HIVE-19384 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19384.01.patch, HIVE-19384.02.patch > > > HIVE-18622: "Vectorization: IF Statements, Comparisons, and more do not > handle NULLs correctly" didn't quite fix the IfExprTimestamp* classes > right > {noformat} > // Carefully handle NULLs... > outputColVector.noNulls = false;{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
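The {noformat} snippet in HIVE-19384 hints at the convention at stake: in vectorized execution, once any row of an output column may be NULL, the column's {{noNulls}} flag must be cleared and per-row {{isNull}} entries set, or readers will skip the NULL checks entirely. The following schematic uses a mock column-vector class, not Hive's actual {{ColumnVector}} API; class and method names here are illustrative only:

```java
// Mock of the vectorized NULL convention; NOT Hive's real ColumnVector class.
class MockLongColVector {
    boolean noNulls = true;   // may remain true only if no entry can be NULL
    boolean[] isNull;
    long[] vector;
    MockLongColVector(int n) { isNull = new boolean[n]; vector = new long[n]; }
}

public class IfExprNullSketch {
    // Schematic IF(cond[i], thenVal, NULL): because some outputs are NULL,
    // noNulls must be cleared before the per-row isNull flags are set.
    static void evaluate(boolean[] cond, long thenVal, MockLongColVector out) {
        out.noNulls = false;  // carefully handle NULLs...
        for (int i = 0; i < cond.length; i++) {
            if (cond[i]) {
                out.vector[i] = thenVal;
                out.isNull[i] = false;
            } else {
                // readers consult isNull only when noNulls == false
                out.isNull[i] = true;
            }
        }
    }

    public static void main(String[] args) {
        MockLongColVector out = new MockLongColVector(3);
        evaluate(new boolean[] {true, false, true}, 7L, out);
        System.out.println(out.noNulls);   // prints false
        System.out.println(out.isNull[1]); // prints true
    }
}
```

The bug class described in the issue arises when an expression writes NULL rows but leaves {{noNulls}} set to true, so downstream operators read stale values from {{vector}} instead of treating the rows as NULL.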
[jira] [Commented] (HIVE-19479) encoded stream seek is incorrect for 0-length RGs in LLAP IO
[ https://issues.apache.org/jira/browse/HIVE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469348#comment-16469348 ] Sergey Shelukhin commented on HIVE-19479: - [~prasanth_j] can you take a look? > encoded stream seek is incorrect for 0-length RGs in LLAP IO > > > Key: HIVE-19479 > URL: https://issues.apache.org/jira/browse/HIVE-19479 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19479.patch > > > The PositionProvider offset is not updated correctly and an error like this > may happen: > {noformat} > Caused by: java.lang.IllegalArgumentException: Seek in LENGTH to 541 is > outside of the data > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:161) > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:123) > at > org.apache.orc.impl.RunLengthIntegerReaderV2.seek(RunLengthIntegerReaderV2.java:331) > at > org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:298) > at > org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:258) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.repositionInStreams(OrcEncodedDataConsumer.java:250) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:134) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:62) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19389) Schematool: For Hive's Information Schema, use embedded HS2 as default
[ https://issues.apache.org/jira/browse/HIVE-19389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469344#comment-16469344 ] Vaibhav Gumashta commented on HIVE-19389: - Test failures unrelated, will commit. > Schematool: For Hive's Information Schema, use embedded HS2 as default > -- > > Key: HIVE-19389 > URL: https://issues.apache.org/jira/browse/HIVE-19389 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0, 3.1.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-19389.1.patch, HIVE-19389.2.patch, > HIVE-19389.2.patch > > > Currently, for initializing/upgrading Hive's information schema, we require a > full jdbc url (for HS2). It will be good to have it connect using embedded > HS2 by default. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19479) encoded stream seek is incorrect for 0-length RGs in LLAP IO
[ https://issues.apache.org/jira/browse/HIVE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19479: Description: The PositionProvider offset is not updated correctly and an error like this may happen: {noformat} Caused by: java.lang.IllegalArgumentException: Seek in LENGTH to 541 is outside of the data at org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:161) at org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:123) at org.apache.orc.impl.RunLengthIntegerReaderV2.seek(RunLengthIntegerReaderV2.java:331) at org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:298) at org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:258) at org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.repositionInStreams(OrcEncodedDataConsumer.java:250) at org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:134) at org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:62) {noformat} > encoded stream seek is incorrect for 0-length RGs in LLAP IO > > > Key: HIVE-19479 > URL: https://issues.apache.org/jira/browse/HIVE-19479 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19479.patch > > > The PositionProvider offset is not updated correctly and an error like this > may happen: > {noformat} > Caused by: java.lang.IllegalArgumentException: Seek in LENGTH to 541 is > outside of the data > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:161) > at > org.apache.orc.impl.InStream$UncompressedStream.seek(InStream.java:123) > at > org.apache.orc.impl.RunLengthIntegerReaderV2.seek(RunLengthIntegerReaderV2.java:331) > at > 
org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:298) > at > org.apache.hadoop.hive.ql.io.orc.encoded.EncodedTreeReaderFactory$StringStreamReader.seek(EncodedTreeReaderFactory.java:258) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.repositionInStreams(OrcEncodedDataConsumer.java:250) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:134) > at > org.apache.hadoop.hive.llap.io.decode.OrcEncodedDataConsumer.decodeBatch(OrcEncodedDataConsumer.java:62) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19479) encoded stream seek is incorrect for 0-length RGs in LLAP IO
[ https://issues.apache.org/jira/browse/HIVE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-19479: --- > encoded stream seek is incorrect for 0-length RGs in LLAP IO > > > Key: HIVE-19479 > URL: https://issues.apache.org/jira/browse/HIVE-19479 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19479.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19479) encoded stream seek is incorrect for 0-length RGs in LLAP IO
[ https://issues.apache.org/jira/browse/HIVE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19479: Attachment: HIVE-19479.patch > encoded stream seek is incorrect for 0-length RGs in LLAP IO > > > Key: HIVE-19479 > URL: https://issues.apache.org/jira/browse/HIVE-19479 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19479.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19479) encoded stream seek is incorrect for 0-length RGs in LLAP IO
[ https://issues.apache.org/jira/browse/HIVE-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19479: Status: Patch Available (was: Open) > encoded stream seek is incorrect for 0-length RGs in LLAP IO > > > Key: HIVE-19479 > URL: https://issues.apache.org/jira/browse/HIVE-19479 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19479.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19453) Extend Load Data statement to take Input file format and Serde as parameters
[ https://issues.apache.org/jira/browse/HIVE-19453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-19453: -- Attachment: HIVE-19453.01-branch-3.patch > Extend Load Data statement to take Input file format and Serde as parameters > > > Key: HIVE-19453 > URL: https://issues.apache.org/jira/browse/HIVE-19453 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19453.01-branch-3.patch, HIVE-19453.1.patch > > > Extend the load data statement to take the inputformat of the source files > and the serde to interpret it as parameter. For eg, > > load data local inpath > '../../data/files/load_data_job/partitions/load_data_2_partitions.txt' INTO > TABLE srcbucket_mapjoin > INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' > SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'; -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19471) bucket_map_join_tez1 and bucket_map_join_tez2 are failing
[ https://issues.apache.org/jira/browse/HIVE-19471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469305#comment-16469305 ] Jason Dere commented on HIVE-19471: --- +1 from me, bucket_map_join_tez2 matches master aside from the extra tests I added for HIVE-19336. > bucket_map_join_tez1 and bucket_map_join_tez2 are failing > -- > > Key: HIVE-19471 > URL: https://issues.apache.org/jira/browse/HIVE-19471 > Project: Hive > Issue Type: Sub-task >Reporter: Vineet Garg >Assignee: Deepak Jaiswal >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19471.01-branch-3.patch, HIVE-19471.1.patch > > > https://builds.apache.org/job/PreCommit-HIVE-Build/10766/testReport/ > TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] > TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2] > Both are failing. Probably need golden file update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19453) Extend Load Data statement to take Input file format and Serde as parameters
[ https://issues.apache.org/jira/browse/HIVE-19453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469298#comment-16469298 ] Deepak Jaiswal commented on HIVE-19453: --- Thanks for the review [~jdere]. Committed to master. Now preparing for branch-3 > Extend Load Data statement to take Input file format and Serde as parameters > > > Key: HIVE-19453 > URL: https://issues.apache.org/jira/browse/HIVE-19453 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19453.1.patch > > > Extend the load data statement to take the inputformat of the source files > and the serde to interpret it as parameter. For eg, > > load data local inpath > '../../data/files/load_data_job/partitions/load_data_2_partitions.txt' INTO > TABLE srcbucket_mapjoin > INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' > SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'; -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19332) Disable compute.query.using.stats for external table
[ https://issues.apache.org/jira/browse/HIVE-19332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469301#comment-16469301 ] Hive QA commented on HIVE-19332: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12922511/HIVE-19332.5.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 46 failed/errored test(s), 13544 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95)