[jira] [Commented] (HIVE-18652) Print Spark metrics on console
[ https://issues.apache.org/jira/browse/HIVE-18652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497610#comment-16497610 ] Hive QA commented on HIVE-18652: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 31s{color} | {color:blue} ql in master has 2278 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 22s{color} | {color:blue} spark-client in master has 15 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 48s{color} | {color:red} ql in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s{color} | {color:red} ql: The patch generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 8s{color} | {color:red} spark-client: The patch generated 5 new + 15 unchanged - 0 fixed = 20 total (was 15) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11403/dev-support/hive-personality.sh | | git revision | master / 06807bc | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-11403/yetus/patch-mvninstall-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11403/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11403/yetus/diff-checkstyle-spark-client.txt | | modules | C: itests/hive-unit ql spark-client U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11403/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Print Spark metrics on console > -- > > Key: HIVE-18652 > URL: https://issues.apache.org/jira/browse/HIVE-18652 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18652.1.patch, HIVE-18652.2.patch, > HIVE-18652.3.patch, HIVE-18652.4.patch, HIVE-18652.5.patch, >
[jira] [Updated] (HIVE-19756) Insert request with UNION ALL and lateral view explode
[ https://issues.apache.org/jira/browse/HIVE-19756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Frédéric ESCANDELL updated HIVE-19756: -- Description: Hi, While executing this code snippet, no data was inserted into the final table t3. Replacing UNION ALL with UNION, or removing the "lateral view explode", makes the code work properly.
{code:sql}
DROP table t1;
DROP table t2;
DROP table t3;
CREATE TABLE t1(cle string, valeur array<struct<v:string>>)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
INSERT INTO table t1 select * from (select "a", array(named_struct('v','x'), named_struct('v','y'))) tmp;
CREATE TABLE t2(cle string, valeur array<struct<v:string>>)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
INSERT INTO table t2 select * from (select "b", array(named_struct('v','z'), named_struct('v','w'))) tmp;
DROP view v1;
DROP table t3;
CREATE VIEW v1 (cle, valeur) AS
select base.cle, val.v from (select cle, valeur from t1) as base
lateral view explode(base.valeur) a as val
union all
select base1.cle, val.v from (select cle, valeur from t2) as base1
lateral view explode(base1.valeur) a as val;
CREATE TABLE t3(cle string, valeur string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
insert into t3 select * from v1;
{code}
was: Hi, While executing this code snippet, no data is inserted in the final table t3. By replacing UNION ALL by UNION or the "lateral view explode" the code works properly.
{code:sql}
DROP table t1;
DROP table t2;
DROP table t3;
CREATE TABLE t1(cle string, valeur array<struct<v:string>>)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
INSERT INTO table t1 select * from (select "a", array(named_struct('v','x'), named_struct('v','y'))) tmp;
CREATE TABLE t2(cle string, valeur array<struct<v:string>>)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
INSERT INTO table t2 select * from (select "b", array(named_struct('v','z'), named_struct('v','w'))) tmp;
DROP view v1;
DROP table t3;
CREATE VIEW v1 (cle, valeur) AS
select base.cle, val.v from (select cle, valeur from t1) as base
lateral view explode(base.valeur) a as val
union all
select base1.cle, val.v from (select cle, valeur from t2) as base1
lateral view explode(base1.valeur) a as val;
CREATE TABLE t3(cle string, valeur string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS
INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
insert into t3 select * from v1;
{code}
> Insert request with UNION ALL and lateral view explode > -- > > Key: HIVE-19756 > URL: https://issues.apache.org/jira/browse/HIVE-19756 > Project: Hive > Issue Type: Bug > Environment: HDP 2.6.4 >Reporter: Frédéric ESCANDELL >Priority: Major > > Hi, > While executing this code snippet, no data was inserted in the final table t3. > By replacing UNION ALL by UNION or removing the "lateral view explode" the > code works properly. 
> 
> {code:sql}
> DROP table t1;
> DROP table t2;
> DROP table t3;
> CREATE TABLE t1(cle string,valeur array<struct<v:string>>)
> ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
> INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), named_struct('v','y'))) tmp;
> CREATE TABLE t2(cle string,valeur array<struct<v:string>>)
> ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS
> INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
> INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), named_struct('v','w'))) tmp;
> DROP view v1;
> DROP table t3;
> CREATE VIEW v1 (cle,valeur)
> AS
> select base.cle,val.v from (select cle,valeur from t1) as base
> lateral view explode(base.valeur) a as val
> union all
> select
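For readers unfamiliar with the construct, the result the reporter expects from v1 — each table's array exploded into one row per element, then the two branches concatenated with UNION ALL — can be sketched outside Hive. The Java emulation below is illustrative only; it is not Hive code, and the class and method names are invented:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative emulation of "lateral view explode(...) ... union all ...":
// each (cle, valeur-array) pair expands to one "cle,v" row per array element,
// and the two branches are simply concatenated (duplicates preserved).
public class ExplodeUnionAll {
    static Stream<String> explode(Map<String, List<String>> table) {
        return table.entrySet().stream()
                .flatMap(e -> e.getValue().stream().map(v -> e.getKey() + "," + v));
    }

    static List<String> unionAll(Map<String, List<String>> t1, Map<String, List<String>> t2) {
        return Stream.concat(explode(t1), explode(t2)).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, List<String>> t1 = Map.of("a", List.of("x", "y"));
        Map<String, List<String>> t2 = Map.of("b", List.of("z", "w"));
        // Expected contents of t3: [a,x, a,y, b,z, b,w]
        System.out.println(unionAll(t1, t2));
    }
}
```

Per the report, the reported environment (HDP 2.6.4) instead inserts zero rows into t3 when the view uses UNION ALL, while UNION — or dropping the explode — yields the expected four rows.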
[jira] [Commented] (HIVE-19399) Down cast from int to tinyint generating incorrect value for vectorization
[ https://issues.apache.org/jira/browse/HIVE-19399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497598#comment-16497598 ] Vihang Karajgaonkar commented on HIVE-19399: Hi [~jerrychenhf] Can you please repeat the test by setting {{hive.vectorized.use.checked.expressions}} to true? > Down cast from int to tinyint generating incorrect value for vectorization > -- > > Key: HIVE-19399 > URL: https://issues.apache.org/jira/browse/HIVE-19399 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 3.1.0 >Reporter: Haifeng Chen >Priority: Major > > The following sql scripts generating different result for vectorization > disabled and enabled (both for ORC and for parquet). > drop table test_schema; > create table test_schema (f int) stored as parquet; > insert into test_schema values ('9'); > select cast(f as tinyint) + 1 from test_schema; > For non-vectorization, the result is -96 while for vectorization mode, it is > 10 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
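As background on why a down cast can differ between execution paths: CAST(x AS TINYINT) is a narrowing conversion that keeps only the low 8 bits in two's complement, which Java's byte cast reproduces. The sketch below is illustrative; the class and helper are invented, not Hive's implementation:

```java
// Java's byte is an 8-bit two's-complement integer with the same range as
// Hive's tinyint (-128..127), so a narrowing (byte) cast models CAST(x AS TINYINT).
public class TinyintCast {
    static int castToTinyint(int x) {
        return (byte) x; // keeps the low 8 bits, sign-extended back to int
    }

    public static void main(String[] args) {
        // In range: 9 survives the cast, so 9 + 1 = 10.
        System.out.println(castToTinyint(9) + 1);
        // Out of range: 200 wraps to -56 (200 - 256), so the sum is -55.
        System.out.println(castToTinyint(200) + 1);
    }
}
```

Both the row-by-row and vectorized paths should agree on this wraparound; the question above about {{hive.vectorized.use.checked.expressions}} probes whether the vectorized path does.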
[jira] [Commented] (HIVE-19708) Repl copy retrying with cm path even if the failure is due to network issue
[ https://issues.apache.org/jira/browse/HIVE-19708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497580#comment-16497580 ] Hive QA commented on HIVE-19708: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925845/HIVE-19708.04.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11402/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11402/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11402/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12925845/HIVE-19708.04.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12925845 - PreCommit-HIVE-Build > Repl copy retrying with cm path even if the failure is due to network issue > --- > > Key: HIVE-19708 > URL: https://issues.apache.org/jira/browse/HIVE-19708 > Project: Hive > Issue Type: Task > Components: Hive, HiveServer2, repl >Affects Versions: 3.1.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-19708.01.patch, HIVE-19708.02.patch, > HIVE-19708.04.patch > > > * During repl load > ** for filesystem based copying of file if the copy fails due to a > connection error to source Name Node, we should recreate the filesystem > object. > ** the retry logic for local file copy should be triggered using the > original source file path ( and not the CM root path ) since failure can be > due to network issues between DFSClient and NN. > * When listing files in tables / partition to include them in _files, we > should add retry logic when failure occurs. 
FileSystem object here also > should be recreated since the existing one might be in an inconsistent state. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
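The retry shape the bullets describe — discard the possibly-stale handle, rebuild it, and retry against the original source path rather than the CM root — looks roughly like the sketch below. This is a hypothetical illustration, not Hive's repl code; {{withRetries}} and {{resetHandle}} are invented names:

```java
import java.util.concurrent.Callable;

// Generic retry loop: on each failure a caller-supplied reset hook runs
// (e.g. recreating the FileSystem object) before the action is attempted
// again with its ORIGINAL arguments.
public class Retrier {
    static <T> T withRetries(int attempts, Callable<T> action, Runnable resetHandle) {
        RuntimeException last = new IllegalArgumentException("attempts must be > 0");
        for (int i = 0; i < attempts; i++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = new RuntimeException("attempt " + (i + 1) + " failed", e);
                resetHandle.run(); // discard stale state before the next try
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        int copied = withRetries(3, () -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("transient network error");
            return 42; // stands in for a successful file copy
        }, () -> { /* real code would recreate the FileSystem here */ });
        System.out.println(copied + " after " + calls[0] + " attempts");
    }
}
```

The point of the reset hook is exactly the comment above: retrying with the same dead handle (or silently switching to the CM path) masks transient network failures instead of recovering from them.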
[jira] [Commented] (HIVE-14388) Add number of rows inserted message after insert command in Beeline
[ https://issues.apache.org/jira/browse/HIVE-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497578#comment-16497578 ] Hive QA commented on HIVE-14388: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925589/HIVE-14388.12.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14438 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11401/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11401/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11401/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12925589 - PreCommit-HIVE-Build > Add number of rows inserted message after insert command in Beeline > --- > > Key: HIVE-14388 > URL: https://issues.apache.org/jira/browse/HIVE-14388 > Project: Hive > Issue Type: Improvement > Components: Beeline >Reporter: Vihang Karajgaonkar >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Minor > Attachments: HIVE-14388-WIP.patch, HIVE-14388.02.patch, > HIVE-14388.03.patch, HIVE-14388.05.patch, HIVE-14388.06.patch, > HIVE-14388.07.patch, HIVE-14388.08.patch, HIVE-14388.09.patch, > HIVE-14388.10.patch, HIVE-14388.12.patch > > > Currently, when you run insert command on beeline, it returns a message > saying "No rows affected .." > A better and more intuitive msg would be "xxx rows inserted (26.068 seconds)" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19602) Refactor inplace progress code in Hive-on-spark progress monitor to use ProgressMonitor instance
[ https://issues.apache.org/jira/browse/HIVE-19602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497567#comment-16497567 ] Bharathkrishna Guruvayoor Murali commented on HIVE-19602: - [~stakiar] Thanks for the review. I actually made a change in the patch. Can you please take a look at that. Also submitting the patch for pre-commit tests. > Refactor inplace progress code in Hive-on-spark progress monitor to use > ProgressMonitor instance > > > Key: HIVE-19602 > URL: https://issues.apache.org/jira/browse/HIVE-19602 > Project: Hive > Issue Type: Bug >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19602.3.patch > > > We can refactor the HOS inplace progress monitor code > (SparkJobMonitor#printStatusInPlace) to use InplaceUpdate#render. > We can create an instance of ProgressMonitor and use it to show the progress. > This would be similar to : > [https://github.com/apache/hive/blob/0b6bea89f74b607299ad944b37e4b62c711aaa69/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java#L181] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
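The refactor described above amounts to separating "what the progress is" from "how it is drawn": the Spark monitor populates a ProgressMonitor-style value object and hands it to a shared in-place renderer, as the linked Tez RenderStrategy already does. A minimal, hypothetical sketch of that split (interface and names invented for illustration, not Hive's actual API):

```java
import java.util.List;

// Hypothetical split between progress state and rendering: any engine-specific
// monitor fills in headers/rows, and one shared renderer draws them.
public class ProgressDemo {
    interface ProgressMonitor {
        List<String> headers();
        List<List<String>> rows();
    }

    // Shared renderer: knows nothing about Spark or Tez, only about the table.
    static String render(ProgressMonitor m) {
        StringBuilder sb = new StringBuilder(String.join("  ", m.headers())).append('\n');
        for (List<String> row : m.rows()) {
            sb.append(String.join("  ", row)).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        ProgressMonitor sparkLike = new ProgressMonitor() {
            public List<String> headers() { return List.of("STAGE", "DONE"); }
            public List<List<String>> rows() { return List.of(List.of("Stage-1", "3/10")); }
        };
        System.out.print(render(sparkLike));
    }
}
```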
[jira] [Updated] (HIVE-19602) Refactor inplace progress code in Hive-on-spark progress monitor to use ProgressMonitor instance
[ https://issues.apache.org/jira/browse/HIVE-19602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19602: Status: Patch Available (was: Open) > Refactor inplace progress code in Hive-on-spark progress monitor to use > ProgressMonitor instance > > > Key: HIVE-19602 > URL: https://issues.apache.org/jira/browse/HIVE-19602 > Project: Hive > Issue Type: Bug >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19602.3.patch > > > We can refactor the HOS inplace progress monitor code > (SparkJobMonitor#printStatusInPlace) to use InplaceUpdate#render. > We can create an instance of ProgressMonitor and use it to show the progress. > This would be similar to : > [https://github.com/apache/hive/blob/0b6bea89f74b607299ad944b37e4b62c711aaa69/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java#L181] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19602) Refactor inplace progress code in Hive-on-spark progress monitor to use ProgressMonitor instance
[ https://issues.apache.org/jira/browse/HIVE-19602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19602: Attachment: HIVE-19602.3.patch > Refactor inplace progress code in Hive-on-spark progress monitor to use > ProgressMonitor instance > > > Key: HIVE-19602 > URL: https://issues.apache.org/jira/browse/HIVE-19602 > Project: Hive > Issue Type: Bug >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19602.3.patch > > > We can refactor the HOS inplace progress monitor code > (SparkJobMonitor#printStatusInPlace) to use InplaceUpdate#render. > We can create an instance of ProgressMonitor and use it to show the progress. > This would be similar to : > [https://github.com/apache/hive/blob/0b6bea89f74b607299ad944b37e4b62c711aaa69/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/monitoring/RenderStrategy.java#L181] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-14388) Add number of rows inserted message after insert command in Beeline
[ https://issues.apache.org/jira/browse/HIVE-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497549#comment-16497549 ] Hive QA commented on HIVE-14388: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 28s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 27s{color} | {color:blue} beeline in master has 69 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 26s{color} | {color:blue} jdbc in master has 17 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 33s{color} | {color:blue} ql in master has 2278 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} service in master has 49 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 5s{color} | {color:green} The patch service-rpc passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch beeline passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} The patch hive-unit passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} The patch jdbc passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 36s{color} | {color:green} ql: The patch generated 0 new + 206 unchanged - 13 fixed = 206 total (was 219) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch service passed checkstyle {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 1s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11401/dev-support/hive-personality.sh | | git revision | master / 06807bc | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace |
[jira] [Commented] (HIVE-19598) Add Acid V1 to V2 upgrade module
[ https://issues.apache.org/jira/browse/HIVE-19598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497541#comment-16497541 ] Eugene Koifman commented on HIVE-19598: --- [~vgarg], HIVE-19598.01-branch-3.patch is attached and sanity checked > Add Acid V1 to V2 upgrade module > > > Key: HIVE-19598 > URL: https://issues.apache.org/jira/browse/HIVE-19598 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 4.0.0 > > Attachments: HIVE-19598.01-branch-3.patch, HIVE-19598.02.patch, > HIVE-19598.05.patch, HIVE-19598.06.patch > > > The on-disk layout for full acid (transactional) tables has changed in 3.0. > Any transactional table that has any update/delete events in any deltas that > have not been Major compacted, must go through a Major compaction before > upgrading to 3.0. No more update/delete/merge should be run after/during > major compaction. > Not doing so will result in data corruption/loss. > > Need to create a utility tool to help with this process. HIVE-19233 started > this but it needs more work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19598) Add Acid V1 to V2 upgrade module
[ https://issues.apache.org/jira/browse/HIVE-19598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-19598: -- Attachment: HIVE-19598.01-branch-3.patch > Add Acid V1 to V2 upgrade module > > > Key: HIVE-19598 > URL: https://issues.apache.org/jira/browse/HIVE-19598 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 4.0.0 > > Attachments: HIVE-19598.01-branch-3.patch, HIVE-19598.02.patch, > HIVE-19598.05.patch, HIVE-19598.06.patch > > > The on-disk layout for full acid (transactional) tables has changed in 3.0. > Any transactional table that has any update/delete events in any deltas that > have not been Major compacted, must go through a Major compaction before > upgrading to 3.0. No more update/delete/merge should be run after/during > major compaction. > Not doing so will result in data corruption/loss. > > Need to create a utility tool to help with this process. HIVE-19233 started > this but it needs more work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19370) Issue: ADD Months function on timestamp datatype fields in hive
[ https://issues.apache.org/jira/browse/HIVE-19370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497539#comment-16497539 ] Bharathkrishna Guruvayoor Murali commented on HIVE-19370: - Test run green. [~stakiar_impala_496e] Can you please push this patch? Thanks in advance. > Issue: ADD Months function on timestamp datatype fields in hive > --- > > Key: HIVE-19370 > URL: https://issues.apache.org/jira/browse/HIVE-19370 > Project: Hive > Issue Type: Bug >Reporter: Amit Chauhan >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19370.01.patch, HIVE-19370.02.patch, > HIVE-19370.03.patch > > > *Issue:* > while using ADD_Months function on a timestamp datatype column the output > omits the time part [HH:MM:SS] from the output. > which should not be the case. > *query:* EMAIL_FAILURE_DTMZ is of datatype timestamp in hive. > hive> select CUSTOMER_ID,EMAIL_FAILURE_DTMZ,ADD_MONTHS (EMAIL_FAILURE_DTMZ , > 1) from TABLE1 where CUSTOMER_ID=125674937; > OK > 125674937 2015-12-09 12:25:53 2016-01-09 > *hive version:* > hive> !hive --version; > Hive 1.2.1000.2.5.6.0-40 > > can you please help if somehow I can get below as output: > > 125674937 2015-12-09 12:25:53 2016-01-09 12:25:53 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
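For comparison, the behavior the reporter asks for — month arithmetic that keeps the HH:mm:ss part — is what java.time provides out of the box. The sketch below is illustrative only (not Hive's add_months UDF; the class and method names are invented):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Adds n months to a "yyyy-MM-dd HH:mm:ss" timestamp string, preserving the
// time-of-day instead of truncating it to a date.
public class AddMonthsDemo {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    static String addMonths(String timestamp, int n) {
        return LocalDateTime.parse(timestamp, FMT).plusMonths(n).format(FMT);
    }

    public static void main(String[] args) {
        // Prints "2016-01-09 12:25:53" -- the time part survives.
        System.out.println(addMonths("2015-12-09 12:25:53", 1));
    }
}
```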
[jira] [Commented] (HIVE-19370) Issue: ADD Months function on timestamp datatype fields in hive
[ https://issues.apache.org/jira/browse/HIVE-19370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497532#comment-16497532 ] Hive QA commented on HIVE-19370: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925588/HIVE-19370.03.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14443 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11400/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11400/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11400/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12925588 - PreCommit-HIVE-Build > Issue: ADD Months function on timestamp datatype fields in hive > --- > > Key: HIVE-19370 > URL: https://issues.apache.org/jira/browse/HIVE-19370 > Project: Hive > Issue Type: Bug >Reporter: Amit Chauhan >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19370.01.patch, HIVE-19370.02.patch, > HIVE-19370.03.patch > > > *Issue:* > while using ADD_Months function on a timestamp datatype column the output > omits the time part[HH:MM:SS] part from output. > which should not be the case. > *query:* EMAIL_FAILURE_DTMZ is of datatype timestamp in hive. 
> hive> select CUSTOMER_ID,EMAIL_FAILURE_DTMZ,ADD_MONTHS (EMAIL_FAILURE_DTMZ , > 1) from TABLE1 where CUSTOMER_ID=125674937; > OK > 125674937 2015-12-09 12:25:53 2016-01-09 > *hive version:* > hive> !hive --version; > Hive 1.2.1000.2.5.6.0-40 > > can you please help if somehow I can get below as output: > > 125674937 2015-12-09 12:25:53 2016-01-09 12:25:53 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19370) Issue: ADD Months function on timestamp datatype fields in hive
[ https://issues.apache.org/jira/browse/HIVE-19370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497508#comment-16497508 ] Hive QA commented on HIVE-19370: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 46s{color} | {color:blue} ql in master has 2278 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11400/dev-support/hive-personality.sh | | git revision | master / 06807bc | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11400/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Issue: ADD Months function on timestamp datatype fields in hive > --- > > Key: HIVE-19370 > URL: https://issues.apache.org/jira/browse/HIVE-19370 > Project: Hive > Issue Type: Bug >Reporter: Amit Chauhan >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19370.01.patch, HIVE-19370.02.patch, > HIVE-19370.03.patch > > > *Issue:* > while using ADD_Months function on a timestamp datatype column the output > omits the time part[HH:MM:SS] part from output. > which should not be the case. > *query:* EMAIL_FAILURE_DTMZ is of datatype timestamp in hive. 
> hive> select CUSTOMER_ID, EMAIL_FAILURE_DTMZ, ADD_MONTHS(EMAIL_FAILURE_DTMZ, 1) from TABLE1 where CUSTOMER_ID=125674937; > OK > 125674937 2015-12-09 12:25:53 2016-01-09 > *hive version:* > hive> !hive --version; > Hive 1.2.1000.2.5.6.0-40 > > can you please help if somehow I can get the below as output: > > 125674937 2015-12-09 12:25:53 2016-01-09 12:25:53 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
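For reference, the behavior the reporter expects from ADD_MONTHS is what plain `java.time` month arithmetic does: shift the month while preserving the time-of-day. This is a minimal sketch outside Hive, not Hive's own ADD_MONTHS implementation; it only illustrates the expected semantics using the timestamp from the report.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class AddMonthsSketch {
    public static void main(String[] args) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        // Timestamp from the report; plusMonths keeps the HH:mm:ss component.
        LocalDateTime ts = LocalDateTime.parse("2015-12-09 12:25:53", fmt);
        LocalDateTime plusOne = ts.plusMonths(1);
        System.out.println(plusOne.format(fmt)); // 2016-01-09 12:25:53
    }
}
```

The reported Hive 1.2 output drops the time component entirely, returning only `2016-01-09`.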
[jira] [Commented] (HIVE-19688) Make catalogs updatable
[ https://issues.apache.org/jira/browse/HIVE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497496#comment-16497496 ] Hive QA commented on HIVE-19688: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925585/HIVE-19688.1take2.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14445 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11399/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11399/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11399/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12925585 - PreCommit-HIVE-Build > Make catalogs updatable > --- > > Key: HIVE-19688 > URL: https://issues.apache.org/jira/browse/HIVE-19688 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Attachments: HIVE-19688.1take2.patch, HIVE-19688.patch > > > The initial changes for catalogs did not include an ability to alter > catalogs. We need to add that. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19109) Vectorization: Enabling vectorization causes TestCliDriver delete_orig_table.q to produce Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-19109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497489#comment-16497489 ] Teddy Choi commented on HIVE-19109: --- +1 pending tests. > Vectorization: Enabling vectorization causes TestCliDriver > delete_orig_table.q to produce Wrong Results > --- > > Key: HIVE-19109 > URL: https://issues.apache.org/jira/browse/HIVE-19109 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19109.01.patch > > > Found in vectorization enable by default experiment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19498) Vectorization: CAST expressions produce wrong results
[ https://issues.apache.org/jira/browse/HIVE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19498: Resolution: Fixed Status: Resolved (was: Patch Available) > Vectorization: CAST expressions produce wrong results > - > > Key: HIVE-19498 > URL: https://issues.apache.org/jira/browse/HIVE-19498 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19498.01.patch, HIVE-19498.02.patch, > HIVE-19498.03.patch, HIVE-19498.04.patch, HIVE-19498.05-branch-3.patch, > HIVE-19498.05.patch > > > Wrong results for: > DATE --> BOOLEAN > DOUBLE --> DECIMAL > STRING|CHAR|VARCHAR --> DECIMAL > TIMESTAMP --> LONG -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19629) Enable Decimal64 reader after orc version upgrade
[ https://issues.apache.org/jira/browse/HIVE-19629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497487#comment-16497487 ] Matt McCline commented on HIVE-19629: - I don't completely follow why fixDecimalDataTypePhysicalVariations is necessary, but I do not want to block your progress. +1 LGTM tests pending. > Enable Decimal64 reader after orc version upgrade > - > > Key: HIVE-19629 > URL: https://issues.apache.org/jira/browse/HIVE-19629 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19629.1.patch, HIVE-19629.2.patch, > HIVE-19629.3.patch, HIVE-19629.4.patch, HIVE-19629.5.patch, > HIVE-19629.6.patch, HIVE-19629.7.patch > > > ORC 1.5.0 supports new fast decimal 64 reader. New VRB has to be created for > making use of decimal 64 column vectors. Also LLAP IO will need a new reader > to reader from long stream to decimal 64. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19529) Vectorization: Date/Timestamp NULL issues
[ https://issues.apache.org/jira/browse/HIVE-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19529: Fix Version/s: 4.0.0 > Vectorization: Date/Timestamp NULL issues > - > > Key: HIVE-19529 > URL: https://issues.apache.org/jira/browse/HIVE-19529 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-19529.06-branch-3.patch, HIVE-19529.06.patch > > > Wrong results found for: > date_add/date_sub > UT areas: > date_add/date_sub > datediff > to_date > interval_year_month + interval_year_month > interval_day_time + interval_day_time > interval_day_time + timestamp > timestamp + interval_day_time > date + interval_day_time > interval_day_time + date > interval_year_month + date > date + interval_year_month > interval_year_month + interval_year_month > timestamp + interval_year_month > date - date > interval_year_month - interval_year_month > interval_day_time - interval_day_time > timestamp - interval_day_time > timestamp - timestamp > date - timestamp > timestamp - date > date - interval_day_time > date - interval_year_month > timestamp - interval_year_month -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19529) Vectorization: Date/Timestamp NULL issues
[ https://issues.apache.org/jira/browse/HIVE-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497484#comment-16497484 ] Matt McCline commented on HIVE-19529: - A branch-3 patch has been submitted to Hive QA. > Vectorization: Date/Timestamp NULL issues > - > > Key: HIVE-19529 > URL: https://issues.apache.org/jira/browse/HIVE-19529 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-19529.06-branch-3.patch, HIVE-19529.06.patch > > > Wrong results found for: > date_add/date_sub > UT areas: > date_add/date_sub > datediff > to_date > interval_year_month + interval_year_month > interval_day_time + interval_day_time > interval_day_time + timestamp > timestamp + interval_day_time > date + interval_day_time > interval_day_time + date > interval_year_month + date > date + interval_year_month > interval_year_month + interval_year_month > timestamp + interval_year_month > date - date > interval_year_month - interval_year_month > interval_day_time - interval_day_time > timestamp - interval_day_time > timestamp - timestamp > date - timestamp > timestamp - date > date - interval_day_time > date - interval_year_month > timestamp - interval_year_month -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19529) Vectorization: Date/Timestamp NULL issues
[ https://issues.apache.org/jira/browse/HIVE-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497483#comment-16497483 ] Matt McCline commented on HIVE-19529: - Committed to master. > Vectorization: Date/Timestamp NULL issues > - > > Key: HIVE-19529 > URL: https://issues.apache.org/jira/browse/HIVE-19529 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-19529.06-branch-3.patch, HIVE-19529.06.patch > > > Wrong results found for: > date_add/date_sub > UT areas: > date_add/date_sub > datediff > to_date > interval_year_month + interval_year_month > interval_day_time + interval_day_time > interval_day_time + timestamp > timestamp + interval_day_time > date + interval_day_time > interval_day_time + date > interval_year_month + date > date + interval_year_month > interval_year_month + interval_year_month > timestamp + interval_year_month > date - date > interval_year_month - interval_year_month > interval_day_time - interval_day_time > timestamp - interval_day_time > timestamp - timestamp > date - timestamp > timestamp - date > date - interval_day_time > date - interval_year_month > timestamp - interval_year_month -- This message was sent by Atlassian JIRA (v7.6.3#76005)
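The operations listed in HIVE-19529 (timestamp - timestamp, date + interval_year_month, and so on) each have well-defined non-NULL semantics that the vectorized kernels must reproduce; the bug is about NULL handling in those kernels. As a reference point only, here is a sketch of the plain semantics of two of the listed operations using `java.time` (`Duration` standing in for interval_day_time, `Period` for interval_year_month), not Hive's vectorized code:

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.Period;

public class IntervalArithmeticSketch {
    public static void main(String[] args) {
        // timestamp - timestamp yields a day-time interval (Duration here).
        Duration dayTime = Duration.between(
                LocalDateTime.parse("2018-05-01T00:00:00"),
                LocalDateTime.parse("2018-05-02T06:00:00"));
        System.out.println(dayTime.toHours()); // 30

        // date + interval_year_month yields a date; note end-of-month clamping.
        LocalDate shifted = LocalDate.parse("2018-01-31").plus(Period.ofMonths(1));
        System.out.println(shifted); // 2018-02-28
    }
}
```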
[jira] [Updated] (HIVE-19488) Enable CM root based on db parameter, identifying a db as source of replication.
[ https://issues.apache.org/jira/browse/HIVE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-19488: --- Attachment: HIVE-19488.09-branch-3.patch > Enable CM root based on db parameter, identifying a db as source of replication. > > > Key: HIVE-19488 > URL: https://issues.apache.org/jira/browse/HIVE-19488 > Project: Hive > Issue Type: Task > Components: Hive, HiveServer2, repl >Affects Versions: 3.1.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-19488.01.patch, HIVE-19488.02.patch, HIVE-19488.03.patch, HIVE-19488.04.patch, HIVE-19488.05.patch, HIVE-19488.06.patch, HIVE-19488.07.patch, HIVE-19488.08-branch-3.patch, HIVE-19488.08.patch, HIVE-19488.09-branch-3.patch > > > * add a parameter at the db level to identify if it is a source of replication; the user should set this. > * enable CM root only for databases that are the source of a replication policy; for other dbs, skip the CM root functionality. > * prevent database drop if the parameter indicating it is the source of a replication is set. > * as an upgrade to this version, the user should set the property on all existing database policies in effect. > * the parameter should be of the form: repl.source.for : List<policy ids> -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19688) Make catalogs updatable
[ https://issues.apache.org/jira/browse/HIVE-19688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497451#comment-16497451 ] Hive QA commented on HIVE-19688: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} beeline in master has 69 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 52s{color} | {color:blue} standalone-metastore in master has 214 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s{color} | {color:red} beeline: The patch generated 4 new + 73 unchanged - 0 fixed = 77 total (was 73) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 25s{color} | {color:red} standalone-metastore: The patch generated 13 new + 950 unchanged - 0 fixed = 963 total (was 950) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 24 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s{color} | {color:red} beeline generated 1 new + 69 unchanged - 0 fixed = 70 total (was 69) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:beeline | | | org.apache.hive.beeline.HiveSchemaTool.alterCatalog(String, String, String) passes a nonconstant String to an execute or addBatch method on an SQL statement At HiveSchemaTool.java:nonconstant String to an execute or addBatch method on an SQL statement At HiveSchemaTool.java:[line 987] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11399/dev-support/hive-personality.sh | | git revision | master / 06807bc | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11399/yetus/diff-checkstyle-beeline.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11399/yetus/diff-checkstyle-standalone-metastore.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-11399/yetus/whitespace-eol.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-11399/yetus/new-findbugs-beeline.html | | modules | C: beeline itests/hive-unit standalone-metastore U: . | | Console output |
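The new FindBugs warning above flags a nonconstant String passed to an SQL execute method. The standard remedy is a constant statement template with `?` placeholders whose values are bound separately via `Connection.prepareStatement(...)`. A minimal sketch of that shape follows; the table and column names are illustrative, not taken from HiveSchemaTool:

```java
import java.util.Arrays;
import java.util.List;

public class ParameterizedSqlSketch {
    // Constant template: values never get concatenated into the SQL text.
    // In real code this string goes to Connection.prepareStatement and the
    // params are bound with setString(1, ...), setString(2, ...), etc.
    static final String ALTER_CATALOG_SQL =
            "UPDATE \"CTLGS\" SET \"LOCATION_URI\" = ?, \"DESC\" = ? WHERE \"NAME\" = ?";

    public static void main(String[] args) {
        List<String> params = Arrays.asList("hdfs://nn:8020/warehouse", "updated", "hive");
        // One placeholder per bound value.
        System.out.println(ALTER_CATALOG_SQL.chars().filter(c -> c == '?').count()); // 3
        System.out.println(params.size()); // 3
    }
}
```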
[jira] [Commented] (HIVE-19597) TestWorkloadManager sometimes hangs
[ https://issues.apache.org/jira/browse/HIVE-19597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497436#comment-16497436 ] Sergey Shelukhin commented on HIVE-19597: - Looks like the test event can get overwritten and never be triggered. [~prasanth_j] can you take a look? I'm also adding some logging to make it easier to debug. > TestWorkloadManager sometimes hangs > --- > > Key: HIVE-19597 > URL: https://issues.apache.org/jira/browse/HIVE-19597 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19597.patch > > > Seems like the tests randomly get stuck after the lines like > {noformat} > 2018-05-17T01:54:27,111 INFO [Workload management master] > tez.WorkloadManager: Processing current events > 2018-05-17T01:54:27,603 INFO [TriggerValidator] > tez.PerPoolTriggerValidatorRunnable: Creating trigger validator for pool: llap > 2018-05-17T01:54:37,090 DEBUG [Thread-28] conf.HiveConf: Found metastore URI > of null > {noformat} > Then they get killed by timeout. Happened in the same manner, to random tests > in a few separate runs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
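The "event can get overwritten and never be triggered" failure mode described above is a classic lost-wakeup pattern. This sketch reproduces the shape of the race with a `CountDownLatch` slot; the names are illustrative and not taken from TestWorkloadManager:

```java
import java.util.concurrent.CountDownLatch;

public class LostEventSketch {
    // Shared "test event" slot. If it is overwritten before the worker trips
    // it, a thread waiting on the ORIGINAL latch never wakes up.
    static volatile CountDownLatch event = new CountDownLatch(1);

    public static void main(String[] args) {
        CountDownLatch original = event;
        event = new CountDownLatch(1); // slot overwritten before countDown
        event.countDown();             // worker trips the NEW latch only
        // original is still at 1: a test blocked in original.await() hangs
        // until the surefire timeout kills it, matching the symptom above.
        System.out.println(original.getCount()); // 1
    }
}
```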
[jira] [Updated] (HIVE-19597) TestWorkloadManager sometimes hangs
[ https://issues.apache.org/jira/browse/HIVE-19597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19597: Status: Patch Available (was: Open) > TestWorkloadManager sometimes hangs > --- > > Key: HIVE-19597 > URL: https://issues.apache.org/jira/browse/HIVE-19597 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19597.patch > > > Seems like the tests randomly get stuck after the lines like > {noformat} > 2018-05-17T01:54:27,111 INFO [Workload management master] > tez.WorkloadManager: Processing current events > 2018-05-17T01:54:27,603 INFO [TriggerValidator] > tez.PerPoolTriggerValidatorRunnable: Creating trigger validator for pool: llap > 2018-05-17T01:54:37,090 DEBUG [Thread-28] conf.HiveConf: Found metastore URI > of null > {noformat} > Then they get killed by timeout. Happened in the same manner, to random tests > in a few separate runs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19597) TestWorkloadManager sometimes hangs
[ https://issues.apache.org/jira/browse/HIVE-19597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19597: Attachment: HIVE-19597.patch > TestWorkloadManager sometimes hangs > --- > > Key: HIVE-19597 > URL: https://issues.apache.org/jira/browse/HIVE-19597 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19597.patch > > > Seems like the tests randomly get stuck after the lines like > {noformat} > 2018-05-17T01:54:27,111 INFO [Workload management master] > tez.WorkloadManager: Processing current events > 2018-05-17T01:54:27,603 INFO [TriggerValidator] > tez.PerPoolTriggerValidatorRunnable: Creating trigger validator for pool: llap > 2018-05-17T01:54:37,090 DEBUG [Thread-28] conf.HiveConf: Found metastore URI > of null > {noformat} > Then they get killed by timeout. Happened in the same manner, to random tests > in a few separate runs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19416) Create single version transactional table metastore statistics for aggregation queries
[ https://issues.apache.org/jira/browse/HIVE-19416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16496815#comment-16496815 ] Steve Yeom edited comment on HIVE-19416 at 6/1/18 1:06 AM: --- 1. MetaStore Schema Changes for the patch 04 of HIVE-19417 1.1 New table UPD_TXNS A record is created per transaction write per table. UPDATE_ID: primary key column, generated by Datanucleus when specified at "package.jdo" as datastore-identity . TBL_ID: TBLS.TBL_ID referencing column. A foreign key is created for this column referencing TBLS.TBL_ID. STATE: this is deleted for the next patch version. TXN_ID: Transaction id of the transaction to insert the row. WRITEID_LIST: valid writeIdList for the table of the transaction 1.2 Modification for TBLS and PARTITIONS tables A new column TXN_ID: transaction id of the UPD_TXNS. was (Author: steveyeom2017): 1. MetaStore Schema Changes for the patch 04 of HIVE-19147 1.1 New table UPD_TXNS A record is created per transaction write per table. UPDATE_ID: primary key column, generated by Datanucleus when specified at "package.jdo" as datastore-identity . TBL_ID: TBLS.TBL_ID referencing column. A foreign key is created for this column referencing TBLS.TBL_ID. STATE: this is deleted for the next patch version. TXN_ID: Transaction id of the transaction to insert the row. WRITEID_LIST: valid writeIdList for the table of the transaction 1.2 Modification for TBLS and PARTITIONS tables A new column TXN_ID: transaction id of the UPD_TXNS. > Create single version transactional table metastore statistics for > aggregation queries > -- > > Key: HIVE-19416 > URL: https://issues.apache.org/jira/browse/HIVE-19416 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > > The system should use only statistics for aggregation queries like count on > transactional tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19488) Enable CM root based on db parameter, identifying a db as source of replication.
[ https://issues.apache.org/jira/browse/HIVE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497422#comment-16497422 ] Hive QA commented on HIVE-19488: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925586/HIVE-19488.08-branch-3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11398/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11398/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11398/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-06-01 00:58:16.995 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-11398/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z branch-3 ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-06-01 00:58:16.998 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 06807bc HIVE-19699: Re-enable TestReOptimization (Zoltan Haindrich, reviewed by Jesus Camacho Rodriguez) + git clean -f -d + git checkout branch-3 Switched to branch 'branch-3' Your branch is behind 'origin/branch-3' by 5 commits, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/branch-3 HEAD is now at 34bf9fd HIVE-19699: Re-enable TestReOptimization (Zoltan Haindrich, reviewed by Jesus Camacho Rodriguez) + git merge --ff-only origin/branch-3 Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-06-01 00:58:18.673 + rm -rf ../yetus_PreCommit-HIVE-Build-11398 + mkdir ../yetus_PreCommit-HIVE-Build-11398 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-11398 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11398/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestReplChangeManager.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestCopyUtils.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationOnHDFSEncryptedZones.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenarios.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcidTables.java: does not exist in index error: 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcrossInstances.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveAuthorizerCheckInvocation.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCleanerWithReplication.java: does not exist in index error: a/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcWithMiniHS2.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/MetaDataExportListener.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java: does not exist in index error: a/ql/src/test/queries/clientnegative/repl_dump_requires_admin.q: does not exist in index error: a/ql/src/test/queries/clientnegative/repl_load_requires_admin.q: does not exist
[jira] [Commented] (HIVE-19046) Refactor the common parts of the HiveMetastore add_partition_core and add_partitions_pspec_core methods
[ https://issues.apache.org/jira/browse/HIVE-19046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497419#comment-16497419 ] Hive QA commented on HIVE-19046: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925577/HIVE-19046.4.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14450 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=195) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11397/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11397/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11397/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12925577 - PreCommit-HIVE-Build > Refactor the common parts of the HiveMetastore add_partition_core and > add_partitions_pspec_core methods > --- > > Key: HIVE-19046 > URL: https://issues.apache.org/jira/browse/HIVE-19046 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Minor > Attachments: HIVE-19046.1.patch, HIVE-19046.2.patch, > HIVE-19046.3.patch, HIVE-19046.4.patch > > > This is a follow-up Jira of the > [HIVE-18696|https://issues.apache.org/jira/browse/HIVE-18696] > [review|https://reviews.apache.org/r/65716/]. > The biggest part of these methods use the same code. It would make sense to > move this code part to a common method. 
> This code is almost the same in the two methods:
> {code}
> List<Future<Partition>> partFutures = Lists.newArrayList();
> final Table table = tbl;
> for (final Partition part : parts) {
>   if (!part.getTableName().equals(tblName) || !part.getDbName().equals(dbName)) {
>     throw new MetaException("Partition does not belong to target table "
>         + dbName + "." + tblName + ": " + part);
>   }
>   boolean shouldAdd = startAddPartition(ms, part, ifNotExists);
>   if (!shouldAdd) {
>     existingParts.add(part);
>     LOG.info("Not adding partition " + part + " as it already exists");
>     continue;
>   }
>   final UserGroupInformation ugi;
>   try {
>     ugi = UserGroupInformation.getCurrentUser();
>   } catch (IOException e) {
>     throw new RuntimeException(e);
>   }
>   partFutures.add(threadPool.submit(new Callable<Partition>() {
>     @Override
>     public Partition call() throws Exception {
>       ugi.doAs(new PrivilegedExceptionAction<Object>() {
>         @Override
>         public Object run() throws Exception {
>           try {
>             boolean madeDir = createLocationForAddedPartition(table, part);
>             if (addedPartitions.put(new PartValEqWrapper(part), madeDir) != null) {
>               // Technically, for ifNotExists case, we could insert one and discard the other
>               // because the first one now "exists", but it seems better to report the problem
>               // upstream as such a command doesn't make sense.
>               throw new MetaException("Duplicate partitions in the list: " + part);
>             }
>             initializeAddedPartition(table, part, madeDir);
>           } catch (MetaException e) {
>             throw new IOException(e.getMessage(), e);
>           }
>           return null;
>         }
>       });
>       return part;
>     }
>   }));
> }
> try {
>   for (Future<Partition> partFuture : partFutures) {
>     Partition part = partFuture.get();
>     if (part != null) {
>       newParts.add(part);
>     }
>   }
> } catch (InterruptedException | ExecutionException e) {
>   // cancel other tasks
>   for (Future<Partition> partFuture : partFutures) {
>     partFuture.cancel(true);
>   }
>   throw new MetaException(e.getMessage());
> }
> {code}
-- This message was sent by Atlassian JIRA
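Stripped of the Hive-specific details, the duplicated tail of both methods is a single reusable pattern: submit one Callable per item, collect results in order, and cancel everything on the first failure. A generic sketch of what the shared helper could look like follows; the helper name and types are hypothetical, not from the HIVE-19046 patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AddPartitionsSketch {
    // Collect all futures, cancelling the remaining tasks if any one fails,
    // mirroring the try/catch block duplicated in both add_partition methods.
    static <T> List<T> collectOrCancel(List<Future<T>> futures) throws Exception {
        List<T> results = new ArrayList<>();
        try {
            for (Future<T> f : futures) {
                T r = f.get();
                if (r != null) {
                    results.add(r);
                }
            }
        } catch (InterruptedException | ExecutionException e) {
            for (Future<T> f : futures) {
                f.cancel(true); // cancel other tasks, as the original does
            }
            throw new Exception(e.getMessage());
        }
        return results;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<String>> futures = new ArrayList<>();
        for (String part : new String[] {"p=1", "p=2"}) {
            futures.add(pool.submit(() -> part));
        }
        System.out.println(collectOrCancel(futures)); // [p=1, p=2]
        pool.shutdown();
    }
}
```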
[jira] [Commented] (HIVE-19046) Refactor the common parts of the HiveMetastore add_partition_core and add_partitions_pspec_core methods
[ https://issues.apache.org/jira/browse/HIVE-19046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497363#comment-16497363 ] Hive QA commented on HIVE-19046: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 59s{color} | {color:blue} standalone-metastore in master has 214 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} standalone-metastore: The patch generated 3 new + 368 unchanged - 1 fixed = 371 total (was 369) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11397/dev-support/hive-personality.sh | | git revision | master / 06807bc | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11397/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11397/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Refactor the common parts of the HiveMetastore add_partition_core and > add_partitions_pspec_core methods > --- > > Key: HIVE-19046 > URL: https://issues.apache.org/jira/browse/HIVE-19046 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Marta Kuczora >Assignee: Marta Kuczora >Priority: Minor > Attachments: HIVE-19046.1.patch, HIVE-19046.2.patch, > HIVE-19046.3.patch, HIVE-19046.4.patch > > > This is a follow-up Jira of the > [HIVE-18696|https://issues.apache.org/jira/browse/HIVE-18696] > [review|https://reviews.apache.org/r/65716/]. > The biggest part of these methods uses the same code. It would make sense to > move this code part to a common method. > This code is almost the same in the two methods: > {code} > List<Future<Partition>> partFutures = Lists.newArrayList(); > final Table table = tbl; > for (final Partition part : parts) { > if (!part.getTableName().equals(tblName) || > !part.getDbName().equals(dbName)) { > throw new MetaException("Partition does not belong to target > table " > + dbName + "." 
+ tblName + ": " + part); > } > boolean shouldAdd = startAddPartition(ms, part, ifNotExists); > if (!shouldAdd) { >
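The refactor this issue proposes — hoisting the duplicated validate-and-add loop out of add_partition_core and add_partitions_pspec_core into one shared helper — could look roughly like the sketch below. The class and method names here are hypothetical illustrations of the pattern, not the actual HIVE-19046 patch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the refactor described above: both add_partitions
// code paths call a single shared helper that validates each partition
// against the target table before it is added.
public class PartitionHelperDemo {
    static class Partition {
        final String dbName, tableName;
        Partition(String dbName, String tableName) {
            this.dbName = dbName;
            this.tableName = tableName;
        }
    }

    // The common loop extracted from the two near-identical methods.
    static List<Partition> validatePartitions(List<Partition> parts,
                                              String dbName, String tblName) {
        List<Partition> accepted = new ArrayList<>();
        for (Partition part : parts) {
            if (!part.tableName.equals(tblName) || !part.dbName.equals(dbName)) {
                throw new IllegalArgumentException(
                    "Partition does not belong to target table " + dbName + "." + tblName);
            }
            accepted.add(part);
        }
        return accepted;
    }

    public static void main(String[] args) {
        List<Partition> parts = new ArrayList<>();
        parts.add(new Partition("db1", "t1"));
        if (validatePartitions(parts, "db1", "t1").size() != 1) {
            throw new AssertionError("valid partition rejected");
        }
        try {
            validatePartitions(parts, "db1", "other");
            throw new AssertionError("mismatched partition accepted");
        } catch (IllegalArgumentException expected) {
            // the helper rejects partitions that target a different table
        }
        System.out.println("ok");
    }
}
```

Either caller would then only differ in how it builds the partition list, not in the validation loop itself.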
[jira] [Updated] (HIVE-19758) Set hadoop.version=3.1.0 in standalone-metastore
[ https://issues.apache.org/jira/browse/HIVE-19758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-19758: -- Attachment: HIVE-19758.1.patch Status: Patch Available (was: Open) > Set hadoop.version=3.1.0 in standalone-metastore > > > Key: HIVE-19758 > URL: https://issues.apache.org/jira/browse/HIVE-19758 > Project: Hive > Issue Type: Sub-task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-19758.1.patch > > > When HIVE-19243 set hadoop.version=3.1.0 it did not change the value used in > standalone-metastore which still uses 3.0.0-beta1. > At the moment standalone-metastore is still a module of hive and so this can > suck in the wrong code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19758) Set hadoop.version=3.1.0 in standalone-metastore
[ https://issues.apache.org/jira/browse/HIVE-19758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-19758: -- Issue Type: Sub-task (was: Task) Parent: HIVE-18116 > Set hadoop.version=3.1.0 in standalone-metastore > > > Key: HIVE-19758 > URL: https://issues.apache.org/jira/browse/HIVE-19758 > Project: Hive > Issue Type: Sub-task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > > When HIVE-19243 set hadoop.version=3.1.0 it did not change the value used in > standalone-metastore which still uses 3.0.0-beta1. > At the moment standalone-metastore is still a module of hive and so this can > suck in the wrong code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19758) Set hadoop.version=3.1.0 in standalone-metastore
[ https://issues.apache.org/jira/browse/HIVE-19758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman reassigned HIVE-19758: - > Set hadoop.version=3.1.0 in standalone-metastore > > > Key: HIVE-19758 > URL: https://issues.apache.org/jira/browse/HIVE-19758 > Project: Hive > Issue Type: Task >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Major > > When HIVE-19243 set hadoop.version=3.1.0 it did not change the value used in > standalone-metastore which still uses 3.0.0-beta1. > At the moment standalone-metastore is still a module of hive and so this can > suck in the wrong code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19558) HiveAuthorizationProviderBase gets catalog name from config rather than db object
[ https://issues.apache.org/jira/browse/HIVE-19558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497348#comment-16497348 ] Hive QA commented on HIVE-19558: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925993/HIVE-19558.1take8.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14427 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testSyntheticComplexSchema[5] (batchId=198) org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag[4] (batchId=198) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11396/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11396/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11396/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12925993 - PreCommit-HIVE-Build > HiveAuthorizationProviderBase gets catalog name from config rather than db > object > - > > Key: HIVE-19558 > URL: https://issues.apache.org/jira/browse/HIVE-19558 > Project: Hive > Issue Type: Bug > Components: Authorization >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Fix For: 3.0.1 > > Attachments: HIVE-19558.1take2.patch, HIVE-19558.1take3.patch, > HIVE-19558.1take4.patch, HIVE-19558.1take5.patch, HIVE-19558.1take6.patch, > HIVE-19558.1take7.patch, HIVE-19558.1take8.patch, HIVE-19558.patch > > > HiveAuthorizationProviderBase.getDatabase uses just the database name to > fetch the database, relying on getDefaultCatalog() to fetch the catalog name > from the conf file. This does not work when the client has passed in an > object for a different catalog. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19584) Dictionary encoding for string types
[ https://issues.apache.org/jira/browse/HIVE-19584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497341#comment-16497341 ] Teddy Choi commented on HIVE-19584: --- Retrying the HiveQA test. Patch #4 is the same as #3. > Dictionary encoding for string types > > > Key: HIVE-19584 > URL: https://issues.apache.org/jira/browse/HIVE-19584 > Project: Hive > Issue Type: Sub-task >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-19584.1.patch, HIVE-19584.2.patch, > HIVE-19584.3.patch, HIVE-19584.4.patch > > > Apache Arrow supports dictionary encoding for some data types. So implement > dictionary encoding for string types in Arrow SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19584) Dictionary encoding for string types
[ https://issues.apache.org/jira/browse/HIVE-19584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-19584: -- Attachment: (was: HIVE-19584.3.patch) > Dictionary encoding for string types > > > Key: HIVE-19584 > URL: https://issues.apache.org/jira/browse/HIVE-19584 > Project: Hive > Issue Type: Sub-task >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-19584.1.patch, HIVE-19584.2.patch, > HIVE-19584.3.patch, HIVE-19584.4.patch > > > Apache Arrow supports dictionary encoding for some data types. So implement > dictionary encoding for string types in Arrow SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19584) Dictionary encoding for string types
[ https://issues.apache.org/jira/browse/HIVE-19584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-19584: -- Attachment: HIVE-19584.4.patch > Dictionary encoding for string types > > > Key: HIVE-19584 > URL: https://issues.apache.org/jira/browse/HIVE-19584 > Project: Hive > Issue Type: Sub-task >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-19584.1.patch, HIVE-19584.2.patch, > HIVE-19584.3.patch, HIVE-19584.4.patch > > > Apache Arrow supports dictionary encoding for some data types. So implement > dictionary encoding for string types in Arrow SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19584) Dictionary encoding for string types
[ https://issues.apache.org/jira/browse/HIVE-19584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Teddy Choi updated HIVE-19584: -- Attachment: HIVE-19584.3.patch > Dictionary encoding for string types > > > Key: HIVE-19584 > URL: https://issues.apache.org/jira/browse/HIVE-19584 > Project: Hive > Issue Type: Sub-task >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-19584.1.patch, HIVE-19584.2.patch, > HIVE-19584.3.patch, HIVE-19584.3.patch > > > Apache Arrow supports dictionary encoding for some data types. So implement > dictionary encoding for string types in Arrow SerDe. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19571) Ability to run multiple pre-commit jobs on a ptest server
[ https://issues.apache.org/jira/browse/HIVE-19571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497319#comment-16497319 ] Deepak Jaiswal commented on HIVE-19571: --- [~stakiar] thanks for working on it. I would like to lend a hand in this effort. > Ability to run multiple pre-commit jobs on a ptest server > - > > Key: HIVE-19571 > URL: https://issues.apache.org/jira/browse/HIVE-19571 > Project: Hive > Issue Type: Sub-task > Components: Testing Infrastructure >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-17317.WIP.1.patch > > > I've been taking a look at the Disk, Network, and CPU usage of the GCE > instances we run ptest on, and it doesn't look like we are fully utilizing > the machines. The resource usage is very up and down. > During each ptest execution, there is a large chunk of time (~20 min) where > it's just the Jenkins job that is doing any work (checking out github repos, > building code, figuring out test batches, etc.). During this time, the ptest > nodes are mostly idle - the CPU and Disk I/O are almost zero. > Even when ptest is running, I think some of the resources are under-utilized. > Network and disk resources spike at the beginning of the job, probably because > ptest is distributing resources to each machine, each slave is downloading > jars, etc. However, after that, when the actual tests run, there is almost 0 > network activity (which makes sense since tests run on a single node). For > disk usage, there is activity, but not nearly as high as when the setup phase > was occurring. CPU usage fluctuates between 40-80%. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19203) Thread-Safety Issue in HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-19203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19203: - Attachment: HIVE-19203.1.patch Status: Patch Available (was: Open) Changed nextSerialNum to AtomicInteger. > Thread-Safety Issue in HiveMetaStore > > > Key: HIVE-19203 > URL: https://issues.apache.org/jira/browse/HIVE-19203 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: Alice Fan >Priority: Minor > Attachments: HIVE-19203.1.patch > > > [https://github.com/apache/hive/blob/550d1e1196b7c801c572092db974a459aac6c249/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L345-L351] > {code:java} > private static int nextSerialNum = 0; > private static ThreadLocal<Integer> threadLocalId = new > ThreadLocal<Integer>() { > @Override > protected Integer initialValue() { > return nextSerialNum++; > } > };{code} > > {{nextSerialNum}} needs to be an atomic value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
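The fix described in the update ("Changed nextSerialNum to AtomicInteger") can be illustrated with a small standalone sketch. This is not the actual HiveMetaStore patch, just the pattern: `nextSerialNum++` on a plain int is not atomic, so two threads initializing their thread-local id concurrently can receive the same value; AtomicInteger.getAndIncrement() closes that race.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Standalone illustration of the fix: the unsynchronized "nextSerialNum++"
// is replaced by AtomicInteger.getAndIncrement(), which is atomic.
public class SerialIdDemo {
    private static final AtomicInteger nextSerialNum = new AtomicInteger(0);

    // Each thread lazily gets its own unique id on first access.
    private static final ThreadLocal<Integer> threadLocalId =
        ThreadLocal.withInitial(nextSerialNum::getAndIncrement);

    static int currentId() {
        return threadLocalId.get();
    }

    public static void main(String[] args) {
        int first = currentId();
        // The id is stable within one thread.
        if (first != currentId()) throw new AssertionError("id changed within a thread");

        // A second thread gets a distinct id.
        final int[] otherId = new int[1];
        Thread t = new Thread(() -> otherId[0] = currentId());
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        if (first == otherId[0]) throw new AssertionError("ids collided");
        System.out.println("ok");
    }
}
```

With the original non-atomic counter, the read-increment-write in initialValue() could interleave across threads and hand out duplicate ids.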
[jira] [Updated] (HIVE-19669) Upgrade ORC to 1.5.1
[ https://issues.apache.org/jira/browse/HIVE-19669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19669: --- Resolution: Fixed Fix Version/s: 4.0.0 3.1.0 Status: Resolved (was: Patch Available) Pushed to master, branch-3. Cc [~vgarg] > Upgrade ORC to 1.5.1 > > > Key: HIVE-19669 > URL: https://issues.apache.org/jira/browse/HIVE-19669 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19669.01.patch, HIVE-19669.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18973) Make transaction system work with catalogs
[ https://issues.apache.org/jira/browse/HIVE-18973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-18973: -- Status: Open (was: Patch Available) Forgot to do the client side of this. Withdrawing the patch until I do that part as well. > Make transaction system work with catalogs > -- > > Key: HIVE-18973 > URL: https://issues.apache.org/jira/browse/HIVE-18973 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Labels: pull-request-available > Attachments: HIVE-18973.patch > > > The transaction tables need to understand catalogs. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19699) Re-enable TestReOptimization
[ https://issues.apache.org/jira/browse/HIVE-19699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19699: --- Resolution: Fixed Fix Version/s: 4.0.0 3.1.0 Status: Resolved (was: Patch Available) Pushed to master, branch-3. Cc [~vgarg] > Re-enable TestReOptimization > > > Key: HIVE-19699 > URL: https://issues.apache.org/jira/browse/HIVE-19699 > Project: Hive > Issue Type: Test > Components: Physical Optimizer >Affects Versions: 3.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Zoltan Haindrich >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19699.01.patch, HIVE-19699.01.patch > > > https://builds.apache.org/job/PreCommit-HIVE-Build/11180/testReport/junit/org.apache.hadoop.hive.ql.plan.mapping/TestReOptimization/testStatCachingMetaStore/ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19558) HiveAuthorizationProviderBase gets catalog name from config rather than db object
[ https://issues.apache.org/jira/browse/HIVE-19558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497300#comment-16497300 ] Hive QA commented on HIVE-19558: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 51s{color} | {color:blue} ql in master has 2278 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s{color} | {color:red} ql: The patch generated 1 new + 300 unchanged - 0 fixed = 301 total (was 300) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-11396/dev-support/hive-personality.sh | | git revision | master / 6c78eda | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-11396/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11396/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> HiveAuthorizationProviderBase gets catalog name from config rather than db > object > - > > Key: HIVE-19558 > URL: https://issues.apache.org/jira/browse/HIVE-19558 > Project: Hive > Issue Type: Bug > Components: Authorization >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Fix For: 3.0.1 > > Attachments: HIVE-19558.1take2.patch, HIVE-19558.1take3.patch, > HIVE-19558.1take4.patch, HIVE-19558.1take5.patch, HIVE-19558.1take6.patch, > HIVE-19558.1take7.patch, HIVE-19558.1take8.patch, HIVE-19558.patch > > > HiveAuthorizationProviderBase.getDatabase uses just the database name to > fetch the database, relying on getDefaultCatalog() to fetch the catalog name > from the conf file. This does not work when the client has passed in an > object for a different catalog. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19743) hive is not pushing predicate down to HBaseStorageHandler if hive key mapped with hbase is stored as varchar
[ https://issues.apache.org/jira/browse/HIVE-19743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497291#comment-16497291 ] Gopal V commented on HIVE-19743: Yes, that is correct because varchar() != string, does it work right when you cast the constants to the right type? {code} hive> explain select count(1) from foo where y = cast('1' as varchar(10)); OK Plan optimized by CBO. Vertex dependency in root stage Reducer 2 <- Map 1 (CUSTOM_SIMPLE_EDGE) Stage-0 Fetch Operator limit:-1 Stage-1 Reducer 2 vectorized, llap File Output Operator [FS_14] Group By Operator [GBY_13] (rows=1 width=102) Output:["_col0"],aggregations:["count(VALUE._col0)"] <-Map 1 [CUSTOM_SIMPLE_EDGE] vectorized, llap PARTITION_ONLY_SHUFFLE [RS_12] Group By Operator [GBY_11] (rows=1 width=102) Output:["_col0"],aggregations:["count()"] Select Operator [SEL_10] (rows=1 width=94) Filter Operator [FIL_9] (rows=1 width=94) predicate:(y = '1') TableScan [TS_0] (rows=1 width=94) default@foo,foo,Tbl:COMPLETE,Col:NONE,Output:["y"] Time taken: 0.177 seconds, Fetched: 23 row(s) {code} > hive is not pushing predicate down to HBaseStorageHandler if hive key mapped > with hbase is stored as varchar > > > Key: HIVE-19743 > URL: https://issues.apache.org/jira/browse/HIVE-19743 > Project: Hive > Issue Type: Bug > Components: HBase Handler, Hive >Affects Versions: 2.1.0 > Environment: java8,centos7 >Reporter: Rajkumar Singh >Priority: Major > > Steps to Reproduce: > {code} > //hbase table > create 'mytable', 'cf' > put 'mytable', 'ABCDEF|GHIJK|ijj123kl-mn4o-4pq5-678r-st90123u0v4', > 'cf:message', 'hello world' > put 'mytable', 'ABCDEF1|GHIJK1|ijj123kl-mn4o-4pq5-678r-st90123u0v41', > 'cf:foo', 0x0 > // hive table with key stored as varchar > show create table hbase_table_4; > +---+--+ > | createtab_stmt | > +---+--+ > | CREATE EXTERNAL TABLE `hbase_table_4`( | > | `hbase_key` varchar(80) COMMENT 'from deserializer', | > | `value` string COMMENT 'from deserializer', | > | `value1` string 
COMMENT 'from deserializer') | > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.hbase.HBaseSerDe' | > | STORED BY | > | 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' | > | WITH SERDEPROPERTIES ( | > | 'hbase.columns.mapping'=':key,cf:foo,cf:message', | > | 'serialization.format'='1') | > | TBLPROPERTIES ( | > | 'COLUMN_STATS_ACCURATE'='\{\"BASIC_STATS\":\"true\"}', | > | 'hbase.table.name'='mytable', | > | 'numFiles'='0', | > | 'numRows'='0', | > | 'rawDataSize'='0', | > | 'totalSize'='0', | > | 'transient_lastDdlTime'='1527708430') | > +---+--+ > > // hive table key stored as string > CREATE EXTERNAL TABLE `hbase_table_5`( | > | `hbase_key` string COMMENT 'from deserializer', | > | `value` string COMMENT 'from deserializer', | > | `value1` string COMMENT 'from deserializer') | > | ROW FORMAT SERDE | > | 'org.apache.hadoop.hive.hbase.HBaseSerDe' | > | STORED BY | > | 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' | > | WITH SERDEPROPERTIES ( | > | 'hbase.columns.mapping'=':key,cf:foo,cf:message', | > | 'serialization.format'='1') | > | TBLPROPERTIES ( | > | 'COLUMN_STATS_ACCURATE'='\{\"BASIC_STATS\":\"true\"}', | > | 'hbase.table.name'='mytable', | > | 'numFiles'='0', | > | 'numRows'='0', | > | 'rawDataSize'='0', | > | 'totalSize'='0', | > | 'transient_lastDdlTime'='1527708520') | > > Explain Plan > explain select * from
[jira] [Commented] (HIVE-19669) Upgrade ORC to 1.5.1
[ https://issues.apache.org/jira/browse/HIVE-19669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497288#comment-16497288 ] Prasanth Jayachandran commented on HIVE-19669: -- +1 > Upgrade ORC to 1.5.1 > > > Key: HIVE-19669 > URL: https://issues.apache.org/jira/browse/HIVE-19669 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19669.01.patch, HIVE-19669.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19757) hive.version.shortname should be 4.0
[ https://issues.apache.org/jira/browse/HIVE-19757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-19757: -- > hive.version.shortname should be 4.0 > > > Key: HIVE-19757 > URL: https://issues.apache.org/jira/browse/HIVE-19757 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Minor > > pom.xml still points to > {{3.1.0}} which causes > issues with schemaTool init scripts -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19382) Acquire locks before generating valid transaction list for some operations
[ https://issues.apache.org/jira/browse/HIVE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19382: --- Attachment: HIVE-19382.05.patch > Acquire locks before generating valid transaction list for some operations > -- > > Key: HIVE-19382 > URL: https://issues.apache.org/jira/browse/HIVE-19382 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19382.01.patch, HIVE-19382.02.patch, > HIVE-19382.03.patch, HIVE-19382.04.patch, HIVE-19382.05.patch, > HIVE-19382.patch > > > To ensure correctness, in particular for operations that require exclusive > ({{INSERT OVERWRITE}}) and semishared ({{UPDATE}}/{{DELETE}}) locks. > This is a temporary fix until lock acquisition is moved before analysis in > HIVE-18948. > With this fix, the system proceeds as follows. The driver will acquire the > snapshot, compile the query against that snapshot, and then acquire > locks. If the snapshot is still valid, it will continue as usual; if the > snapshot is no longer valid, it will recompile the query. > This is easier to implement than the full solution described in HIVE-18948 > because we do not need to move the logic that extracts the read/write entities > from a query before compilation (actually while parsing). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19382) Acquire locks before generating valid transaction list for some operations
[ https://issues.apache.org/jira/browse/HIVE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19382: --- Attachment: (was: HIVE-19382.05.patch) > Acquire locks before generating valid transaction list for some operations > -- > > Key: HIVE-19382 > URL: https://issues.apache.org/jira/browse/HIVE-19382 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19382.01.patch, HIVE-19382.02.patch, > HIVE-19382.03.patch, HIVE-19382.04.patch, HIVE-19382.05.patch, > HIVE-19382.patch > > > To ensure correctness, in particular for operations that require exclusive > ({{INSERT OVERWRITE}}) and semishared ({{UPDATE}}/{{DELETE}}) locks. > This is a temporary fix until lock acquisition is moved before analysis in > HIVE-18948. > With this fix, the system proceeds as follows. The driver will acquire the > snapshot, compile the query against that snapshot, and then acquire > locks. If the snapshot is still valid, it will continue as usual; if the > snapshot is no longer valid, it will recompile the query. > This is easier to implement than the full solution described in HIVE-18948 > because we do not need to move the logic that extracts the read/write entities > from a query before compilation (actually while parsing). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19382) Acquire locks before generating valid transaction list for some operations
[ https://issues.apache.org/jira/browse/HIVE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19382: --- Attachment: HIVE-19382.05.patch > Acquire locks before generating valid transaction list for some operations > -- > > Key: HIVE-19382 > URL: https://issues.apache.org/jira/browse/HIVE-19382 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19382.01.patch, HIVE-19382.02.patch, > HIVE-19382.03.patch, HIVE-19382.04.patch, HIVE-19382.05.patch, > HIVE-19382.patch > > > To ensure correctness, in particular for operations that require exclusive > ({{INSERT OVERWRITE}}) and semishared ({{UPDATE}}/{{DELETE}}) locks. > This is a temporary fix until lock acquisition is moved before analysis in > HIVE-18948. > With this fix, the system proceeds as follows. The driver will acquire the > snapshot, compile the query against that snapshot, and then acquire > locks. If the snapshot is still valid, it will continue as usual; if the > snapshot is no longer valid, it will recompile the query. > This is easier to implement than the full solution described in HIVE-18948 > because we do not need to move the logic that extracts the read/write entities > from a query before compilation (actually while parsing). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19731) Change staging tmp directory used by TestHCatLoaderComplexSchema
[ https://issues.apache.org/jira/browse/HIVE-19731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497273#comment-16497273 ] Hive QA commented on HIVE-19731: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925567/HIVE-19731.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14427 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testExceptions (batchId=304) org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=307) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11395/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11395/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11395/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12925567 - PreCommit-HIVE-Build > Change staging tmp directory used by TestHCatLoaderComplexSchema > > > Key: HIVE-19731 > URL: https://issues.apache.org/jira/browse/HIVE-19731 > Project: Hive > Issue Type: Test > Components: Test >Affects Versions: 3.1.0, 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19731.patch > > > Another one that is set to default and hence is flaky. 
> https://builds.apache.org/job/PreCommit-HIVE-Build/11321/testReport/org.apache.hive.hcatalog.pig/TestHCatLoaderComplexSchema/testSyntheticComplexSchema_3_/ > {noformat} > org.apache.hadoop.util.Shell$ExitCodeException: chmod: cannot access > ‘/tmp/hadoop/mapred/staging/hiveptest985275899/.staging/job_local985275899_0088’: > No such file or directory > at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009) > ~[hadoop-common-3.1.0.jar:?] > at org.apache.hadoop.util.Shell.run(Shell.java:902) > ~[hadoop-common-3.1.0.jar:?] > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227) > ~[hadoop-common-3.1.0.jar:?] > at org.apache.hadoop.util.Shell.execCommand(Shell.java:1321) > ~[hadoop-common-3.1.0.jar:?] > at org.apache.hadoop.util.Shell.execCommand(Shell.java:1303) > ~[hadoop-common-3.1.0.jar:?] > at > org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:840) > ~[hadoop-common-3.1.0.jar:?] > at > org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:508) > ~[hadoop-common-3.1.0.jar:?] > at > org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:489) > ~[hadoop-common-3.1.0.jar:?] > at > org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:511) > ~[hadoop-common-3.1.0.jar:?] > at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:727) > ~[hadoop-common-3.1.0.jar:?] > at > org.apache.hadoop.mapreduce.JobResourceUploader.mkdirs(JobResourceUploader.java:658) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:172) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:133) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:102) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] 
> at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:197) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at java.security.AccessController.doPrivileged(Native Method) > ~[?:1.8.0_102] > at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_102] > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682) > ~[hadoop-common-3.1.0.jar:?] > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567) > ~[hadoop-mapreduce-client-core-3.1.0.jar:?] > at > org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:336) >
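The remedy the issue title describes, giving the test its own staging directory instead of the shared /tmp default, can be sketched as follows (the helper and the commented-out configuration call are illustrative assumptions, not the actual patch):

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: give every test run a private MR staging directory so concurrent
// local jobs cannot race on entries under /tmp/hadoop/mapred/staging.
public class StagingDirDemo {
    // Hypothetical helper; the real patch adjusts the test's Hadoop conf.
    static Path uniqueStagingDir(String testName) throws Exception {
        // createTempDirectory guarantees a fresh, unique directory per call
        return Files.createTempDirectory("hive-staging-" + testName + "-");
    }

    public static void main(String[] args) throws Exception {
        Path a = uniqueStagingDir("testSyntheticComplexSchema");
        Path b = uniqueStagingDir("testSyntheticComplexSchema");
        // conf.set("yarn.app.mapreduce.am.staging-dir", a.toString()); // hypothetical usage
        // Two invocations never collide, which removes the race behind the
        // intermittent "chmod: cannot access ... .staging" failure above.
        System.out.println(!a.equals(b)); // prints true
    }
}
```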
[jira] [Updated] (HIVE-13981) Operation.toSQLException eats full exception stack
[ https://issues.apache.org/jira/browse/HIVE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13981: Status: Patch Available (was: Open) > Operation.toSQLException eats full exception stack > -- > > Key: HIVE-13981 > URL: https://issues.apache.org/jira/browse/HIVE-13981 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-13981.03.patch, HIVE-13981.1.patch, > HIVE-13981.2.patch > > > Operation.toSQLException eats half of the exception stack and makes debugging > hard. For example, we saw an exception: > {code} > org.apache.hive.service.cli.HiveSQLException: Error while compiling > statement: FAILED: NullPointerException null > at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:336) > at > org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:113) > at > org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:182) > at org.apache.hive.service.cli.operation.Operation.run(Operation.java:278) > at > org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:421) > at > org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:408) > at > org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:276) > at > org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:505) > at > org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1317) > at > org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1302) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > 
org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:562) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > {code} > The real stack causing the NPE is lost. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
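The loss described above comes down to exception chaining: if the wrapping code copies only the message and never records the original throwable as the cause, the NPE's own stack trace is unrecoverable. A minimal stand-in (simplified demo classes, not Hive's actual Operation code) shows the difference:

```java
// Demo of why dropping the cause during wrapping loses the root stack trace.
public class CauseDemo {
    // Mirrors the bug: only the message survives; the NPE's stack is gone.
    static Exception lossyWrap(Throwable t) {
        return new Exception("Error while compiling statement: " + t);
    }

    // The usual fix: keep the original throwable chained as the cause,
    // so getCause()/printStackTrace() can still show where the NPE arose.
    static Exception chainedWrap(Throwable t) {
        return new Exception("Error while compiling statement: " + t, t);
    }

    public static void main(String[] args) {
        NullPointerException npe = new NullPointerException();
        System.out.println(lossyWrap(npe).getCause() == null);   // prints true: root cause lost
        System.out.println(chainedWrap(npe).getCause() == npe);  // prints true: root cause kept
    }
}
```

With the chained form, the "Caused by:" section of the printed trace contains the frames that actually produced the NPE.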
[jira] [Assigned] (HIVE-13981) Operation.toSQLException eats full exception stack
[ https://issues.apache.org/jira/browse/HIVE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-13981: --- Assignee: Daniel Dai (was: Sergey Shelukhin) > Operation.toSQLException eats full exception stack > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-13981) Operation.toSQLException eats full exception stack
[ https://issues.apache.org/jira/browse/HIVE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13981: Attachment: HIVE-13981.03.patch > Operation.toSQLException eats full exception stack > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-13981) Operation.toSQLException eats full exception stack
[ https://issues.apache.org/jira/browse/HIVE-13981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-13981: --- Assignee: Sergey Shelukhin (was: Daniel Dai) > Operation.toSQLException eats full exception stack > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18875) Enable SMB Join by default in Tez
[ https://issues.apache.org/jira/browse/HIVE-18875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18875: -- Attachment: HIVE-18875.11.patch > Enable SMB Join by default in Tez > - > > Key: HIVE-18875 > URL: https://issues.apache.org/jira/browse/HIVE-18875 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18875.1.patch, HIVE-18875.10.patch, > HIVE-18875.11.patch, HIVE-18875.2.patch, HIVE-18875.3.patch, > HIVE-18875.4.patch, HIVE-18875.5.patch, HIVE-18875.6.patch, > HIVE-18875.7.patch, HIVE-18875.8.patch, HIVE-18875.9.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19747) "GRANT ALL TO USER" failed with NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-19747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497013#comment-16497013 ] Aihua Xu edited comment on HIVE-19747 at 5/31/18 10:22 PM: --- Thanks [~Rajkumar Singh] . I'm checking the syntax here https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization {noformat} GRANT role_name [, role_name] ... TO principal_specification [, principal_specification] ... [ WITH ADMIN OPTION ]; principal_specification : USER user | ROLE role {noformat} and also in authorization_8.q qfile, {noformat} set hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider; set hive.security.authorization.enabled=true; GRANT ALL TO USER hive_test_user; {noformat} Do we support such syntax? Should we give syntax error for {{grant all to user abc}} then? was (Author: aihuaxu): Thanks [~Rajkumar Singh] . I'm checking the syntax here https://cwiki.apache.org/confluence/display/Hive/SQL+Standard+Based+Hive+Authorization {noformat} GRANT role_name [, role_name] ... TO principal_specification [, principal_specification] ... [ WITH ADMIN OPTION ]; principal_specification : USER user | ROLE role {noformat} and also in authorization_8.q qfile, {noformat} set hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider; set hive.security.authorization.enabled=true; GRANT ALL TO USER hive_test_user; {noformat} Do we support such syntax? Should we give syntax error for {{grant all to user abc}} then? In the qtest qu > "GRANT ALL TO USER" failed with NullPointerException > > > Key: HIVE-19747 > URL: https://issues.apache.org/jira/browse/HIVE-19747 > Project: Hive > Issue Type: Bug > Components: Authorization >Affects Versions: 2.1.0 >Reporter: Aihua Xu >Priority: Minor > > If you issue the command 'grant all to user abc', you will see the following > NPE exception. 
Seems the type in hivePrivObject is not initialized. > {noformat} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLAuthorizationUtils.isOwner(SQLAuthorizationUtils.java:265) > at > org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLAuthorizationUtils.getPrivilegesFromMetaStore(SQLAuthorizationUtils.java:212) > at > org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.GrantPrivAuthUtils.checkRequiredPrivileges(GrantPrivAuthUtils.java:64) > at > org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.GrantPrivAuthUtils.authorize(GrantPrivAuthUtils.java:50) > at > org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAccessController.grantPrivileges(SQLStdHiveAccessController.java:179) > at > org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAccessControllerWrapper.grantPrivileges(SQLStdHiveAccessControllerWrapper.java:70) > at > org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAuthorizerImpl.grantPrivileges(HiveAuthorizerImpl.java:48) > at > org.apache.hadoop.hive.ql.exec.DDLTask.grantOrRevokePrivileges(DDLTask.java:1123 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
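Whatever the grammar decision turns out to be, the stack trace points at a missing null check where SQLAuthorizationUtils.isOwner dereferences the privilege object's uninitialized type. A simplified stand-in (hypothetical types, not Hive's classes) of the failure mode and a defensive guard:

```java
// Hypothetical enum standing in for the privilege object's type field.
enum PrivObjectType { DATABASE, TABLE }

// Sketch of the failure mode: isOwner receives a type that was never
// initialized for "GRANT ALL TO USER", and dereferencing it throws NPE.
public class GrantAllDemo {
    static boolean isOwner(PrivObjectType type) {
        if (type == null) {
            // Guard the trace shows is missing: an uninitialized object
            // cannot have an owner, so answer false instead of throwing.
            return false;
        }
        return type == PrivObjectType.TABLE;
    }

    public static void main(String[] args) {
        System.out.println(isOwner(null)); // prints false instead of throwing NPE
    }
}
```

A cleaner long-term fix, as the comment suggests, may be rejecting the statement at parse time rather than guarding deep inside authorization.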
[jira] [Commented] (HIVE-19663) refactor LLAP IO report generation
[ https://issues.apache.org/jira/browse/HIVE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497256#comment-16497256 ] Sergey Shelukhin commented on HIVE-19663: - [~prasanth_j] can you take a look? thnx > refactor LLAP IO report generation > -- > > Key: HIVE-19663 > URL: https://issues.apache.org/jira/browse/HIVE-19663 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19663.patch > > > Follow-up from HIVE-19642. > Instead of each component calling some other component in a chain, all the > parts of the state dump should be called in one place to avoid weird > dependencies/sequences that need to be accounted for to generate the report. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
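The refactor described above, one orchestrator pulling a section from each component instead of components calling each other in a chain, can be sketched like this (names are illustrative, not Hive's LLAP classes):

```java
import java.util.List;

// Sketch: a single report generator asks each component for its state dump,
// so no component needs to know about (or call into) any other.
public class ReportDemo {
    interface Dumpable {
        String dumpState();
    }

    static String generateReport(List<Dumpable> components) {
        StringBuilder sb = new StringBuilder();
        for (Dumpable c : components) {
            // All sections are gathered in one place, in one defined order.
            sb.append(c.dumpState()).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Dumpable> parts = List.of(() -> "cache: ok", () -> "io: ok");
        System.out.print(generateReport(parts));
    }
}
```

The design benefit is that adding or reordering a section touches only the orchestrator, not a chain of cross-component calls.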
[jira] [Updated] (HIVE-19663) refactor LLAP IO report generation
[ https://issues.apache.org/jira/browse/HIVE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19663: Attachment: HIVE-19663.patch > refactor LLAP IO report generation > -- > > Key: HIVE-19663 > URL: https://issues.apache.org/jira/browse/HIVE-19663 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19663.patch > > > Follow-up from HIVE-19642. > Instead of each component calling some other component in a chain, all the > parts of the state dump should be called in one place to avoid weird > dependencies/sequences that need to be accounted for to generate the report. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19663) refactor LLAP IO report generation
[ https://issues.apache.org/jira/browse/HIVE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19663: Status: Patch Available (was: Open) > refactor LLAP IO report generation > -- > > Key: HIVE-19663 > URL: https://issues.apache.org/jira/browse/HIVE-19663 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19663.patch > > > Follow-up from HIVE-19642. > Instead of each component calling some other component in a chain, all the > parts of the state dump should be called in one place to avoid weird > dependencies/sequences that need to be accounted for to generate the report. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19731) Change staging tmp directory used by TestHCatLoaderComplexSchema
[ https://issues.apache.org/jira/browse/HIVE-19731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19731: --- Resolution: Fixed Status: Resolved (was: Patch Available) > Change staging tmp directory used by TestHCatLoaderComplexSchema > > > Key: HIVE-19731 > URL: https://issues.apache.org/jira/browse/HIVE-19731 > Project: Hive > Issue Type: Test > Components: Test >Affects Versions: 3.1.0, 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19731.patch > > > Another one that is set to default and hence is flaky. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19752) PerfLogger integration for critical Hive-on-S3 paths
[ https://issues.apache.org/jira/browse/HIVE-19752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19752: Status: Patch Available (was: Open) > PerfLogger integration for critical Hive-on-S3 paths > > > Key: HIVE-19752 > URL: https://issues.apache.org/jira/browse/HIVE-19752 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19752.1.patch > > > There are several areas where Hive performs a lot of S3 operations, it would > be good to add PerfLogger statements around this so we can measure how long > they take. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
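The begin/end measurement pattern the issue calls for can be sketched with a minimal stand-in timer (this is not Hive's PerfLogger API, just the shape of the instrumentation wrapped around an S3-heavy call):

```java
// Minimal stand-in for timing a slow filesystem operation with a
// begin/end pair, the pattern PerfLogger statements follow.
public class PerfTimerDemo {
    static long begin() {
        return System.nanoTime();
    }

    static long endMillis(long start) {
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        long t = begin();
        Thread.sleep(10); // stands in for a slow S3 list/rename operation
        long elapsed = endMillis(t);
        // A real PerfLogger would record this duration under a named key
        // so slow S3 paths show up in the query's performance summary.
        System.out.println(elapsed >= 5); // prints true
    }
}
```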
[jira] [Updated] (HIVE-19752) PerfLogger integration for critical Hive-on-S3 paths
[ https://issues.apache.org/jira/browse/HIVE-19752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19752: Attachment: HIVE-19752.1.patch > PerfLogger integration for critical Hive-on-S3 paths > > > Key: HIVE-19752 > URL: https://issues.apache.org/jira/browse/HIVE-19752 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19752.1.patch > > > There are several areas where Hive performs a lot of S3 operations, it would > be good to add PerfLogger statements around this so we can measure how long > they take. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19731) Change staging tmp directory used by TestHCatLoaderComplexSchema
[ https://issues.apache.org/jira/browse/HIVE-19731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497221#comment-16497221 ] Hive QA commented on HIVE-19731: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 19s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-11395/patches/PreCommit-HIVE-Build-11395.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-11395/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Change staging tmp directory used by TestHCatLoaderComplexSchema > > > Key: HIVE-19731 > URL: https://issues.apache.org/jira/browse/HIVE-19731 > Project: Hive > Issue Type: Test > Components: Test >Affects Versions: 3.1.0, 4.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 3.1.0, 4.0.0 > > Attachments: HIVE-19731.patch > > > Another one that is set to default and hence is flaky. 
[jira] [Commented] (HIVE-19755) insertsel_fail.q.out needs to be updated on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497224#comment-16497224 ] Jesus Camacho Rodriguez commented on HIVE-19755: +1 > insertsel_fail.q.out needs to be updated on branch-3 > > > Key: HIVE-19755 > URL: https://issues.apache.org/jira/browse/HIVE-19755 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19755.1-branch-3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-19754) vector_decimal_2 failing on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg resolved HIVE-19754. Resolution: Fixed Fix Version/s: 3.1.0 > vector_decimal_2 failing on branch-3 > > > Key: HIVE-19754 > URL: https://issues.apache.org/jira/browse/HIVE-19754 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19754.1.patch-branch-3.patch > > > caused by HIVE-19108. This needs golden file update only on branch-3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18583) Enable DateRangeRules
[ https://issues.apache.org/jira/browse/HIVE-18583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497216#comment-16497216 ] Hive QA commented on HIVE-18583: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925574/HIVE-18583.3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11394/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11394/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11394/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-05-31 21:31:27.963 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-11394/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-05-31 21:31:27.966 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 338a2d4..6c78eda master -> origin/master + git reset --hard HEAD HEAD is now at 338a2d4 HIVE-19529: Vectorization: Date/Timestamp NULL issues (Matt McCline, reviewed by Teddy Choi) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 6c78eda HIVE-19598 : Add Acid V1 to V2 upgrade module (Eugene Koifman via Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-05-31 21:31:29.217 + rm -rf ../yetus_PreCommit-HIVE-Build-11394 + mkdir ../yetus_PreCommit-HIVE-Build-11394 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-11394 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11394/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch fatal: git apply: bad git-diff - inconsistent old filename on line 121 error: patch failed: ql/src/test/queries/clientpositive/druidmini_extractTime.q:1 Falling back to three-way merge... Applied patch to 'ql/src/test/queries/clientpositive/druidmini_extractTime.q' with conflicts. Going to apply patch with: git apply -p1 error: patch failed: ql/src/test/queries/clientpositive/druidmini_extractTime.q:1 Falling back to three-way merge... Applied patch to 'ql/src/test/queries/clientpositive/druidmini_extractTime.q' with conflicts. 
U ql/src/test/queries/clientpositive/druidmini_extractTime.q + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-11394 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12925574 - PreCommit-HIVE-Build > Enable DateRangeRules > -- > > Key: HIVE-18583 > URL: https://issues.apache.org/jira/browse/HIVE-18583 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-18583.2.patch, HIVE-18583.3.patch, HIVE-18583.patch > > > Enable DateRangeRules to translate druid filters to date ranges. > Need calcite version to upgrade to 0.16.0 before merging this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19669) Upgrade ORC to 1.5.1
[ https://issues.apache.org/jira/browse/HIVE-19669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497212#comment-16497212 ] Hive QA commented on HIVE-19669: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12925582/HIVE-19669.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14429 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/11393/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11393/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11393/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12925582 - PreCommit-HIVE-Build > Upgrade ORC to 1.5.1 > > > Key: HIVE-19669 > URL: https://issues.apache.org/jira/browse/HIVE-19669 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19669.01.patch, HIVE-19669.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19663) refactor LLAP IO report generation
[ https://issues.apache.org/jira/browse/HIVE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-19663: --- Assignee: Sergey Shelukhin > refactor LLAP IO report generation > -- > > Key: HIVE-19663 > URL: https://issues.apache.org/jira/browse/HIVE-19663 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > > Follow-up from HIVE-19642. > Instead of each component calling some other component in a chain, all the > parts of the state dump should be called in one place to avoid weird > dependencies/sequences that need to be accounted for to generate the report. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19756) Insert request with UNION ALL and lateral view explode
[ https://issues.apache.org/jira/browse/HIVE-19756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Frédéric ESCANDELL updated HIVE-19756: -- Description: Hi, While executing this code snippet, no data is inserted in the final table t3. Replacing UNION ALL with UNION, or removing the "lateral view explode", makes the code work properly. {code:sql} DROP table t1; DROP table t2; DROP table t3; CREATE TABLE t1(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), named_struct('v','y'))) tmp; CREATE TABLE t2(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), named_struct('v','w'))) tmp; DROP view v1; DROP table t3; CREATE VIEW v1 (cle,valeur) AS select base.cle,val.v from (select cle,valeur from t1) as base lateral view explode(base.valeur) a as val union all select base1.cle,val.v from (select cle,valeur from t2) as base1 lateral view explode(base1.valeur) a as val; CREATE TABLE t3(cle string,valeur string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; insert into t3 select * from v1; {code} was: Hi, While executing this code snippet, no data is inserted in the final table t3. Replacing UNION ALL with UNION, or removing the "lateral view explode", makes the code work properly. 
{code:sql} DROP table t1; DROP table t2; DROP table t3; CREATE TABLE t1(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), named_struct('v','y'))) tmp; CREATE TABLE t2(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), named_struct('v','w'))) tmp; DROP view v1; DROP table t3; CREATE VIEW v1 (cle,valeur) AS select base.cle,val.v from (select cle,valeur from t1) as base lateral view explode(base.valeur) a as val union select base1.cle,val.v from (select cle,valeur from t2) as base1 lateral view explode(base1.valeur) a as val; CREATE TABLE t3(cle string,valeur string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; insert into t3 select * from v1; {code} > Insert request with UNION ALL and lateral view explode > -- > > Key: HIVE-19756 > URL: https://issues.apache.org/jira/browse/HIVE-19756 > Project: Hive > Issue Type: Bug > Environment: HDP 2.6.4 >Reporter: Frédéric ESCANDELL >Priority: Major > > Hi, > While executing this code snippet, no data is inserted in the final table t3. > Replacing UNION ALL with UNION, or removing the "lateral view explode", makes > the code work properly. 
> > > {code:sql} > DROP table t1; > DROP table t2; > DROP table t3; > CREATE TABLE t1(cle string,valeur array<struct<v:string>>) > ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS > INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; > INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), > named_struct('v','y'))) tmp; > CREATE TABLE t2(cle string,valeur array<struct<v:string>>) > ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS > INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; > INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), > named_struct('v','w'))) tmp; > DROP view v1; > DROP table t3; > CREATE VIEW v1 (cle,valeur) > AS > select base.cle,val.v from (select cle,valeur from t1) as base > lateral view explode(base.valeur) a as val > union all > select base1.cle,val.v from
[jira] [Updated] (HIVE-19756) Insert request with UNION ALL and lateral view explode
[ https://issues.apache.org/jira/browse/HIVE-19756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Frédéric ESCANDELL updated HIVE-19756: -- Description: Hi, While executing this code snippet, no data is inserted in the final table t3. Replacing UNION ALL with UNION, or removing the "lateral view explode", makes the code work properly. {code:sql} DROP table t1; DROP table t2; DROP table t3; CREATE TABLE t1(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), named_struct('v','y'))) tmp; CREATE TABLE t2(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), named_struct('v','w'))) tmp; DROP view v1; DROP table t3; CREATE VIEW v1 (cle,valeur) AS select base.cle,val.v from (select cle,valeur from t1) as base lateral view explode(base.valeur) a as val union all select base1.cle,val.v from (select cle,valeur from t2) as base1 lateral view explode(base1.valeur) a as val; CREATE TABLE t3(cle string,valeur string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; insert into t3 select * from v1; {code} was: Hi, While executing this code snippet, no data is inserted in the final table t3. Replacing UNION ALL with UNION, or removing the "lateral view explode", makes the code work properly. 
{code:sql} DROP table t1; DROP table t2; DROP table t3; CREATE TABLE t1(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), named_struct('v','y'))) tmp; CREATE TABLE t2(cle string,valeur array<struct<v:string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), named_struct('v','w'))) tmp; DROP view v1; DROP table t3; CREATE VIEW v1 (cle,valeur) AS select base.cle,val.v from (select cle,valeur from t1) as base lateral view explode(base.valeur) a as val union all select base1.cle,val.v from (select cle,valeur from t2) as base1 lateral view explode(base1.valeur) a as val; CREATE TABLE t3(cle string,valeur string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; insert into t3 select * from v1; {code} > Insert request with UNION ALL and lateral view explode > -- > > Key: HIVE-19756 > URL: https://issues.apache.org/jira/browse/HIVE-19756 > Project: Hive > Issue Type: Bug > Environment: HDP 2.6.4 >Reporter: Frédéric ESCANDELL >Priority: Major > > Hi, > While executing this code snippet, no data is inserted in the final table t3. > Replacing UNION ALL with UNION, or removing the "lateral view explode", makes > the code work properly. 
> > {code:sql} > DROP table t1; > DROP table t2; > DROP table t3; > CREATE TABLE t1(cle string,valeur array<struct<v:string>>) > ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS > INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; > INSERT INTO table t1 select * from (select "a",array(named_struct('v','x'), > named_struct('v','y'))) tmp; > CREATE TABLE t2(cle string,valeur array<struct<v:string>>) > ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS > INPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'; > INSERT INTO table t2 select * from (select "b",array(named_struct('v','z'), > named_struct('v','w'))) tmp; > DROP view v1; > DROP table t3; > CREATE VIEW v1 (cle,valeur) > AS > select base.cle,val.v from (select cle,valeur from t1) as base > lateral view explode(base.valeur) a as val > union all > select base1.cle,val.v from (select
[jira] [Updated] (HIVE-19378) "hive.lock.numretries" Is Misleading
[ https://issues.apache.org/jira/browse/HIVE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alice Fan updated HIVE-19378: - Attachment: HIVE-19378.1.patch Status: Patch Available (was: In Progress) > "hive.lock.numretries" Is Misleading > > > Key: HIVE-19378 > URL: https://issues.apache.org/jira/browse/HIVE-19378 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: Alice Fan >Priority: Minor > Attachments: HIVE-19378.1.patch > > > Configuration 'hive.lock.numretries' is confusing. It's not actually a > 'retry' count, it's the total number of attempts to make: > > {code:java|title=ZooKeeperHiveLockManager.java} > do { > lastException = null; > tryNum++; > try { > if (tryNum > 1) { > Thread.sleep(sleepTime); > prepareRetry(); > } > ret = lockPrimitive(key, mode, keepAlive, parentCreated, > conflictingLocks); > ... > } while (tryNum < numRetriesForLock); > {code} > So, from this code you can see that on the first loop, {{tryNum}} is set to > 1, in which case, if the configuration num*retries* is set to 1, there will > be one attempt total. With a *retry* value of 1, I would assume one initial > attempt and one additional retry. Please change to: > {code} > while (tryNum <= numRetriesForLock); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
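To make the off-by-one concrete, here is a minimal, self-contained sketch (a hypothetical `attempts` helper mirroring the do/while shape quoted above, not the actual Hive code): with the current `<` condition, a setting of 1 yields exactly one total attempt, not one attempt plus one retry.

```java
public class RetryCountDemo {
    // Mirrors the loop shape quoted above: counts how many lock attempts
    // a given numRetriesForLock setting actually produces.
    public static int attempts(int numRetriesForLock) {
        int tryNum = 0;
        do {
            tryNum++; // the first pass already counts as attempt #1
            // lockPrimitive(...) would run here and may fail
        } while (tryNum < numRetriesForLock);
        return tryNum;
    }

    public static void main(String[] args) {
        System.out.println(attempts(1)); // prints 1: one attempt, zero retries
        System.out.println(attempts(3)); // prints 3: three attempts total
    }
}
```

Changing the condition to `tryNum <= numRetriesForLock`, as proposed, would make the setting behave as a true retry count: N retries after the initial attempt.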
[jira] [Commented] (HIVE-19558) HiveAuthorizationProviderBase gets catalog name from config rather than db object
[ https://issues.apache.org/jira/browse/HIVE-19558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497161#comment-16497161 ] Alan Gates commented on HIVE-19558: --- Looks like patch1take7 didn't get picked up by the build system. Posted a take8. > HiveAuthorizationProviderBase gets catalog name from config rather than db > object > - > > Key: HIVE-19558 > URL: https://issues.apache.org/jira/browse/HIVE-19558 > Project: Hive > Issue Type: Bug > Components: Authorization >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Fix For: 3.0.1 > > Attachments: HIVE-19558.1take2.patch, HIVE-19558.1take3.patch, > HIVE-19558.1take4.patch, HIVE-19558.1take5.patch, HIVE-19558.1take6.patch, > HIVE-19558.1take7.patch, HIVE-19558.1take8.patch, HIVE-19558.patch > > > HiveAuthorizationProviderBase.getDatabase uses just the database name to > fetch the database, relying on getDefaultCatalog() to fetch the catalog name > from the conf file. This does not work when the client has passed in an > object for a different catalog. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19558) HiveAuthorizationProviderBase gets catalog name from config rather than db object
[ https://issues.apache.org/jira/browse/HIVE-19558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-19558: -- Attachment: HIVE-19558.1take8.patch > HiveAuthorizationProviderBase gets catalog name from config rather than db > object > - > > Key: HIVE-19558 > URL: https://issues.apache.org/jira/browse/HIVE-19558 > Project: Hive > Issue Type: Bug > Components: Authorization >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Fix For: 3.0.1 > > Attachments: HIVE-19558.1take2.patch, HIVE-19558.1take3.patch, > HIVE-19558.1take4.patch, HIVE-19558.1take5.patch, HIVE-19558.1take6.patch, > HIVE-19558.1take7.patch, HIVE-19558.1take8.patch, HIVE-19558.patch > > > HiveAuthorizationProviderBase.getDatabase uses just the database name to > fetch the database, relying on getDefaultCatalog() to fetch the catalog name > from the conf file. This does not work when the client has passed in an > object for a different catalog. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19755) insertsel_fail.q.out needs to be updated on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19755: --- Attachment: HIVE-19755.1-branch-3.patch > insertsel_fail.q.out needs to be updated on branch-3 > > > Key: HIVE-19755 > URL: https://issues.apache.org/jira/browse/HIVE-19755 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19755.1-branch-3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19755) insertsel_fail.q.out needs to be updated on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497149#comment-16497149 ] Vineet Garg commented on HIVE-19755: [~jcamachorodriguez] [~ashutoshc] can you take a look? > insertsel_fail.q.out needs to be updated on branch-3 > > > Key: HIVE-19755 > URL: https://issues.apache.org/jira/browse/HIVE-19755 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19755.1-branch-3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19755) insertsel_fail.q.out needs to be updated on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-19755: -- > insertsel_fail.q.out needs to be updated on branch-3 > > > Key: HIVE-19755 > URL: https://issues.apache.org/jira/browse/HIVE-19755 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19748) Add appropriate null checks to DecimalColumnStatsAggregator
[ https://issues.apache.org/jira/browse/HIVE-19748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497141#comment-16497141 ] Vaibhav Gumashta commented on HIVE-19748: - cc [~daijy] [~thejas] > Add appropriate null checks to DecimalColumnStatsAggregator > --- > > Key: HIVE-19748 > URL: https://issues.apache.org/jira/browse/HIVE-19748 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-19748.1.patch > > > In some of our internal testing, we noticed that calls to > MetaStoreUtils.decimalToDoublee(Decimal decimal) from within > DecimalColumnStatsAggregator end up passing null Decimal values to the method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19748) Add appropriate null checks to DecimalColumnStatsAggregator
[ https://issues.apache.org/jira/browse/HIVE-19748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-19748: Status: Patch Available (was: Open) > Add appropriate null checks to DecimalColumnStatsAggregator > --- > > Key: HIVE-19748 > URL: https://issues.apache.org/jira/browse/HIVE-19748 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-19748.1.patch > > > In some of our internal testing, we noticed that calls to > MetaStoreUtils.decimalToDoublee(Decimal decimal) from within > DecimalColumnStatsAggregator end up passing null Decimal values to the method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19748) Add appropriate null checks to DecimalColumnStatsAggregator
[ https://issues.apache.org/jira/browse/HIVE-19748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-19748: Attachment: HIVE-19748.1.patch > Add appropriate null checks to DecimalColumnStatsAggregator > --- > > Key: HIVE-19748 > URL: https://issues.apache.org/jira/browse/HIVE-19748 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-19748.1.patch > > > In some of our internal testing, we noticed that calls to > MetaStoreUtils.decimalToDoublee(Decimal decimal) from within > DecimalColumnStatsAggregator end up passing null Decimal values to the method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
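The fix being described is a straightforward null guard before conversion. A minimal sketch of the idea (using `java.math.BigDecimal` as a stand-in for the metastore's Decimal type; the method name and the NaN sentinel are assumptions for illustration, not the actual patch):

```java
import java.math.BigDecimal;

public class DecimalNullGuardDemo {
    // Sketch of the proposed null check: return a sentinel instead of
    // dereferencing a null value during column stats aggregation.
    public static double decimalToDouble(BigDecimal d) {
        if (d == null) {
            return Double.NaN; // caller can detect and skip null stats entries
        }
        return d.doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(decimalToDouble(null));                    // NaN
        System.out.println(decimalToDouble(new BigDecimal("123.45"))); // 123.45
    }
}
```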
[jira] [Commented] (HIVE-19754) vector_decimal_2 failing on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497136#comment-16497136 ] Jesus Camacho Rodriguez commented on HIVE-19754: +1 > vector_decimal_2 failing on branch-3 > > > Key: HIVE-19754 > URL: https://issues.apache.org/jira/browse/HIVE-19754 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19754.1.patch-branch-3.patch > > > caused by HIVE-19108. This needs golden file update only on branch-3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19598) Add Acid V1 to V2 upgrade module
[ https://issues.apache.org/jira/browse/HIVE-19598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19598: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Eugene! [~vgarg] Can we have this on branch-3 as well since it's very useful for users to help them migrate to Acid V2. > Add Acid V1 to V2 upgrade module > > > Key: HIVE-19598 > URL: https://issues.apache.org/jira/browse/HIVE-19598 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 4.0.0 > > Attachments: HIVE-19598.02.patch, HIVE-19598.05.patch, > HIVE-19598.06.patch > > > The on-disk layout for full acid (transactional) tables has changed in 3.0. > Any transactional table that has any update/delete events in any deltas that > have not been Major compacted, must go through a Major compaction before > upgrading to 3.0. No more update/delete/merge should be run after/during > major compaction. > Not doing so will result in data corruption/loss. > > Need to create a utility tool to help with this process. HIVE-19233 started > this but it needs more work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19644) change WM syntax to avoid conflicts with identifiers starting with a number
[ https://issues.apache.org/jira/browse/HIVE-19644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19644: Attachment: (was: HIVE-19690.03.patch) > change WM syntax to avoid conflicts with identifiers starting with a number > --- > > Key: HIVE-19644 > URL: https://issues.apache.org/jira/browse/HIVE-19644 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19644.01.patch, HIVE-19644.02.patch, > HIVE-19644.03.patch, HIVE-19644.patch > > > Time/etc literals conflict with non-ANSI query column names starting with a > number that were previously supported without quotes (e.g. 30days). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19644) change WM syntax to avoid conflicts with identifiers starting with a number
[ https://issues.apache.org/jira/browse/HIVE-19644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497132#comment-16497132 ] Sergey Shelukhin commented on HIVE-19644: - I hate HiveQA > change WM syntax to avoid conflicts with identifiers starting with a number > --- > > Key: HIVE-19644 > URL: https://issues.apache.org/jira/browse/HIVE-19644 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19644.01.patch, HIVE-19644.02.patch, > HIVE-19644.03.patch, HIVE-19644.patch > > > Time/etc literals conflict with non-ANSI query column names starting with a > number that were previously supported without quotes (e.g. 30days). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19644) change WM syntax to avoid conflicts with identifiers starting with a number
[ https://issues.apache.org/jira/browse/HIVE-19644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19644: Attachment: HIVE-19644.03.patch > change WM syntax to avoid conflicts with identifiers starting with a number > --- > > Key: HIVE-19644 > URL: https://issues.apache.org/jira/browse/HIVE-19644 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19644.01.patch, HIVE-19644.02.patch, > HIVE-19644.03.patch, HIVE-19644.patch > > > Time/etc literals conflict with non-ANSI query column names starting with a > number that were previously supported without quotes (e.g. 30days). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19754) vector_decimal_2 failing on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497134#comment-16497134 ] Vineet Garg commented on HIVE-19754: [~jcamachorodriguez] [~ashutoshc] Can you take a look? > vector_decimal_2 failing on branch-3 > > > Key: HIVE-19754 > URL: https://issues.apache.org/jira/browse/HIVE-19754 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19754.1.patch-branch-3.patch > > > caused by HIVE-19108. This needs golden file update only on branch-3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19754) vector_decimal_2 failing on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19754: --- Attachment: HIVE-19754.1.patch-branch-3.patch > vector_decimal_2 failing on branch-3 > > > Key: HIVE-19754 > URL: https://issues.apache.org/jira/browse/HIVE-19754 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19754.1.patch-branch-3.patch > > > caused by HIVE-19108. This needs golden file update only on branch-3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19644) change WM syntax to avoid conflicts with identifiers starting with a number
[ https://issues.apache.org/jira/browse/HIVE-19644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19644: Attachment: HIVE-19690.03.patch > change WM syntax to avoid conflicts with identifiers starting with a number > --- > > Key: HIVE-19644 > URL: https://issues.apache.org/jira/browse/HIVE-19644 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19644.01.patch, HIVE-19644.02.patch, > HIVE-19644.03.patch, HIVE-19644.patch > > > Time/etc literals conflict with non-ANSI query column names starting with a > number that were previously supported without quotes (e.g. 30days). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19109) Vectorization: Enabling vectorization causes TestCliDriver delete_orig_table.q to produce Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-19109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19109: Attachment: HIVE-19109.01.patch > Vectorization: Enabling vectorization causes TestCliDriver > delete_orig_table.q to produce Wrong Results > --- > > Key: HIVE-19109 > URL: https://issues.apache.org/jira/browse/HIVE-19109 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19109.01.patch > > > Found in vectorization enable by default experiment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19643) MM table conversion doesn't need full ACID structure checks
[ https://issues.apache.org/jira/browse/HIVE-19643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19643: Attachment: HIVE-19643.05.patch > MM table conversion doesn't need full ACID structure checks > --- > > Key: HIVE-19643 > URL: https://issues.apache.org/jira/browse/HIVE-19643 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Jason Dere >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19643.01.patch, HIVE-19643.02.patch, > HIVE-19643.03.patch, HIVE-19643.04.patch, HIVE-19643.05.patch, > HIVE-19643.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19109) Vectorization: Enabling vectorization causes TestCliDriver delete_orig_table.q to produce Wrong Results
[ https://issues.apache.org/jira/browse/HIVE-19109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19109: Status: Patch Available (was: Open) > Vectorization: Enabling vectorization causes TestCliDriver > delete_orig_table.q to produce Wrong Results > --- > > Key: HIVE-19109 > URL: https://issues.apache.org/jira/browse/HIVE-19109 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 3.0.0 >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-19109.01.patch > > > Found in vectorization enable by default experiment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19754) vector_decimal_2 failing on branch-3
[ https://issues.apache.org/jira/browse/HIVE-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reassigned HIVE-19754: -- > vector_decimal_2 failing on branch-3 > > > Key: HIVE-19754 > URL: https://issues.apache.org/jira/browse/HIVE-19754 > Project: Hive > Issue Type: Bug >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > > caused by HIVE-19108. This needs golden file update only on branch-3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19750) Initialize NEXT_WRITE_ID. NWI_NEXT on converting an existing table to full acid
[ https://issues.apache.org/jira/browse/HIVE-19750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-19750: -- Status: Patch Available (was: Open) > Initialize NEXT_WRITE_ID. NWI_NEXT on converting an existing table to full > acid > --- > > Key: HIVE-19750 > URL: https://issues.apache.org/jira/browse/HIVE-19750 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Fix For: 3.1.0 > > Attachments: HIVE-19750.01.patch > > > Need to set this to a reasonably high value for the table. > This will reserve a range of write IDs that will be treated by the system as > committed. > This is needed so that we can assign unique ROW__IDs to each row in files > that already exist in the table. For example, if the value is initialized to > the number of files currently in the table, we can think of each file as > written by a separate transaction and are thus free to assign the bucketProperty > (BucketCodec) of ROW__ID in whichever way is convenient. > It's guaranteed that all rows get unique ROW__IDs this way. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
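As a rough illustration of the scheme described above (hypothetical names; the real initialization lives in the metastore conversion path), reserving one write ID per pre-existing file lets each original file be treated as if written by its own committed transaction, so every pre-existing row can receive a unique ROW__ID:

```java
import java.util.List;

public class WriteIdReservationDemo {
    // Hypothetical: choose the initial NEXT_WRITE_ID.NWI_NEXT value for a
    // table being converted to full ACID. Write IDs 1..N are treated as
    // already committed, one per pre-existing file; NWI_NEXT points past them.
    public static long initialNextWriteId(List<String> existingFiles) {
        return existingFiles.size() + 1;
    }

    public static void main(String[] args) {
        List<String> files = List.of("000000_0", "000001_0", "000002_0");
        // Each original file conceptually gets its own committed write ID,
        // so (writeId, bucketProperty, rowId) triples stay unique per row.
        System.out.println(initialNextWriteId(files)); // prints 4
    }
}
```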
[jira] [Updated] (HIVE-19690) multi-insert query with multiple GBY, and distinct in only some branches can produce incorrect results
[ https://issues.apache.org/jira/browse/HIVE-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19690: Attachment: HIVE-19690.03.patch > multi-insert query with multiple GBY, and distinct in only some branches can > produce incorrect results > -- > > Key: HIVE-19690 > URL: https://issues.apache.org/jira/browse/HIVE-19690 > Project: Hive > Issue Type: Bug >Reporter: Riju Trivedi >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19690.01.patch, HIVE-19690.02.patch, > HIVE-19690.03.patch, HIVE-19690.patch > >
[jira] [Updated] (HIVE-19720) backport multiple MM commits to branch-3
[ https://issues.apache.org/jira/browse/HIVE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19720: Attachment: HIVE-19720.03-branch-3.patch > backport multiple MM commits to branch-3 > > > Key: HIVE-19720 > URL: https://issues.apache.org/jira/browse/HIVE-19720 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19720.01-branch-3.patch, > HIVE-19720.02-branch-3.patch, HIVE-19720.03-branch-3.patch > > > To avoid chained test runs of branch-3 backporting one by one, I will run > HiveQA on an epic combined patch, then commit patches w/proper commit > separation via cherry-pick: > 0930aec69b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) > 99a2b8bd6b HIVE-19312 : MM tables don't work with BucketizedHIF (Sergey > Shelukhin, reviewed by Gunther Hagleitner) ADDENDUM > 7ebcdeb951 HIVE-17657 : export/import for MM tables is broken (Sergey > Shelukhin, reviewed by Eugene Koifman) > 8db979f1ff (part not previously backported) HIVE-19476: Fix failures in > TestReplicationScenariosAcidTables, TestReplicationOnHDFSEncryptedZones and > TestCopyUtils (Sankar Hariappan, reviewed by Sergey Shelukhin) > f4352e5339 HIVE-19258 : add originals support to MM tables (and make the > conversion a metadata only operation) (Sergey Shelukhin, reviewed by Jason > Dere)
[jira] [Updated] (HIVE-19418) add background stats updater similar to compactor
[ https://issues.apache.org/jira/browse/HIVE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19418: Attachment: (was: HIVE-19418.04.patch) > add background stats updater similar to compactor > - > > Key: HIVE-19418 > URL: https://issues.apache.org/jira/browse/HIVE-19418 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19418.01.patch, HIVE-19418.02.patch, > HIVE-19418.03.patch, HIVE-19418.04.patch, HIVE-19418.05.patch, > HIVE-19418.patch > > > There's a JIRA HIVE-19416 to add snapshot version to stats for MM/ACID tables > to make them usable in a transaction without breaking ACID (for metadata-only > optimization). However, stats for ACID tables can still become unusable if > e.g. two parallel inserts run - neither sees the data written by the other, > so after both finish, the snapshots on either set of stats won't match the > current snapshot and the stats will be unusable. > Additionally, for ACID and non-ACID tables alike, a lot of the stats, with > some exceptions like numRows, cannot be aggregated (i.e. you cannot combine > ndvs from two inserts), and for ACID even less can be aggregated (you cannot > derive min/max if some rows are deleted but you don't scan the rest of the > dataset). > Therefore we will add background logic to metastore (similar to, and > partially inside, the ACID compactor) to update stats. > It will have 3 modes of operation. > 1) Off. > 2) Update only the stats that exist but are out of date (generating stats can > be expensive, so if the user is only analyzing a subset of tables it should > be able to only update that subset). We can simply look at existing stats and > only analyze for the relevant partitions and columns. > 3) On: 2 + create stats for all tables and columns missing stats. > There will also be a table parameter to skip stats update. 
> In phase 1, the process will operate outside of compactor, and run analyze > command on the table. The analyze command will automatically save the stats > with ACID snapshot information if needed, based on HIVE-19416, so we don't > need to do any special state management and this will work for all table > types. However it's also more expensive. > In phase 2, we can explore adding stats collection during MM compaction that > uses a temp table. If we don't have open writers during major compaction (so > we overwrite all of the data), the temp table stats can simply be copied over > to the main table with correct snapshot information, saving us a table scan. > In phase 3, we can add custom stats collection logic to full ACID compactor > that is not query based, the same way as we'd do for (2). Alternatively we > can wait for ACID compactor to become query based and just reuse (2).
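The phase-1 approach described above amounts to the metastore background task periodically re-running Hive's ANALYZE command on objects whose stats are stale. A minimal sketch of the kind of statement such a run would issue (table and partition names are invented for illustration, not taken from the patch):

```sql
-- Hedged sketch of the phase-1 idea: the background updater would re-issue
-- ANALYZE for out-of-date partitions/columns. Names below are hypothetical.
ANALYZE TABLE acid_tbl PARTITION (ds='2018-06-01')
  COMPUTE STATISTICS FOR COLUMNS;
```

Per HIVE-19416, the analyze run would save the resulting stats together with ACID snapshot information, so no extra state management is needed in this phase.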
[jira] [Commented] (HIVE-19558) HiveAuthorizationProviderBase gets catalog name from config rather than db object
[ https://issues.apache.org/jira/browse/HIVE-19558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497123#comment-16497123 ] Dongjoon Hyun commented on HIVE-19558: -- Gentle ping, [~alangates] > HiveAuthorizationProviderBase gets catalog name from config rather than db > object > - > > Key: HIVE-19558 > URL: https://issues.apache.org/jira/browse/HIVE-19558 > Project: Hive > Issue Type: Bug > Components: Authorization >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Major > Fix For: 3.0.1 > > Attachments: HIVE-19558.1take2.patch, HIVE-19558.1take3.patch, > HIVE-19558.1take4.patch, HIVE-19558.1take5.patch, HIVE-19558.1take6.patch, > HIVE-19558.1take7.patch, HIVE-19558.patch > > > HiveAuthorizationProviderBase.getDatabase uses just the database name to > fetch the database, relying on getDefaultCatalog() to fetch the catalog name > from the conf file. This does not work when the client has passed in an > object for a different catalog.
[jira] [Updated] (HIVE-19418) add background stats updater similar to compactor
[ https://issues.apache.org/jira/browse/HIVE-19418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19418: Attachment: HIVE-19418.05.patch > add background stats updater similar to compactor > - > > Key: HIVE-19418 > URL: https://issues.apache.org/jira/browse/HIVE-19418 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19418.01.patch, HIVE-19418.02.patch, > HIVE-19418.03.patch, HIVE-19418.04.patch, HIVE-19418.05.patch, > HIVE-19418.patch > >
[jira] [Assigned] (HIVE-19753) Strict managed tables mode in Hive
[ https://issues.apache.org/jira/browse/HIVE-19753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reassigned HIVE-19753: - > Strict managed tables mode in Hive > -- > > Key: HIVE-19753 > URL: https://issues.apache.org/jira/browse/HIVE-19753 > Project: Hive > Issue Type: Bug >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > Create a mode in Hive which enforces that all managed tables are > transactional (both full ACID and insert-only tables are allowed). > Non-transactional tables, as well as non-native tables, must be created as > external tables when this mode is enabled. > The idea would be that in strict managed tables mode all of the data written > to managed tables would have been done through Hive. > The mode would be enabled using the config setting hive.strict.managed.tables.
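A minimal sketch of what the proposed mode would look like in use. The setting name comes from the issue text; the statements around it only illustrate the described behavior and are not a committed design:

```sql
-- hive.strict.managed.tables is the setting named in the description;
-- everything else here is a hypothetical illustration of the proposal.
SET hive.strict.managed.tables=true;

-- With the mode on, a managed table must be transactional
-- (full ACID or insert-only):
CREATE TABLE sales (id INT) STORED AS ORC
  TBLPROPERTIES ('transactional'='true');

-- Non-transactional (and non-native) data must instead live in an
-- external table:
CREATE EXTERNAL TABLE sales_raw (id INT)
  LOCATION '/data/sales_raw';
```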
[jira] [Commented] (HIVE-19605) TAB_COL_STATS table has no index on db/table name
[ https://issues.apache.org/jira/browse/HIVE-19605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497085#comment-16497085 ] Yongzhi Chen commented on HIVE-19605: - The patch LGTM +1 > TAB_COL_STATS table has no index on db/table name > - > > Key: HIVE-19605 > URL: https://issues.apache.org/jira/browse/HIVE-19605 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Todd Lipcon >Assignee: Vihang Karajgaonkar >Priority: Major > Attachments: HIVE-19605.01.patch > > > The TAB_COL_STATS table is missing an index on (CAT_NAME, DB_NAME, > TABLE_NAME). The getTableColumnStatistics call queries based on this tuple. > This makes those queries take a significant amount of time in large > metastores since they do a full table scan.
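The fix amounts to adding the missing composite index to the metastore schema. A hedged sketch of the DDL, using the column tuple named in the description (the index name, and the exact form used in the committed schema scripts, are assumptions):

```sql
-- Hedged sketch of the index described above; TAB_COL_STATS_IDX is a
-- made-up name, not necessarily what HIVE-19605.01.patch creates.
CREATE INDEX TAB_COL_STATS_IDX
  ON TAB_COL_STATS (CAT_NAME, DB_NAME, TABLE_NAME);
```

With the index in place, getTableColumnStatistics lookups on (CAT_NAME, DB_NAME, TABLE_NAME) can avoid the full table scan.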