[jira] [Commented] (HIVE-20194) HiveMetastoreClient should use reflection to instantiate embedded HMS instance
[ https://issues.apache.org/jira/browse/HIVE-20194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575773#comment-16575773 ] Hive QA commented on HIVE-20194:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 14s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 14s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 4s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13141/dev-support/hive-personality.sh |
| git revision | master / 6286bbc |
| Default Java | 1.8.0_111 |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13141/yetus/branch-findbugs-standalone-metastore_metastore-server.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13141/yetus/patch-findbugs-standalone-metastore_metastore-server.txt |
| modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13141/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
> HiveMetastoreClient should use reflection to instantiate embedded HMS instance > -- > > Key: HIVE-20194 > URL: https://issues.apache.org/jira/browse/HIVE-20194 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 4.0.0, 3.2.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-20194.01.patch, HIVE-20194.02.patch, > HIVE-20194.03.patch, HIVE-20194.04.patch > > > When HiveMetastoreClient is used in embedded mode, it instantiates the metastore > server. Since we want to separate client and server code, we can no longer > instantiate the class directly and need to use reflection instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
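The approach described in HIVE-20194 can be sketched as follows. This is a minimal, hypothetical illustration (the `EmbeddedHmsLoader` class and method names are assumptions, not Hive's actual API): the client looks the server class up by name at runtime, so the client module compiles without a dependency on the server module.

```java
// Hypothetical sketch of "instantiate via reflection instead of new":
// the client module carries no compile-time reference to the server class.
public class EmbeddedHmsLoader {

    // Returns an instance of the named class created through its no-arg
    // constructor, or null when the class is not on the classpath
    // (i.e. embedded mode is unavailable without the server jar).
    public static Object instantiate(String className) {
        try {
            Class<?> clazz = Class.forName(className);
            return clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Any class name can stand in for the metastore handler here.
        System.out.println(instantiate("java.util.ArrayList") != null); // true
        System.out.println(instantiate("no.such.ServerClass") != null); // false
    }
}
```

The caller would then cast the returned object to an interface both modules share, which is what keeps the compile-time coupling out of the client.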
[jira] [Commented] (HIVE-20340) Druid Needs Explicit CASTs from Timestamp to STRING when the output of timestamp function is used as String
[ https://issues.apache.org/jira/browse/HIVE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575762#comment-16575762 ] Hive QA commented on HIVE-20340: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935043/HIVE-20340.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14874 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13140/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13140/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13140/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935043 - PreCommit-HIVE-Build > Druid Needs Explicit CASTs from Timestamp to STRING when the output of > timestamp function is used as String > --- > > Key: HIVE-20340 > URL: https://issues.apache.org/jira/browse/HIVE-20340 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-20340.1.patch, HIVE-20340.2.patch, > HIVE-20340.3.patch > > > Druid time expressions return numeric values in the form of milliseconds (instead of > a formatted timestamp). Because of this, expressions/functions that expect a > string-typed argument end up returning different values when a time expression is > the input. > e.g. 
> {code} > SELECT SUBSTRING(to_date(datetime0),4) FROM tableau_orc.calcs; > | 4-07-25 | > SELECT SUBSTRING(to_date(datetime0),4) FROM druid_tableau.calcs; > | 002240 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM tableau_orc.calcs; > | 2004-07-17 00:00:00 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM druid_tableau.calcs; > | 109045440 00:00:00 | > {code} > Druid needs an explicit cast to make this work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
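The behavior above can be reproduced outside Hive. A minimal sketch (the epoch value below is an assumed sample, not the table's actual datetime0 data): a 1-based SUBSTRING from position 4 sees completely different text depending on whether the date arrives formatted or as raw epoch milliseconds, which is why the explicit CAST to STRING is needed.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class DruidCastDemo {

    // Hive's to_date yields a "yyyy-MM-dd" string; Druid time functions
    // yield epoch milliseconds, which stringify as a long number.
    static final DateTimeFormatter DATE =
            DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    // SUBSTRING(s, 4) in Hive is 1-based: drop the first three characters.
    public static String substringFromFourth(String s) {
        return s.substring(3);
    }

    public static void main(String[] args) {
        long millis = 1090454400000L; // assumed sample: 2004-07-22T00:00:00Z
        String formatted = DATE.format(Instant.ofEpochMilli(millis));
        String raw = Long.toString(millis);
        System.out.println(substringFromFourth(formatted)); // 4-07-22
        System.out.println(substringFromFourth(raw));       // 0454400000
    }
}
```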
[jira] [Commented] (HIVE-20340) Druid Needs Explicit CASTs from Timestamp to STRING when the output of timestamp function is used as String
[ https://issues.apache.org/jira/browse/HIVE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575753#comment-16575753 ] Hive QA commented on HIVE-20340:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 48s{color} | {color:blue} itests/util in master has 52 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 6 new + 137 unchanged - 0 fixed = 143 total (was 137) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 22s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13140/dev-support/hive-personality.sh |
| git revision | master / 3dc736f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13140/yetus/diff-checkstyle-ql.txt |
| modules | C: itests/util ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13140/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Druid Needs Explicit CASTs from Timestamp to STRING when the output of > timestamp function is used as String > --- > > Key: HIVE-20340 > URL: https://issues.apache.org/jira/browse/HIVE-20340 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-20340.1.patch, HIVE-20340.2.patch, > HIVE-20340.3.patch > > > Druid time expressions return numeric values in the form of milliseconds (instead of > a formatted timestamp). Because of this, expressions/functions that expect a > string-typed argument end up returning different values when a time expression is > the input. > e.g. > {code} > SELECT SUBSTRING(to_date(datetime0),4) FROM tableau_orc.calcs; > | 4-07-25 | > SELECT SUBSTRING(to_date(datetime0),4) FROM druid_tableau.calcs; > | 002240 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM tableau_orc.calcs; > | 2004-07-17 00:00:00 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM druid_tableau.calcs; > | 109045440 00:00:00 | > {code} > Druid needs an explicit cast to make this work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16102) Grouping sets do not conform to SQL standard
[ https://issues.apache.org/jira/browse/HIVE-16102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] 翟玉勇 updated HIVE-16102: --- Description: # [~ashutoshc] realized that the implementation of GROUPING__ID in Hive was not returning values as specified by SQL standard and other execution engines. After digging into this, I found out that the implementation was bogus, as internally it was changing between big-endian/little-endian representation of GROUPING__ID indistinctly, and in some cases conversions in both directions were cancelling each other. In the documentation in https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation,+Cube,+Grouping+and+Rollup we can already find the problem, even if we did not spot it at first.
{quote}
The following query: SELECT key, value, GROUPING__ID, count(\*) from T1 GROUP BY key, value WITH ROLLUP will have the following results.
| NULL | NULL | 0 | 6 |
| 1 | NULL | 1 | 2 |
| 1 | NULL | 3 | 1 |
| 1 | 1 | 3 | 1 |
...
{quote}
Observe that the value for GROUPING__ID in the first row should be `3`, while for the third and fourth rows it should be `0`.

was: [~ashutoshc] realized that the implementation of GROUPING__ID in Hive was not returning values as specified by SQL standard and other execution engines. After digging into this, I found out that the implementation was bogus, as internally it was changing between big-endian/little-endian representation of GROUPING__ID indistinctly, and in some cases conversions in both directions were cancelling each other. In the documentation in https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation,+Cube,+Grouping+and+Rollup we can already find the problem, even if we did not spot it at first.
{quote}
The following query: SELECT key, value, GROUPING__ID, count(\*) from T1 GROUP BY key, value WITH ROLLUP will have the following results.
| NULL | NULL | 0 | 6 |
| 1 | NULL | 1 | 2 |
| 1 | NULL | 3 | 1 |
| 1 | 1 | 3 | 1 |
...
{quote}
Observe that the value for GROUPING__ID in the first row should be `3`, while for the third and fourth rows it should be `0`.

> Grouping sets do not conform to SQL standard > > > Key: HIVE-16102 > URL: https://issues.apache.org/jira/browse/HIVE-16102 > Project: Hive > Issue Type: Bug > Components: Operators, Parser >Affects Versions: 1.3.0, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Fix For: 2.3.0 > > Attachments: HIVE-16102.01.patch, HIVE-16102.02.patch, > HIVE-16102.patch > > > # [~ashutoshc] realized that the implementation of GROUPING__ID in Hive was > not returning values as specified by SQL standard and other execution engines. > After digging into this, I found out that the implementation was bogus, as > internally it was changing between big-endian/little-endian representation of > GROUPING__ID indistinctly, and in some cases conversions in both directions > were cancelling each other. > In the documentation in > https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation,+Cube,+Grouping+and+Rollup > we can already find the problem, even if we did not spot it at first. > {quote} > The following query: SELECT key, value, GROUPING__ID, count(\*) from T1 GROUP > BY key, value WITH ROLLUP > will have the following results. > | NULL | NULL | 0 | 6 | > | 1 | NULL | 1 | 2 | > | 1 | NULL | 3 | 1 | > | 1 | 1 | 3 | 1 | > ... > {quote} > Observe that the value for GROUPING__ID in the first row should be `3`, while for > the third and fourth rows it should be `0`. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
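The standard-conformant encoding the fix targets can be sketched as follows. This is an illustration of the bit layout only, not Hive's implementation: each GROUP BY column contributes one bit (first column most significant), set when that column is aggregated away in the current grouping set.

```java
public class GroupingId {

    // aggregated[i] is true when the i-th GROUP BY column is rolled up
    // (i.e. NULL-filled) in the current grouping set; the first column
    // maps to the most significant bit.
    public static int of(boolean... aggregated) {
        int id = 0;
        for (boolean a : aggregated) {
            id = (id << 1) | (a ? 1 : 0);
        }
        return id;
    }

    public static void main(String[] args) {
        // GROUP BY key, value WITH ROLLUP:
        System.out.println(of(true, true));   // grand-total row       -> 3
        System.out.println(of(false, true));  // grouped by key only   -> 1
        System.out.println(of(false, false)); // grouped by key, value -> 0
    }
}
```

Under this layout the grand-total row gets GROUPING__ID 3 and fully grouped rows get 0, matching the corrected values in the description above.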
[jira] [Updated] (HIVE-20316) Skip external table file listing for create table event.
[ https://issues.apache.org/jira/browse/HIVE-20316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20316: Status: Patch Available (was: Open) > Skip external table file listing for create table event. > > > Key: HIVE-20316 > URL: https://issues.apache.org/jira/browse/HIVE-20316 > Project: Hive > Issue Type: Bug > Components: HCatalog, repl >Affects Versions: 3.1.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 4.0.0 > > Attachments: HIVE-20316.01-branch-3.patch, HIVE-20316.01.patch, > HIVE-20316.02.patch, HIVE-20316.03.patch > > > We are currently skipping external table replication. We shall also skip > listing all the files in create table event generation for external tables. > External tables might have a very large number of files, so it would take a long > time to list them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20316) Skip external table file listing for create table event.
[ https://issues.apache.org/jira/browse/HIVE-20316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20316: Attachment: HIVE-20316.01-branch-3.patch > Skip external table file listing for create table event. > > > Key: HIVE-20316 > URL: https://issues.apache.org/jira/browse/HIVE-20316 > Project: Hive > Issue Type: Bug > Components: HCatalog, repl >Affects Versions: 3.1.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 4.0.0 > > Attachments: HIVE-20316.01-branch-3.patch, HIVE-20316.01.patch, > HIVE-20316.02.patch, HIVE-20316.03.patch > > > We are currently skipping external table replication. We shall also skip > listing all the files in create table event generation for external tables. > External tables might have a very large number of files, so it would take a long > time to list them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20316) Skip external table file listing for create table event.
[ https://issues.apache.org/jira/browse/HIVE-20316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20316: Status: Open (was: Patch Available) > Skip external table file listing for create table event. > > > Key: HIVE-20316 > URL: https://issues.apache.org/jira/browse/HIVE-20316 > Project: Hive > Issue Type: Bug > Components: HCatalog, repl >Affects Versions: 3.1.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 4.0.0 > > Attachments: HIVE-20316.01.patch, HIVE-20316.02.patch, > HIVE-20316.03.patch > > > We are currently skipping external table replication. We shall also skip > listing all the files in create table event generation for external tables. > External tables might have a very large number of files, so it would take a long > time to list them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20316) Skip external table file listing for create table event.
[ https://issues.apache.org/jira/browse/HIVE-20316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20316: Fix Version/s: 4.0.0 > Skip external table file listing for create table event. > > > Key: HIVE-20316 > URL: https://issues.apache.org/jira/browse/HIVE-20316 > Project: Hive > Issue Type: Bug > Components: HCatalog, repl >Affects Versions: 3.1.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 4.0.0 > > Attachments: HIVE-20316.01.patch, HIVE-20316.02.patch, > HIVE-20316.03.patch > > > We are currently skipping external table replication. We shall also skip > listing all the files in create table event generation for external tables. > External tables might have a very large number of files, so it would take a long > time to list them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20316) Skip external table file listing for create table event.
[ https://issues.apache.org/jira/browse/HIVE-20316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575746#comment-16575746 ] Sankar Hariappan commented on HIVE-20316: - Thanks for the review [~maheshk114] and [~anishek]! 03.patch committed to master. > Skip external table file listing for create table event. > > > Key: HIVE-20316 > URL: https://issues.apache.org/jira/browse/HIVE-20316 > Project: Hive > Issue Type: Bug > Components: HCatalog, repl >Affects Versions: 3.1.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 4.0.0 > > Attachments: HIVE-20316.01.patch, HIVE-20316.02.patch, > HIVE-20316.03.patch > > > We are currently skipping external table replication. We shall also skip > listing all the files in create table event generation for external tables. > External tables might have a very large number of files, so it would take a long > time to list them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
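The committed fix amounts to a guard in create-table event generation. A minimal hypothetical sketch (the helper name and the table-type strings are assumptions, not Hive's actual code): file enumeration is skipped whenever the table is external.

```java
public class CreateTableEventSketch {

    // Hypothetical guard: only capture a file list for tables whose data
    // is actually replicated; external tables are skipped because listing
    // a very large external directory can take a long time.
    public static boolean shouldCaptureFileList(String tableType) {
        return !"EXTERNAL_TABLE".equals(tableType);
    }

    public static void main(String[] args) {
        System.out.println(shouldCaptureFileList("MANAGED_TABLE"));  // true
        System.out.println(shouldCaptureFileList("EXTERNAL_TABLE")); // false
    }
}
```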
[jira] [Updated] (HIVE-20264) Bootstrap repl dump with concurrent write and drop of ACID table makes target inconsistent.
[ https://issues.apache.org/jira/browse/HIVE-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20264: Status: Patch Available (was: Open) Flaky test failure. Re-attaching the same patch. > Bootstrap repl dump with concurrent write and drop of ACID table makes target > inconsistent. > --- > > Key: HIVE-20264 > URL: https://issues.apache.org/jira/browse/HIVE-20264 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 4.0.0, 3.2.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Attachments: HIVE-20264.01.patch, HIVE-20264.02.patch > > > During bootstrap dump of ACID tables, let's consider the below sequence. > - Get lastReplId = last event ID logged. > - Current session (Thread-1), REPL DUMP -> Open txn (Txn1) - Event-10 > - Another session (Thread-2), Open txn (Txn2) - Event-11 > - Thread-2 -> Insert data (T1.D1) to ACID table. - Event-12 > - Thread-2 -> Commit Txn (Txn2) - Event-13 > - Thread-2 -> Drop table (T1) - Event-14 > - Thread-1 -> Dump ACID tables based on current list of tables. So, T1 will > be missing. > - Thread-1 -> Commit Txn (Txn1) > - REPL LOAD from bootstrap dump will skip T1. > - Incremental REPL DUMP will start from Event-10 and hence allocate a write id > for table T1; drop table (T1) is idempotent. So, at the target, entries exist > in the TXN_TO_WRITE_ID and NEXT_WRITE_ID metastore tables. > - Now, when we create another table at the source with the same name T1 and > replicate it, it may lead to incorrect data for readers of T1 at the target. > Couple of proposals: > 1. Make allocate write ID idempotent, which is not possible, as the table doesn't > exist and MM table import may allocate a write id before creating the > table. So, we cannot differentiate these 2 cases. > 2. Make the Drop table event drop entries from the TXN_TO_WRITE_ID and > NEXT_WRITE_ID tables irrespective of whether the table exists at the target. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
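Proposal 2 above can be sketched as an unconditional cleanup. This is a hypothetical in-memory illustration (the class, map, and method names are assumptions; the real TXN_TO_WRITE_ID and NEXT_WRITE_ID entries live in the metastore RDBMS): the drop-table event removes write-id bookkeeping whether or not the table still exists at the target, so a later table with the same name starts clean.

```java
import java.util.HashMap;
import java.util.Map;

public class DropTableCleanupSketch {

    // In-memory stand-in for the NEXT_WRITE_ID metastore table,
    // keyed by qualified table name.
    final Map<String, Long> nextWriteId = new HashMap<>();

    // On a replicated DROP TABLE event, remove write-id state
    // unconditionally, making the event idempotent at the target.
    public void onDropTableEvent(String qualifiedName, boolean tableExists) {
        if (tableExists) {
            // ... drop the table itself (elided) ...
        }
        nextWriteId.remove(qualifiedName); // runs even if the table is gone
    }

    // Demonstrates that stale write-id state is cleared even when the
    // table no longer exists at the target.
    public static boolean cleansWriteIds() {
        DropTableCleanupSketch s = new DropTableCleanupSketch();
        s.nextWriteId.put("default.t1", 5L);
        s.onDropTableEvent("default.t1", false); // table already dropped
        return s.nextWriteId.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(cleansWriteIds()); // true
    }
}
```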
[jira] [Updated] (HIVE-20264) Bootstrap repl dump with concurrent write and drop of ACID table makes target inconsistent.
[ https://issues.apache.org/jira/browse/HIVE-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20264: Attachment: HIVE-20264.02.patch > Bootstrap repl dump with concurrent write and drop of ACID table makes target > inconsistent. > --- > > Key: HIVE-20264 > URL: https://issues.apache.org/jira/browse/HIVE-20264 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 4.0.0, 3.2.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Attachments: HIVE-20264.01.patch, HIVE-20264.02.patch > > > During bootstrap dump of ACID tables, let's consider the below sequence. > - Get lastReplId = last event ID logged. > - Current session (Thread-1), REPL DUMP -> Open txn (Txn1) - Event-10 > - Another session (Thread-2), Open txn (Txn2) - Event-11 > - Thread-2 -> Insert data (T1.D1) to ACID table. - Event-12 > - Thread-2 -> Commit Txn (Txn2) - Event-13 > - Thread-2 -> Drop table (T1) - Event-14 > - Thread-1 -> Dump ACID tables based on current list of tables. So, T1 will > be missing. > - Thread-1 -> Commit Txn (Txn1) > - REPL LOAD from bootstrap dump will skip T1. > - Incremental REPL DUMP will start from Event-10 and hence allocate a write id > for table T1; drop table (T1) is idempotent. So, at the target, entries exist > in the TXN_TO_WRITE_ID and NEXT_WRITE_ID metastore tables. > - Now, when we create another table at the source with the same name T1 and > replicate it, it may lead to incorrect data for readers of T1 at the target. > Couple of proposals: > 1. Make allocate write ID idempotent, which is not possible, as the table doesn't > exist and MM table import may allocate a write id before creating the > table. So, we cannot differentiate these 2 cases. > 2. Make the Drop table event drop entries from the TXN_TO_WRITE_ID and > NEXT_WRITE_ID tables irrespective of whether the table exists at the target. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20264) Bootstrap repl dump with concurrent write and drop of ACID table makes target inconsistent.
[ https://issues.apache.org/jira/browse/HIVE-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20264: Attachment: (was: HIVE-20264.02.patch) > Bootstrap repl dump with concurrent write and drop of ACID table makes target > inconsistent. > --- > > Key: HIVE-20264 > URL: https://issues.apache.org/jira/browse/HIVE-20264 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 4.0.0, 3.2.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Attachments: HIVE-20264.01.patch, HIVE-20264.02.patch > > > During bootstrap dump of ACID tables, let's consider the below sequence. > - Get lastReplId = last event ID logged. > - Current session (Thread-1), REPL DUMP -> Open txn (Txn1) - Event-10 > - Another session (Thread-2), Open txn (Txn2) - Event-11 > - Thread-2 -> Insert data (T1.D1) to ACID table. - Event-12 > - Thread-2 -> Commit Txn (Txn2) - Event-13 > - Thread-2 -> Drop table (T1) - Event-14 > - Thread-1 -> Dump ACID tables based on current list of tables. So, T1 will > be missing. > - Thread-1 -> Commit Txn (Txn1) > - REPL LOAD from bootstrap dump will skip T1. > - Incremental REPL DUMP will start from Event-10 and hence allocate a write id > for table T1; drop table (T1) is idempotent. So, at the target, entries exist > in the TXN_TO_WRITE_ID and NEXT_WRITE_ID metastore tables. > - Now, when we create another table at the source with the same name T1 and > replicate it, it may lead to incorrect data for readers of T1 at the target. > Couple of proposals: > 1. Make allocate write ID idempotent, which is not possible, as the table doesn't > exist and MM table import may allocate a write id before creating the > table. So, we cannot differentiate these 2 cases. > 2. Make the Drop table event drop entries from the TXN_TO_WRITE_ID and > NEXT_WRITE_ID tables irrespective of whether the table exists at the target. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20264) Bootstrap repl dump with concurrent write and drop of ACID table makes target inconsistent.
[ https://issues.apache.org/jira/browse/HIVE-20264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-20264: Status: Open (was: Patch Available) > Bootstrap repl dump with concurrent write and drop of ACID table makes target > inconsistent. > --- > > Key: HIVE-20264 > URL: https://issues.apache.org/jira/browse/HIVE-20264 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 4.0.0, 3.2.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Attachments: HIVE-20264.01.patch, HIVE-20264.02.patch > > > During bootstrap dump of ACID tables, let's consider the below sequence. > - Get lastReplId = last event ID logged. > - Current session (Thread-1), REPL DUMP -> Open txn (Txn1) - Event-10 > - Another session (Thread-2), Open txn (Txn2) - Event-11 > - Thread-2 -> Insert data (T1.D1) to ACID table. - Event-12 > - Thread-2 -> Commit Txn (Txn2) - Event-13 > - Thread-2 -> Drop table (T1) - Event-14 > - Thread-1 -> Dump ACID tables based on current list of tables. So, T1 will > be missing. > - Thread-1 -> Commit Txn (Txn1) > - REPL LOAD from bootstrap dump will skip T1. > - Incremental REPL DUMP will start from Event-10 and hence allocate a write id > for table T1; drop table (T1) is idempotent. So, at the target, entries exist > in the TXN_TO_WRITE_ID and NEXT_WRITE_ID metastore tables. > - Now, when we create another table at the source with the same name T1 and > replicate it, it may lead to incorrect data for readers of T1 at the target. > Couple of proposals: > 1. Make allocate write ID idempotent, which is not possible, as the table doesn't > exist and MM table import may allocate a write id before creating the > table. So, we cannot differentiate these 2 cases. > 2. Make the Drop table event drop entries from the TXN_TO_WRITE_ID and > NEXT_WRITE_ID tables irrespective of whether the table exists at the target. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20331) Query with union all, lateral view and Join fails with "cannot find parent in the child operator"
[ https://issues.apache.org/jira/browse/HIVE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575741#comment-16575741 ] Hive QA commented on HIVE-20331: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935042/HIVE-20331.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14874 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13139/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13139/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13139/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935042 - PreCommit-HIVE-Build > Query with union all, lateral view and Join fails with "cannot find parent in > the child operator" > - > > Key: HIVE-20331 > URL: https://issues.apache.org/jira/browse/HIVE-20331 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Affects Versions: 2.1.1 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Attachments: HIVE-20331.1.patch > > > The following query with Union, Lateral view and Join will fail during > execution with the exception below. 
> {noformat} > create table t1(col1 int); > SELECT 1 AS `col1` > FROM t1 > UNION ALL > SELECT 2 AS `col1` > FROM > (SELECT col1 > FROM t1 > ) x1 > JOIN > (SELECT col1 > FROM > (SELECT > Row_Number() over (PARTITION BY col1 ORDER BY col1) AS `col1` > FROM t1 > ) x2 lateral VIEW explode(map(10,1))`mapObj` AS `col2`, `col3` > ) `expdObj` > {noformat} > {noformat} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive internal > error: cannot find parent in the child operator! > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.MapOperator.initializeMapOperator(MapOperator.java:509) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:116) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > {noformat} > After debugging, it seems we have an issue in the GenMRFileSink1 class, in which we > are setting an incorrect aliasToWork on the MapWork. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
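The error in the stack trace comes from a parent/child consistency check in the operator DAG. A minimal stand-in (class and field names here are illustrative, not Hive's actual Operator API) for the invariant that Operator.initialize enforces: every child operator must list the initializing operator among its parents, which an incorrect aliasToWork in the MapWork can violate.

```java
import java.util.ArrayList;
import java.util.List;

public class OperatorLinkCheck {

    // Minimal stand-in for a node in Hive's operator DAG.
    static class Op {
        final String name;
        final List<Op> parents = new ArrayList<>();
        final List<Op> children = new ArrayList<>();
        Op(String name) { this.name = name; }
    }

    // The invariant: each child must contain this operator in its parent
    // list; otherwise initialization fails with "cannot find parent in
    // the child operator".
    public static boolean consistent(Op op) {
        for (Op child : op.children) {
            if (!child.parents.contains(op)) {
                return false;
            }
        }
        return true;
    }

    // Child wired to the wrong parent, as a stale aliasToWork can cause.
    public static Op brokenExample() {
        Op parent = new Op("TS");
        Op child = new Op("SEL");
        parent.children.add(child); // missing: child.parents.add(parent)
        return parent;
    }

    // Correctly double-linked pair.
    public static Op fixedExample() {
        Op parent = new Op("TS");
        Op child = new Op("SEL");
        parent.children.add(child);
        child.parents.add(parent);
        return parent;
    }
}
```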
[jira] [Commented] (HIVE-20331) Query with union all, lateral view and Join fails with "cannot find parent in the child operator"
[ https://issues.apache.org/jira/browse/HIVE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575730#comment-16575730 ] Hive QA commented on HIVE-20331:

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 6s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 30s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13139/dev-support/hive-personality.sh |
| git revision | master / 3dc736f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13139/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Query with union all, lateral view and Join fails with "cannot find parent in > the child operator" > - > > Key: HIVE-20331 > URL: https://issues.apache.org/jira/browse/HIVE-20331 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Affects Versions: 2.1.1 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Attachments: HIVE-20331.1.patch > > > The following query with Union, Lateral view and Join will fail during > execution with the exception below. 
> {noformat} > create table t1(col1 int); > SELECT 1 AS `col1` > FROM t1 > UNION ALL > SELECT 2 AS `col1` > FROM > (SELECT col1 > FROM t1 > ) x1 > JOIN > (SELECT col1 > FROM > (SELECT > Row_Number() over (PARTITION BY col1 ORDER BY col1) AS `col1` > FROM t1 > ) x2 lateral VIEW explode(map(10,1))`mapObj` AS `col2`, `col3` > ) `expdObj` > {noformat} > {noformat} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive internal > error: cannot find parent in the child operator! > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.MapOperator.initializeMapOperator(MapOperator.java:509) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:116) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > {noformat} > After debugging,
[jira] [Commented] (HIVE-20354) Semijoin hints don't work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575709#comment-16575709 ] Hive QA commented on HIVE-20354: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935038/HIVE-20354.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13138/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13138/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13138/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12935038/HIVE-20354.2.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12935038 - PreCommit-HIVE-Build > Semijoin hints don't work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.1.patch, HIVE-20354.2.patch > > > When a merge statement is rewritten, any comments in the query, which may > include hints such as semijoin hints, are ignored. > Hints present in such comments should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
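The HIVE-20354 description above says the merge rewrite drops hint comments from the original query text. A minimal, hypothetical sketch of the preservation idea: extract any `/*+ ... */` hint comments from the source query before rewriting, so the rewriter can re-attach them. The class and method names below are illustrative, not Hive's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Illustrative sketch (not Hive code): pull optimizer hint comments out of a
 * query string so a statement rewriter can carry them over instead of
 * silently dropping them.
 */
public class HintExtractor {
  // Matches optimizer hint comments such as /*+ semi(t, col, s, 5000) */
  private static final Pattern HINT = Pattern.compile("/\\*\\+.*?\\*/", Pattern.DOTALL);

  public static List<String> extractHints(String sql) {
    List<String> hints = new ArrayList<>();
    Matcher m = HINT.matcher(sql);
    while (m.find()) {
      hints.add(m.group());
    }
    return hints;
  }

  public static void main(String[] args) {
    String merge = "MERGE /*+ semi(t, id, s, 5000) */ INTO t USING s ON t.id = s.id "
        + "WHEN MATCHED THEN UPDATE SET val = s.val";
    // The hint a naive rewrite would lose if it ignores comments:
    System.out.println(extractHints(merge));
  }
}
```

A rewriter that collects these before regenerating the statement can prepend them to the rewritten query, which is the behavior the issue asks for.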
[jira] [Commented] (HIVE-20355) Clean up parameter of HiveConnection.setSchema
[ https://issues.apache.org/jira/browse/HIVE-20355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575708#comment-16575708 ] Hive QA commented on HIVE-20355: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935021/HIVE-20355.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14873 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13137/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13137/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13137/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935021 - PreCommit-HIVE-Build > Clean up parameter of HiveConnection.setSchema > -- > > Key: HIVE-20355 > URL: https://issues.apache.org/jira/browse/HIVE-20355 > Project: Hive > Issue Type: Bug > Components: JDBC >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20355.1.patch > > > Not immediately exploitable, as HS2 only allows one statement at a time. But in > the future, we may support multiple statements in HiveStatement, so it is better to > clean up the database parameter to avoid potential SQL injection. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
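The HIVE-20355 description above is about sanitizing a database name before it is spliced into a statement. A common way to "clean up" such a parameter is to whitelist-validate it as a plain identifier before interpolation. The sketch below is a generic illustration of that idea under those assumptions; it is not the actual HiveConnection.setSchema code, and the class and method names are made up.

```java
import java.util.regex.Pattern;

/**
 * Illustrative sketch (not Hive code): validate a schema name before
 * interpolating it into a statement such as "use <schema>", rejecting
 * anything that could smuggle in a second statement.
 */
public class SchemaNameValidator {
  // Allow only typical unquoted identifier characters.
  private static final Pattern IDENT = Pattern.compile("[A-Za-z0-9_]+");

  public static boolean isValidIdentifier(String name) {
    return name != null && IDENT.matcher(name).matches();
  }

  public static String useStatement(String schema) {
    if (!isValidIdentifier(schema)) {
      throw new IllegalArgumentException("Invalid schema name: " + schema);
    }
    return "use " + schema;
  }

  public static void main(String[] args) {
    System.out.println(useStatement("default"));              // accepted
    System.out.println(isValidIdentifier("x; drop table t")); // rejected: false
  }
}
```

Rejecting rather than escaping keeps the check simple, at the cost of disallowing quoted identifiers with special characters; that trade-off would be a design decision for the real fix.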
[jira] [Commented] (HIVE-20355) Clean up parameter of HiveConnection.setSchema
[ https://issues.apache.org/jira/browse/HIVE-20355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575684#comment-16575684 ] Hive QA commented on HIVE-20355: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 30s{color} | {color:blue} jdbc in master has 17 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13137/dev-support/hive-personality.sh | | git revision | master / 3dc736f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: jdbc U: jdbc | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13137/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Clean up parameter of HiveConnection.setSchema > -- > > Key: HIVE-20355 > URL: https://issues.apache.org/jira/browse/HIVE-20355 > Project: Hive > Issue Type: Bug > Components: JDBC >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20355.1.patch > > > Not immediately exploitable, as HS2 only allows one statement at a time. But in > the future, we may support multiple statements in HiveStatement, so it is better to > clean up the database parameter to avoid potential SQL injection. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20352) Vectorization: Support grouping function
[ https://issues.apache.org/jira/browse/HIVE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575675#comment-16575675 ] Hive QA commented on HIVE-20352: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935017/HIVE-20352.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14873 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13136/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13136/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13136/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935017 - PreCommit-HIVE-Build > Vectorization: Support grouping function > > > Key: HIVE-20352 > URL: https://issues.apache.org/jira/browse/HIVE-20352 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20352.01.patch > > > Support native vectorization for grouping function (part of Grouping Sets) so > we don't need to use VectorUDFAdaptor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20352) Vectorization: Support grouping function
[ https://issues.apache.org/jira/browse/HIVE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575647#comment-16575647 ] Hive QA commented on HIVE-20352: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 10s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s{color} | {color:red} ql: The patch generated 2 new + 372 unchanged - 0 fixed = 374 total (was 372) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13136/dev-support/hive-personality.sh | | git revision | master / 3dc736f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13136/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13136/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Vectorization: Support grouping function > > > Key: HIVE-20352 > URL: https://issues.apache.org/jira/browse/HIVE-20352 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20352.01.patch > > > Support native vectorization for grouping function (part of Grouping Sets) so > we don't need to use VectorUDFAdaptor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575630#comment-16575630 ] Hive QA commented on HIVE-20225: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 43s{color} | {color:blue} serde in master has 195 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 5s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} serde: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s{color} | {color:red} root: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s{color} | {color:red} ql: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13135/dev-support/hive-personality.sh | | git revision | master / 3dc736f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13135/yetus/diff-checkstyle-serde.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13135/yetus/diff-checkstyle-root.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13135/yetus/diff-checkstyle-ql.txt | | modules | C: serde . ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13135/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.11.patch, HIVE-20225.2.patch, HIVE-20225.3.patch, >
[jira] [Commented] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575624#comment-16575624 ] Hive QA commented on HIVE-20225: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935019/HIVE-20225.11.patch {color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14889 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_ts] (batchId=193) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13135/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13135/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13135/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12935019 - PreCommit-HIVE-Build > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.11.patch, HIVE-20225.2.patch, HIVE-20225.3.patch, > HIVE-20225.4.patch, HIVE-20225.5-branch-2.patch, HIVE-20225.6.patch, > HIVE-20225.7.patch, HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import Data from Teradata, Teradata will > generate/require binary files based on the schema. 
> A customized SerDe is needed in order to read these files directly from Hive > or to write such files in order to load them back into TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly consume/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can transparently operate on/generate the exported Teradata > Binary Format files -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19902) Provide Metastore micro-benchmarks
[ https://issues.apache.org/jira/browse/HIVE-19902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-19902: -- Attachment: HIVE-19902.06.patch > Provide Metastore micro-benchmarks > -- > > Key: HIVE-19902 > URL: https://issues.apache.org/jira/browse/HIVE-19902 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.1.0, 4.0.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-19902.01.patch, HIVE-19902.02.patch, > HIVE-19902.03.patch, HIVE-19902.04.patch, HIVE-19902.05.patch, > HIVE-19902.06.patch > > > It would be very useful to have metastore benchmarks to be able to track perf > issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20354) Semijoin hints don't work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575585#comment-16575585 ] Hive QA commented on HIVE-20354: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935038/HIVE-20354.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14873 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13134/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13134/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13134/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935038 - PreCommit-HIVE-Build > Semijoin hints don't work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.1.patch, HIVE-20354.2.patch > > > When a merge statement is rewritten, any comments in the query, which may > include hints such as semijoin hints, are ignored. > Hints present in such comments should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20196) Remove MetastoreConf dependency on server-specific classes
[ https://issues.apache.org/jira/browse/HIVE-20196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-20196: -- Attachment: HIVE-20196.03.patch > Remove MetastoreConf dependency on server-specific classes > -- > > Key: HIVE-20196 > URL: https://issues.apache.org/jira/browse/HIVE-20196 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 4.0.0, 3.2.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-20196.01.patch, HIVE-20196.02.patch, > HIVE-20196.03.patch > > > MetastoreConf has knowledge about some server-specific classes. We need to > separate these into a separate server-specific class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20196) Remove MetastoreConf dependency on server-specific classes
[ https://issues.apache.org/jira/browse/HIVE-20196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-20196: -- Summary: Remove MetastoreConf dependency on server-specific classes (was: Separate MetastoreConf into common and server parts) > Remove MetastoreConf dependency on server-specific classes > -- > > Key: HIVE-20196 > URL: https://issues.apache.org/jira/browse/HIVE-20196 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 4.0.0, 3.2.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-20196.01.patch, HIVE-20196.02.patch, > HIVE-20196.03.patch > > > MetastoreConf has knowledge about some server-specific classes. We need to > separate these into a separate server-specific class. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-20353: Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks, Nishant! > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-20353.patch > > > When we have multiple druid coordinators/overlords and hive tries to connect > to a passive one, it will get a redirect. Currently the http client in druid > storage handler does not follow redirects. We need to check if there is a > redirect and follow that for druid overlord/coordinator -- This message was sent by Atlassian JIRA (v7.6.3#76005)
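The HIVE-20353 fix above amounts to detecting a 3xx response from a passive coordinator/overlord and re-issuing the request against the Location header. A minimal, library-agnostic sketch of that decision logic follows; the class and method names are illustrative, not the actual Druid storage handler code, and the URLs are made-up examples.

```java
import java.net.URI;

/**
 * Illustrative sketch (not Hive/Druid code): decide whether a response is a
 * redirect and resolve the target URL from the Location header, as a client
 * talking to a passive Druid coordinator/overlord would need to do.
 */
public class RedirectHelper {
  public static boolean isRedirect(int statusCode) {
    // 301, 302, 303, 307, 308 all carry a Location header to follow.
    return statusCode == 301 || statusCode == 302 || statusCode == 303
        || statusCode == 307 || statusCode == 308;
  }

  public static String resolveLocation(String requestUrl, String location) {
    // Location may be absolute or relative; resolve against the original URL.
    return URI.create(requestUrl).resolve(location).toString();
  }

  public static void main(String[] args) {
    int status = 307; // hypothetical response from a passive overlord
    if (isRedirect(status)) {
      System.out.println(resolveLocation(
          "http://coordinator1:8081/druid/indexer/v1/task",
          "http://coordinator2:8081/druid/indexer/v1/task"));
    }
  }
}
```

A real client would also cap the number of hops followed to avoid redirect loops between two nodes that each believe the other is active.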
[jira] [Commented] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575579#comment-16575579 ] Ashutosh Chauhan commented on HIVE-20353: - +1 > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20353.patch > > > When we have multiple druid coordinators/overlords and hive tries to connect > to a passive one, it will get a redirect. Currently the http client in druid > storage handler does not follow redirects. We need to check if there is a > redirect and follow that for druid overlord/coordinator -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20354) Semijoin hints don't work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575542#comment-16575542 ] Hive QA commented on HIVE-20354: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 8s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 47s{color} | {color:red} ql: The patch generated 2 new + 613 unchanged - 0 fixed = 615 total (was 613) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 34s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13134/dev-support/hive-personality.sh | | git revision | master / 6894239 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13134/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13134/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Semijoin hints don't work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.1.patch, HIVE-20354.2.patch > > > When a merge statement is rewritten, any comments in the query, which may > include hints such as semijoin hints, are ignored. > Hints present in such comments should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20358) Allow setting variable value from Hive metastore table properties
[ https://issues.apache.org/jira/browse/HIVE-20358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Shao updated HIVE-20358: -- Description: Hive already supports the set command as well as variable substitution: {{set start_ds=2018-08-01;}} {{SELECT COUNT( * ) FROM t WHERE ds >= '${hiveconf:start_ds}';}} Or: {{set start_ds='2018-08-01';}} {{SELECT COUNT( * ) FROM t WHERE ds >= ${hiveconf:start_ds};}} This issue proposes to extend the set syntax to allow running a UDF, including a UDF that queries the metastore: {{SET ;}} For example: {{set start_ds GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{set start_ds GET_PARTITION_PROPERTY('mydb', 'mytable', 'ds=2018-01-01/hr=12', 'last_modified_time');}} This will allow query workflows like the following: {{set last_run_time GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{INSERT INTO TABLE mytable SELECT * FROM src WHERE src.time > last_run_time;}} was: Hive already supports the set command as well as variable substitution: {{set start_ds=2018-08-01;}} {{SELECT COUNT(*) FROM t WHERE ds >= '${hiveconf:start_ds}';}} Or: {{set start_ds='2018-08-01';}} {{SELECT COUNT(*) FROM t WHERE ds >= ${hiveconf:start_ds};}} This issue proposes to extend the set syntax to allow running a UDF, including a UDF that queries the metastore: {{SET ;}} For example: {{set start_ds GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{set start_ds GET_PARTITION_PROPERTY('mydb', 'mytable', 'ds=2018-01-01/hr=12', 'last_modified_time');}} This will allow query workflows like the following: {{set last_run_time GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{INSERT INTO TABLE mytable SELECT * FROM src WHERE src.time > last_run_time;}} > Allow setting variable value from Hive metastore table properties > - > > Key: HIVE-20358 > URL: https://issues.apache.org/jira/browse/HIVE-20358 > Project: Hive > Issue Type: New Feature > Components: Query Processor >Reporter: Zheng Shao >Priority: Minor > > Hive
already supports the set command as well as variable substitution: > {{set start_ds=2018-08-01;}} > {{SELECT COUNT( * ) FROM t WHERE ds >= '${hiveconf:start_ds}';}} > > Or: > {{set start_ds='2018-08-01';}} > {{SELECT COUNT( * ) FROM t WHERE ds >= ${hiveconf:start_ds};}} > > This issue proposes to extend the set syntax to allow running a UDF, including a UDF > that queries the metastore: > {{SET ;}} > > For example: > {{set start_ds GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} > {{set start_ds GET_PARTITION_PROPERTY('mydb', 'mytable', > 'ds=2018-01-01/hr=12', 'last_modified_time');}} > > This will allow query workflows like the following: > {{set last_run_time GET_TABLE_PROPERTY('mydb', 'mytable', > 'last_modified_time');}} > {{INSERT INTO TABLE mytable SELECT * FROM src WHERE src.time > > last_run_time;}} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20358) Allow setting variable value from Hive metastore table properties
[ https://issues.apache.org/jira/browse/HIVE-20358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Shao updated HIVE-20358: -- Description: Hive already supports the set command as well as variable substitution: {{set start_ds=2018-08-01;}} {{SELECT COUNT(*) FROM t WHERE ds >= '${hiveconf:start_ds}';}} Or: {{set start_ds='2018-08-01';}} {{SELECT COUNT(*) FROM t WHERE ds >= ${hiveconf:start_ds};}} This issue proposes to extend the set syntax to allow running a UDF, including a UDF that queries the metastore: {{SET ;}} For example: {{set start_ds GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{set start_ds GET_PARTITION_PROPERTY('mydb', 'mytable', 'ds=2018-01-01/hr=12', 'last_modified_time');}} This will allow query workflows like the following: {{set last_run_time GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{INSERT INTO TABLE mytable SELECT * FROM src WHERE src.time > last_run_time;}} was: Hive already supports the set command as well as variable substitution: {{set start_ds=2018-08-01;}} {{SELECT COUNT(*) FROM t WHERE ds >= '${hiveconf:start_ds}';}} Or: {{set start_ds='2018-08-01';}} {{SELECT COUNT(*) FROM t WHERE ds >= ${hiveconf:start_ds};}} This issue proposes to extend the set syntax to allow running a UDF, including a UDF that queries the metastore: {{SET ;}} For example: {{set start_ds GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{set start_ds GET_PARTITION_PROPERTY('mydb', 'mytable', 'ds=2018-01-01/hr=12', 'last_modified_time');}} This will allow query workflows like the following: {{set last_run_time GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} {{INSERT INTO TABLE mytable SELECT * FROM src WHERE src.time > last_run_time;}} > Allow setting variable value from Hive metastore table properties > - > > Key: HIVE-20358 > URL: https://issues.apache.org/jira/browse/HIVE-20358 > Project: Hive > Issue Type: New Feature > Components: Query Processor >Reporter: Zheng Shao >Priority: Minor > > Hive already
supports the set command as well as variable substitution: > {{set start_ds=2018-08-01;}} > {{SELECT COUNT(*) FROM t WHERE ds >= '${hiveconf:start_ds}';}} > > Or: > {{set start_ds='2018-08-01';}} > {{SELECT COUNT(*) FROM t WHERE ds >= ${hiveconf:start_ds};}} > > This issue proposes to extend the set syntax to allow running a UDF, including a UDF > that queries the metastore: > {{SET ;}} > > For example: > {{set start_ds GET_TABLE_PROPERTY('mydb', 'mytable', 'last_modified_time');}} > {{set start_ds GET_PARTITION_PROPERTY('mydb', 'mytable', > 'ds=2018-01-01/hr=12', 'last_modified_time');}} > > This will allow query workflows like the following: > {{set last_run_time GET_TABLE_PROPERTY('mydb', 'mytable', > 'last_modified_time');}} > {{INSERT INTO TABLE mytable SELECT * FROM src WHERE src.time > > last_run_time;}} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18244) CachedStore: Fix UT when CachedStore is enabled
[ https://issues.apache.org/jira/browse/HIVE-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575514#comment-16575514 ] Hive QA commented on HIVE-18244: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935008/HIVE-18244.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1108 failed/errored test(s), 14875 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite] (batchId=264) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=192) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.org.apache.hadoop.hive.cli.TestMiniLlapCliDriver (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[bucket6] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[dynamic_semijoin_user_level] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[file_with_header_footer] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_stats] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge10] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge4] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_struct_type_vectorization] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[remote_script] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[schemeAuthority] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[temp_table_external] (batchId=154) 
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[tez_union_dynamic_partition] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[unionDistinct_1] (batchId=153) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_3] (batchId=107) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] (batchId=107) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=107) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[tez-tag] (batchId=107) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[vector_join_part_col_char] (batchId=107) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[vector_non_string_partition] (batchId=107) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.addPartitions (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.alterPartitions (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.createTableWithConstraints (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.databases (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.defaultConstraints (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.dropPartitions (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.functions (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.getPartitions (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.getTableMeta (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.listPartitions (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.notNullConstraint (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.primaryKeyAndForeignKey (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.tablesCreateDropAlterTruncate (batchId=219) 
org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.tablesGetExists (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.tablesList (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultClient.uniqueConstraint (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultSvr.getTableMeta (batchId=227) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultSvr.listPartitions (batchId=227) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultSvr.tablesCreateDropAlterTruncate (batchId=227) org.apache.hadoop.hive.metastore.TestCatalogNonDefaultSvr.tablesList (batchId=227) org.apache.hadoop.hive.metastore.TestCatalogOldClient.getTableMeta (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogOldClient.listPartitions (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogOldClient.tablesCreateDropAlterTruncate (batchId=219) org.apache.hadoop.hive.metastore.TestCatalogOldClient.tablesList (batchId=219)
[jira] [Commented] (HIVE-18244) CachedStore: Fix UT when CachedStore is enabled
[ https://issues.apache.org/jira/browse/HIVE-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575507#comment-16575507 ] Hive QA commented on HIVE-18244: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} metastore-server in master failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 31s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s{color} | {color:red} metastore-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense xml javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13132/dev-support/hive-personality.sh | | git revision | master / 6894239 | | Default Java | 1.8.0_111 | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13132/yetus/branch-findbugs-standalone-metastore_metastore-server.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-13132/yetus/whitespace-eol.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13132/yetus/patch-findbugs-standalone-metastore_metastore-server.txt | | modules | C: . standalone-metastore/metastore-server U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13132/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > CachedStore: Fix UT when CachedStore is enabled > --- > > Key: HIVE-18244 > URL: https://issues.apache.org/jira/browse/HIVE-18244 > Project: Hive > Issue Type: Bug >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-18244.1.patch, HIVE-18244.2.patch, > HIVE-18244.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20194) HiveMetastoreClient should use reflection to instantiate embedded HMS instance
[ https://issues.apache.org/jira/browse/HIVE-20194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-20194: -- Attachment: HIVE-20194.04.patch > HiveMetastoreClient should use reflection to instantiate embedded HMS instance > -- > > Key: HIVE-20194 > URL: https://issues.apache.org/jira/browse/HIVE-20194 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Affects Versions: 4.0.0, 3.2.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > Attachments: HIVE-20194.01.patch, HIVE-20194.02.patch, > HIVE-20194.03.patch, HIVE-20194.04.patch > > > When HiveMetastoreClient is used in embedded mode, it instantiates the metastore > server. Since we want to separate client and server code, we can no longer > instantiate the server class directly but need to use reflection for that. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
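As a sketch of the pattern this issue describes (the class name and constructor signature below are illustrative stand-ins, not the real embedded HMS handler API), replacing a direct `new` with a reflective lookup removes the client module's compile-time dependency on the server class:

```java
import java.lang.reflect.Constructor;

public class EmbeddedHandlerLoader {

    // Instead of writing "new EmbeddedHandler(conf)" — which would require the
    // client module to compile against the server module — look the class up by
    // name at runtime and invoke the constructor that accepts the config object.
    // The className value is supplied by the caller; the real HMS handler class
    // name differs from anything shown here.
    public static Object instantiate(String className, Object conf) throws Exception {
        Class<?> clazz = Class.forName(className);
        Constructor<?> ctor = clazz.getDeclaredConstructor(conf.getClass());
        return ctor.newInstance(conf);
    }
}
```

The client keeps only a string reference to the server class, so the server jar is needed on the classpath only when embedded mode is actually used.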
[jira] [Commented] (HIVE-20246) Configurable collecting stats by using DO_NOT_UPDATE_STATS table property
[ https://issues.apache.org/jira/browse/HIVE-20246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575474#comment-16575474 ] Aihua Xu commented on HIVE-20246: - [~afan] If you are removing DO_NOT_UPDATE_STATS from the table properties, then subsequent partitions will get their stats updated. I notice that for the alter partition calls, we are using MetaStoreUtils.requireCalStats() to check if stats need to be updated. You may take a look at whether we can reuse MetaStoreUtils.requireCalStats() to check if stats need to be gathered when adding new partitions. If not, maybe we can add a similar helper function? > Configurable collecting stats by using DO_NOT_UPDATE_STATS table property > - > > Key: HIVE-20246 > URL: https://issues.apache.org/jira/browse/HIVE-20246 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: Alice Fan >Assignee: Alice Fan >Priority: Minor > Fix For: 4.0.0 > > Attachments: HIVE-20246.4.patch > > > By default, Hive collects stats when running operations like alter table > partition(s), create table, and create external table. However, collecting > stats requires the Metastore to list all files under the table directory, and the > file listing operation can be very expensive, particularly on filesystems like > S3. > HIVE-18743 made it possible for the DO_NOT_UPDATE_STATS table property to selectively > prevent stats collection. > This Jira aims at introducing the DO_NOT_UPDATE_STATS table property into > MetaStoreUtils.updatePartitionStatsFast. With this, a user can > selectively prevent stats collection at the table level when doing an alter table partition(s) > operation. For example, after 'Alter Table S3_Table set > tblproperties('DO_NOT_UPDATE_STATS'='TRUE');' the MetaStore will not collect > stats for the specified S3_Table when running alter table add partition(key1=val1, > key2=val2); -- This message was sent by Atlassian JIRA (v7.6.3#76005)
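The property check being discussed might look roughly like the helper below. The class and method names are hypothetical (the real logic would sit near MetaStoreUtils.updatePartitionStatsFast and the requireCalStats() path), but it illustrates the table-level gate on DO_NOT_UPDATE_STATS:

```java
import java.util.Map;

public class StatsGate {

    // Hypothetical helper: returns true when the table's parameters ask the
    // metastore to skip stats collection, mirroring the DO_NOT_UPDATE_STATS
    // property discussed above. The actual Hive helper may differ in name,
    // location, and exact semantics.
    public static boolean statsUpdateDisabled(Map<String, String> tableParams) {
        if (tableParams == null) {
            return false;
        }
        String value = tableParams.get("DO_NOT_UPDATE_STATS");
        // Treat the flag as case-insensitive: 'TRUE' and 'true' both disable stats.
        return value != null && value.equalsIgnoreCase("TRUE");
    }
}
```

Both the alter-partition and add-partition paths could then call one shared check instead of duplicating the lookup, which is essentially the reuse Aihua is suggesting.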
[jira] [Commented] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575423#comment-16575423 ] Hive QA commented on HIVE-20353: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935004/HIVE-20353.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14872 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13131/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13131/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13131/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935004 - PreCommit-HIVE-Build > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20353.patch > > > When we have multiple Druid coordinators/overlords and Hive tries to connect > to a passive one, it will get a redirect. Currently the HTTP client in the Druid > storage handler does not follow redirects. We need to check whether the response is a > redirect and, if so, follow it to the active Druid overlord/coordinator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
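Following a redirect from a passive node usually comes down to inspecting the status code and the Location header before retrying. The sketch below is a generic illustration of that decision, not the actual Druid storage handler client code; the class and method names are invented for the example:

```java
import java.util.Optional;

public class RedirectHelper {

    // A passive Druid overlord/coordinator answers with a 3xx status and a
    // Location header pointing at the active node. Given the status code and
    // the Location header value (null when absent), return the URL the client
    // should retry against, or empty if no redirect should be followed.
    public static Optional<String> redirectTarget(int statusCode, String locationHeader) {
        boolean isRedirect = statusCode == 301 || statusCode == 302
                || statusCode == 307 || statusCode == 308;
        if (isRedirect && locationHeader != null && !locationHeader.isEmpty()) {
            return Optional.of(locationHeader);
        }
        return Optional.empty();
    }
}
```

In a real client this check would sit in the response handler, with a cap on the number of hops to avoid redirect loops between nodes.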
[jira] [Assigned] (HIVE-17040) Join elimination in the presence of FK relationship
[ https://issues.apache.org/jira/browse/HIVE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-17040: -- Assignee: Jesus Camacho Rodriguez > Join elimination in the presence of FK relationship > --- > > Key: HIVE-17040 > URL: https://issues.apache.org/jira/browse/HIVE-17040 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > > If the PK/UK table is not filtered, we can safely remove the join. > A simple example: > {code:sql} > SELECT c_current_cdemo_sk > FROM customer JOIN customer_address > ON c_current_addr_sk = ca_address_sk; > {code} > As a Calcite rule, we could implement this rewriting by 1) matching a Project > on top of a Join operator, 2) checking that only columns from the FK side are used > in the Project, 3) checking that the join condition matches the FK - PK/UK > relationship, 4) pulling all the predicates from the PK/UK side and checking > that the input is not filtered, and 5) removing the join, possibly adding an > IS NOT NULL condition on the join column from the FK side. > If the PK/UK table is filtered, we should still transform the Join into a > SemiJoin operator. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20332) Materialized views: Introduce heuristic on selectivity over ROW__ID to favour incremental rebuild
[ https://issues.apache.org/jira/browse/HIVE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575417#comment-16575417 ] Jesus Camacho Rodriguez commented on HIVE-20332: [~ashutoshc], this one got a clean run too. Could you take a look? https://reviews.apache.org/r/68261/ Thanks > Materialized views: Introduce heuristic on selectivity over ROW__ID to favour > incremental rebuild > - > > Key: HIVE-20332 > URL: https://issues.apache.org/jira/browse/HIVE-20332 > Project: Hive > Issue Type: Improvement > Components: Materialized views >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20332.01.patch, HIVE-20332.01.patch, > HIVE-20332.patch > > > Currently, we do not expose stats over {{ROW\_\_ID.writeId}} to the optimizer > (this should be fixed by HIVE-20313). Even if we did, we always assume > uniform distribution of the column values, which can easily lead to > overestimations on the number of rows read when we filter on > {{ROW\_\_ID.writeId}} for materialized views (think about a large transaction > for MV creation and then small ones for incremental maintenance). This > overestimation can lead to incremental view maintenance not being triggered, > as the cost of the incremental plan is overestimated (we think we will read more > rows than we actually do). This could be fixed by introducing histograms that > better reflect the column value distribution. > Until both fixes are implemented, we will use a config variable that sets > the selectivity for filter conditions on {{ROW\_\_ID}} during the cost > calculation. Setting that variable to a low value will favour incremental > rebuild over full rebuild. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-20347) hive.optimize.sort.dynamic.partition should work with partitioned CTAS and MV
[ https://issues.apache.org/jira/browse/HIVE-20347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez resolved HIVE-20347. Resolution: Fixed Fix Version/s: 3.2.0 4.0.0 > hive.optimize.sort.dynamic.partition should work with partitioned CTAS and MV > - > > Key: HIVE-20347 > URL: https://issues.apache.org/jira/browse/HIVE-20347 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 4.0.0, 3.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Fix For: 4.0.0, 3.2.0 > > Attachments: HIVE-20347.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20347) hive.optimize.sort.dynamic.partition should work with partitioned CTAS and MV
[ https://issues.apache.org/jira/browse/HIVE-20347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-20347: --- Status: In Progress (was: Patch Available) Pushed to master, branch-3. Thanks for reviewing [~ashutoshc] > hive.optimize.sort.dynamic.partition should work with partitioned CTAS and MV > - > > Key: HIVE-20347 > URL: https://issues.apache.org/jira/browse/HIVE-20347 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 4.0.0, 3.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20347.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20340) Druid Needs Explicit CASTs from Timestamp to STRING when the output of timestamp function is used as String
[ https://issues.apache.org/jira/browse/HIVE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575399#comment-16575399 ] Ashutosh Chauhan commented on HIVE-20340: - +1 pending tests > Druid Needs Explicit CASTs from Timestamp to STRING when the output of > timestamp function is used as String > --- > > Key: HIVE-20340 > URL: https://issues.apache.org/jira/browse/HIVE-20340 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-20340.1.patch, HIVE-20340.2.patch, > HIVE-20340.3.patch > > > Druid time expressions return numeric values in the form of milliseconds (instead of a > formatted timestamp). Because of this, expressions/functions that expect a > string-typed argument end up returning different values for time > expression inputs. > e.g. > {code} > SELECT SUBSTRING(to_date(datetime0),4) FROM tableau_orc.calcs; > | 4-07-25 | > SELECT SUBSTRING(to_date(datetime0),4) FROM druid_tableau.calcs; > | 002240 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM tableau_orc.calcs; > | 2004-07-17 00:00:00 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM druid_tableau.calcs; > | 109045440 00:00:00 | > {code} > Druid needs an explicit cast to make this work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20340) Druid Needs Explicit CASTs from Timestamp to STRING when the output of timestamp function is used as String
[ https://issues.apache.org/jira/browse/HIVE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-20340: --- Status: Open (was: Patch Available) > Druid Needs Explicit CASTs from Timestamp to STRING when the output of > timestamp function is used as String > --- > > Key: HIVE-20340 > URL: https://issues.apache.org/jira/browse/HIVE-20340 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-20340.1.patch, HIVE-20340.2.patch, > HIVE-20340.3.patch > > > Druid time expressions return numeric values in the form of milliseconds (instead of a > formatted timestamp). Because of this, expressions/functions that expect a > string-typed argument end up returning different values for time > expression inputs. > e.g. > {code} > SELECT SUBSTRING(to_date(datetime0),4) FROM tableau_orc.calcs; > | 4-07-25 | > SELECT SUBSTRING(to_date(datetime0),4) FROM druid_tableau.calcs; > | 002240 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM tableau_orc.calcs; > | 2004-07-17 00:00:00 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM druid_tableau.calcs; > | 109045440 00:00:00 | > {code} > Druid needs an explicit cast to make this work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20340) Druid Needs Explicit CASTs from Timestamp to STRING when the output of timestamp function is used as String
[ https://issues.apache.org/jira/browse/HIVE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-20340: --- Status: Patch Available (was: Open) > Druid Needs Explicit CASTs from Timestamp to STRING when the output of > timestamp function is used as String > --- > > Key: HIVE-20340 > URL: https://issues.apache.org/jira/browse/HIVE-20340 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-20340.1.patch, HIVE-20340.2.patch, > HIVE-20340.3.patch > > > Druid time expressions return numeric values in the form of milliseconds (instead of a > formatted timestamp). Because of this, expressions/functions that expect a > string-typed argument end up returning different values for time > expression inputs. > e.g. > {code} > SELECT SUBSTRING(to_date(datetime0),4) FROM tableau_orc.calcs; > | 4-07-25 | > SELECT SUBSTRING(to_date(datetime0),4) FROM druid_tableau.calcs; > | 002240 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM tableau_orc.calcs; > | 2004-07-17 00:00:00 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM druid_tableau.calcs; > | 109045440 00:00:00 | > {code} > Druid needs an explicit cast to make this work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20340) Druid Needs Explicit CASTs from Timestamp to STRING when the output of timestamp function is used as String
[ https://issues.apache.org/jira/browse/HIVE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-20340: --- Attachment: HIVE-20340.3.patch > Druid Needs Explicit CASTs from Timestamp to STRING when the output of > timestamp function is used as String > --- > > Key: HIVE-20340 > URL: https://issues.apache.org/jira/browse/HIVE-20340 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-20340.1.patch, HIVE-20340.2.patch, > HIVE-20340.3.patch > > > Druid time expressions return numeric values in the form of milliseconds (instead of a > formatted timestamp). Because of this, expressions/functions that expect a > string-typed argument end up returning different values for time > expression inputs. > e.g. > {code} > SELECT SUBSTRING(to_date(datetime0),4) FROM tableau_orc.calcs; > | 4-07-25 | > SELECT SUBSTRING(to_date(datetime0),4) FROM druid_tableau.calcs; > | 002240 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM tableau_orc.calcs; > | 2004-07-17 00:00:00 | > SELECT CONCAT(to_date(datetime0),' 00:00:00') FROM druid_tableau.calcs; > | 109045440 00:00:00 | > {code} > Druid needs an explicit cast to make this work. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20351) GenericUDFNamedStruct should constant fold at compile time
[ https://issues.apache.org/jira/browse/HIVE-20351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575390#comment-16575390 ] Gopal V commented on HIVE-20351: {code} create temporary table test(x int) stored as orc; insert into test values(1),(2),(3); select x, to_json(named_struct('Total', 1.0)), named_struct('Total', 1.0) from test order by x; {code} > GenericUDFNamedStruct should constant fold at compile time > -- > > Key: HIVE-20351 > URL: https://issues.apache.org/jira/browse/HIVE-20351 > Project: Hive > Issue Type: Bug >Reporter: Mykhailo Kysliuk >Assignee: Mykhailo Kysliuk >Priority: Minor > Attachments: HIVE-20351.1.patch > > > Reproduced at hive-3.0. > When we run hive query: > {code:java} > select named_struct('Total','Total') from test; > {code} > We could see the ERROR at hiveserver logs: > {code:java} > 2018-05-25T15:18:13,182 ERROR [main] optimizer.ConstantPropagateProcFactory: > Unable to evaluate > org.apache.hadoop.hive.ql.udf.generic.GenericUDFNamedStruct@a0bf272. Return > value unrecoginizable. > {code} > This error is harmless because all results are correct. But named_struct > constant values should be processed correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20331) Query with union all, lateral view and Join fails with "cannot find parent in the child operator"
[ https://issues.apache.org/jira/browse/HIVE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-20331: Attachment: (was: HIVE-20331.1.patch) > Query with union all, lateral view and Join fails with "cannot find parent in > the child operator" > - > > Key: HIVE-20331 > URL: https://issues.apache.org/jira/browse/HIVE-20331 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Affects Versions: 2.1.1 >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Attachments: HIVE-20331.1.patch > > > The following query with Union, Lateral view and Join will fail during > execution with the exception below. > {noformat} > create table t1(col1 int); > SELECT 1 AS `col1` > FROM t1 > UNION ALL > SELECT 2 AS `col1` > FROM > (SELECT col1 > FROM t1 > ) x1 > JOIN > (SELECT col1 > FROM > (SELECT > Row_Number() over (PARTITION BY col1 ORDER BY col1) AS `col1` > FROM t1 > ) x2 lateral VIEW explode(map(10,1))`mapObj` AS `col2`, `col3` > ) `expdObj` > {noformat} > {noformat} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive internal > error: cannot find parent in the child operator! > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.MapOperator.initializeMapOperator(MapOperator.java:509) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:116) > ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] > {noformat} > After debugging, seems we have issues in GenMRFileSink1 class in which we are > setting incorrect aliasToWork to the MapWork. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20331) Query with union all, lateral view and Join fails with "cannot find parent in the child operator"
[ https://issues.apache.org/jira/browse/HIVE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-20331: Status: Patch Available (was: In Progress) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20331) Query with union all, lateral view and Join fails with "cannot find parent in the child operator"
[ https://issues.apache.org/jira/browse/HIVE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575383#comment-16575383 ] Aihua Xu commented on HIVE-20331: - Seems there was a glitch that kept the job result from being posted: https://builds.apache.org/job/PreCommit-HIVE-Build/13104/. All the tests passed, but I will retrigger it. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20331) Query with union all, lateral view and Join fails with "cannot find parent in the child operator"
[ https://issues.apache.org/jira/browse/HIVE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-20331: Attachment: HIVE-20331.1.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20331) Query with union all, lateral view and Join fails with "cannot find parent in the child operator"
[ https://issues.apache.org/jira/browse/HIVE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-20331: Status: In Progress (was: Patch Available) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575375#comment-16575375 ] Hive QA commented on HIVE-20353: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} druid-handler in master has 13 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} druid-handler: The patch generated 5 new + 146 unchanged - 0 fixed = 151 total (was 146) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13131/dev-support/hive-personality.sh | | git revision | master / 873d31f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13131/yetus/diff-checkstyle-druid-handler.txt | | modules | C: druid-handler U: druid-handler | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13131/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20353.patch > > > When we have multiple druid coordinators/overlords and hive tries to connect > to a passive one, it will get a redirect. Currently the http client in druid > storage handler does not follow redirects. We need to check if there is a > redirect and follow that for druid overlord/coordinator -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19767) HiveServer2 should take hiveconf for non Hive properties
[ https://issues.apache.org/jira/browse/HIVE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575368#comment-16575368 ] Hive QA commented on HIVE-19767: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935002/HIVE-19767.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14871 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13130/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13130/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13130/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935002 - PreCommit-HIVE-Build > HiveServer2 should take hiveconf for non Hive properties > > > Key: HIVE-19767 > URL: https://issues.apache.org/jira/browse/HIVE-19767 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.2.2, 3.0.0, 2.3.2 >Reporter: Szehon Ho >Assignee: Szehon Ho >Priority: Major > Attachments: HIVE-19767.2.patch, HIVE-19767.patch > > > The -hiveconf command line option works in HiveServer2 with properties in > HiveConf.java, but not so well with other properties (like mapred properties > or spark properties to control underlying execution engine, or custom > properties understood by custom listeners) > It is inconsistent with HiveCLI. 
> HiveCLI behavior: > {noformat} > ./bin/hive --hiveconf a=b > hive> set a; > a=b {noformat} > HiveServer2 behavior: > {noformat} > ./bin/hiveserver2 --hiveconf a=b > beeline> set a; > +-+ > | set | > +-+ > | a is undefined | > +-+{noformat} > Although it is possible to set up hive-site.xml or even mapred-site.xml to > fill in the relevant properties, it is more convenient when testing HS2 with > different configuration to be able to use --hiveconf to change on the fly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20354) Semijoin hints dont work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-20354: -- Attachment: HIVE-20354.2.patch > Semijoin hints dont work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.1.patch, HIVE-20354.2.patch > > > When a merge statement is rewritten, it ignores any comment in the query, > which may include hints such as semijoin hints. > Such hints should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
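As a hedged sketch of the problem (table names invented; the semi() hint syntax is taken from Hive's semijoin_hint q-tests and shown here as an assumption), the hint lives in a comment that the MERGE rewrite currently drops:
{code}
-- Hypothetical example: the semi() hint below is lost when the MERGE is
-- internally rewritten into join form, so no semijoin reduction happens.
MERGE /*+ semi(s, id, 5000) */ INTO target t
USING source s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.val);
{code}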
[jira] [Updated] (HIVE-20155) Semijoin Reduction : Put all the min-max filters before all the bloom filters
[ https://issues.apache.org/jira/browse/HIVE-20155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-20155: -- Attachment: (was: HIVE-20354.2.patch) > Semijoin Reduction : Put all the min-max filters before all the bloom filters > - > > Key: HIVE-20155 > URL: https://issues.apache.org/jira/browse/HIVE-20155 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > > If there are more than 1 semijoin reduction filters, apply all min-max > filters before any of the bloom filters are applied as bloom filter lookup is > expensive. > > cc [~gopalv] [~jdere] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19767) HiveServer2 should take hiveconf for non Hive properties
[ https://issues.apache.org/jira/browse/HIVE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575314#comment-16575314 ] Hive QA commented on HIVE-19767: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 45s{color} | {color:blue} service in master has 48 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} service: The patch generated 0 new + 35 unchanged - 2 fixed = 35 total (was 37) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13130/dev-support/hive-personality.sh | | git revision | master / 873d31f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: service U: service | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13130/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > HiveServer2 should take hiveconf for non Hive properties > > > Key: HIVE-19767 > URL: https://issues.apache.org/jira/browse/HIVE-19767 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.2.2, 3.0.0, 2.3.2 >Reporter: Szehon Ho >Assignee: Szehon Ho >Priority: Major > Attachments: HIVE-19767.2.patch, HIVE-19767.patch > > > The -hiveconf command line option works in HiveServer2 with properties in > HiveConf.java, but not so well with other properties (like mapred properties > or spark properties to control underlying execution engine, or custom > properties understood by custom listeners) > It is inconsistent with HiveCLI. 
> HiveCLI behavior: > {noformat} > ./bin/hive --hiveconf a=b > hive> set a; > a=b {noformat} > HiveServer2 behavior: > {noformat} > ./bin/hiveserver2 --hiveconf a=b > beeline> set a; > +-+ > | set | > +-+ > | a is undefined | > +-+{noformat} > Although it is possible to set up hive-site.xml or even mapred-site.xml to > fill in the relevant properties, it is more convenient when testing HS2 with > different configuration to be able to use --hiveconf to change on the fly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20339) Vectorization: Lift unneeded restriction causing some PTF with RANK not to be vectorized
[ https://issues.apache.org/jira/browse/HIVE-20339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575294#comment-16575294 ] Hive QA commented on HIVE-20339: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12935000/HIVE-20339.03.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14872 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13129/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13129/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13129/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12935000 - PreCommit-HIVE-Build > Vectorization: Lift unneeded restriction causing some PTF with RANK not to be > vectorized > > > Key: HIVE-20339 > URL: https://issues.apache.org/jira/browse/HIVE-20339 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20339.01.patch, HIVE-20339.02.patch, > HIVE-20339.03.patch > > > Unnecessary: "PTF operator: More than 1 argument expression of aggregation > function rank" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20155) Semijoin Reduction : Put all the min-max filters before all the bloom filters
[ https://issues.apache.org/jira/browse/HIVE-20155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-20155: -- Attachment: HIVE-20354.2.patch > Semijoin Reduction : Put all the min-max filters before all the bloom filters > - > > Key: HIVE-20155 > URL: https://issues.apache.org/jira/browse/HIVE-20155 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.2.patch > > > If there are more than 1 semijoin reduction filters, apply all min-max > filters before any of the bloom filters are applied as bloom filter lookup is > expensive. > > cc [~gopalv] [~jdere] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20339) Vectorization: Lift unneeded restriction causing some PTF with RANK not to be vectorized
[ https://issues.apache.org/jira/browse/HIVE-20339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575275#comment-16575275 ] Hive QA commented on HIVE-20339: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 48s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 7s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 45s{color} | {color:red} ql: The patch generated 2 new + 405 unchanged - 2 fixed = 407 total (was 407) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13129/dev-support/hive-personality.sh | | git revision | master / 873d31f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13129/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13129/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Vectorization: Lift unneeded restriction causing some PTF with RANK not to be > vectorized > > > Key: HIVE-20339 > URL: https://issues.apache.org/jira/browse/HIVE-20339 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20339.01.patch, HIVE-20339.02.patch, > HIVE-20339.03.patch > > > Unnecessary: "PTF operator: More than 1 argument expression of aggregation > function rank" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19613) GenericUDTFGetSplits should handle fetch task with temp table rewrite
[ https://issues.apache.org/jira/browse/HIVE-19613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575271#comment-16575271 ] Prasanth Jayachandran commented on HIVE-19613: -- [~jmarhuen] make sense. Yeah. I agree it has to be done after replanning the query as well. > GenericUDTFGetSplits should handle fetch task with temp table rewrite > - > > Key: HIVE-19613 > URL: https://issues.apache.org/jira/browse/HIVE-19613 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Eric Wohlstadter >Assignee: Prasanth Jayachandran >Priority: Major > Fix For: 3.1.0, 3.0.1, 4.0.0 > > Attachments: HIVE-19613.1.patch, HIVE-19613.2.patch, > HIVE-19613.3.patch > > > GenericUDTFGetSplits fails for fetch task only queries. Fetch task only > queries can be handled same way as >1 task queries using temp tables. > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Was expecting a > single TezTask. > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.createPlanFragment(GenericUDTFGetSplits.java:262) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:201) > at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116) > at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:984) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:930) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:917) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) > at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:984) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:930) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:492) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:484) > at 
org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:145) > ... 16 more{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20347) hive.optimize.sort.dynamic.partition should work with partitioned CTAS and MV
[ https://issues.apache.org/jira/browse/HIVE-20347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575251#comment-16575251 ] Ashutosh Chauhan commented on HIVE-20347: - +1 > hive.optimize.sort.dynamic.partition should work with partitioned CTAS and MV > - > > Key: HIVE-20347 > URL: https://issues.apache.org/jira/browse/HIVE-20347 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 4.0.0, 3.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-20347.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
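For context, a hedged sketch (table names invented) of the partitioned-CTAS shape this fix targets; with the optimization enabled, a sort on the dynamic partition column should be inserted before the file sink:
{code}
SET hive.optimize.sort.dynamic.partition=true;
-- Hypothetical partitioned CTAS; previously the sort-dynamic-partition
-- optimization did not apply to CTAS/materialized-view writes.
CREATE TABLE sales_part PARTITIONED BY (ds) AS
SELECT amount, ds FROM sales_staging;
{code}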
[jira] [Commented] (HIVE-20351) GenericUDFNamedStruct should constant fold at compile time
[ https://issues.apache.org/jira/browse/HIVE-20351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575238#comment-16575238 ] Hive QA commented on HIVE-20351: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12934996/HIVE-20351.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 48 failed/errored test(s), 14872 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=267) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table] (batchId=267) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=267) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_table] (batchId=267) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_coltype] (batchId=27) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5] (batchId=44) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5a] (batchId=56) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_into1] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join45] (batchId=21) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join47] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_query_multiskew_2] (batchId=74) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin47] (batchId=63) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[null_cast] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcr] (batchId=63) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=52) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup2] (batchId=86) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup3] (batchId=7) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup4] (batchId=79) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pointlookup] (batchId=4) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_47] (batchId=31) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stat_estimate_drill] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[structin] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tablevalues] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_inline] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_struct] (batchId=34) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_union] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_bucket] (batchId=27) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_struct_in] (batchId=48) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[insert_into1] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[check_constraint] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=167) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_into_default_keyword] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_column_in] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_column_in_single] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_annotate_stats_select] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_bucket] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_struct_in] 
(batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_inline] (batchId=180) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_0] (batchId=179) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_insert_into_bucketed_table] (batchId=163) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[insert_into1] (batchId=119) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[parquet_vectorization_0] (batchId=116) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[pcr] (batchId=136) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorization_0] (batchId=147) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query91] (batchId=266) org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query91] (batchId=264) {noformat} Test results:
[jira] [Assigned] (HIVE-20357) Introduce initOrUpgradeSchema option to schema tool
[ https://issues.apache.org/jira/browse/HIVE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-20357: - > Introduce initOrUpgradeSchema option to schema tool > --- > > Key: HIVE-20357 > URL: https://issues.apache.org/jira/browse/HIVE-20357 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > > Currently, schematool has two options: initSchema/upgradeSchema. The user needs to > use a different command line for each action. However, from the schema > version stored in the db, we should be able to figure out whether an > init/upgrade is needed, and choose the right action automatically. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
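The auto-detection described above amounts to a simple decision on the schema version read from the metastore db. A minimal sketch, assuming a current target version and invented method names (this is not Hive's actual schematool API):

```java
// Hypothetical sketch of an initOrUpgradeSchema decision (not Hive's actual code).
public class SchemaToolSketch {
    static final String CURRENT_VERSION = "3.1.0"; // illustrative target version

    // Decide the action from the version recorded in the metastore db:
    // no schema recorded -> init; same version -> nothing to do; else -> upgrade.
    public static String decideAction(String dbVersion) {
        if (dbVersion == null) {
            return "initSchema";
        }
        if (dbVersion.equals(CURRENT_VERSION)) {
            return "none";
        }
        return "upgradeSchema";
    }
}
```

With this shape, a single initOrUpgradeSchema entry point would call decideAction once and dispatch to the existing initSchema/upgradeSchema code paths.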
[jira] [Updated] (HIVE-20355) Clean up parameter of HiveConnection.setSchema
[ https://issues.apache.org/jira/browse/HIVE-20355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-20355: -- Attachment: HIVE-20355.1.patch > Clean up parameter of HiveConnection.setSchema > -- > > Key: HIVE-20355 > URL: https://issues.apache.org/jira/browse/HIVE-20355 > Project: Hive > Issue Type: Bug > Components: JDBC >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20355.1.patch > > > Not immediately exploitable, as HS2 only allows one statement at a time. But in the > future, we may support multiple statements in HiveStatement, so it is better to > clean up the database parameter to avoid potential SQL injection. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
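One plausible way to "clean up" the parameter is to reject anything that is not a plain identifier before it is interpolated into a `use` statement. The regex, class, and method names below are illustrative assumptions, not the actual HIVE-20355 patch:

```java
import java.util.regex.Pattern;

// Hypothetical whitelist check for the schema argument of setSchema
// (a sketch; the actual HIVE-20355 patch may validate differently).
public class SchemaNameCheck {
    // Plain identifiers only: letters, digits, underscore.
    private static final Pattern VALID = Pattern.compile("[A-Za-z0-9_]+");

    public static String checkedSchema(String schema) {
        if (schema == null || !VALID.matcher(schema).matches()) {
            // Anything containing quotes, semicolons, whitespace, etc. is rejected,
            // so it can never smuggle a second statement into "use <schema>".
            throw new IllegalArgumentException("Invalid schema name: " + schema);
        }
        return schema;
    }
}
```

Whitelisting is preferable to escaping here because a schema name has no legitimate need for special characters.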
[jira] [Updated] (HIVE-20355) Clean up parameter of HiveConnection.setSchema
[ https://issues.apache.org/jira/browse/HIVE-20355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-20355: -- Status: Patch Available (was: Open) > Clean up parameter of HiveConnection.setSchema > -- > > Key: HIVE-20355 > URL: https://issues.apache.org/jira/browse/HIVE-20355 > Project: Hive > Issue Type: Bug > Components: JDBC >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-20355.1.patch > > > Not immediately exploitable, as HS2 only allows one statement at a time. But in the > future, we may support multiple statements in HiveStatement, so it is better to > clean up the database parameter to avoid potential SQL injection. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lu Li updated HIVE-20225: - Attachment: HIVE-20225.11.patch > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.11.patch, HIVE-20225.2.patch, HIVE-20225.3.patch, > HIVE-20225.4.patch, HIVE-20225.5-branch-2.patch, HIVE-20225.6.patch, > HIVE-20225.7.patch, HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lu Li updated HIVE-20225: - Status: Patch Available (was: In Progress) > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.11.patch, HIVE-20225.2.patch, HIVE-20225.3.patch, > HIVE-20225.4.patch, HIVE-20225.5-branch-2.patch, HIVE-20225.6.patch, > HIVE-20225.7.patch, HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20355) Clean up parameter of HiveConnection.setSchema
[ https://issues.apache.org/jira/browse/HIVE-20355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-20355: - > Clean up parameter of HiveConnection.setSchema > -- > > Key: HIVE-20355 > URL: https://issues.apache.org/jira/browse/HIVE-20355 > Project: Hive > Issue Type: Bug > Components: JDBC >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > > Not immediately exploitable, as HS2 only allows one statement at a time. But in the > future, we may support multiple statements in HiveStatement, so it is better to > clean up the database parameter to avoid potential SQL injection. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20352) Vectorization: Support grouping function
[ https://issues.apache.org/jira/browse/HIVE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20352: Status: Patch Available (was: Open) > Vectorization: Support grouping function > > > Key: HIVE-20352 > URL: https://issues.apache.org/jira/browse/HIVE-20352 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20352.01.patch > > > Support native vectorization for grouping function (part of Grouping Sets) so > we don't need to use VectorUDFAdaptor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lu Li updated HIVE-20225: - Status: In Progress (was: Patch Available) > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.2.patch, HIVE-20225.3.patch, HIVE-20225.4.patch, > HIVE-20225.5-branch-2.patch, HIVE-20225.6.patch, HIVE-20225.7.patch, > HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20352) Vectorization: Support grouping function
[ https://issues.apache.org/jira/browse/HIVE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20352: Attachment: HIVE-20352.01.patch > Vectorization: Support grouping function > > > Key: HIVE-20352 > URL: https://issues.apache.org/jira/browse/HIVE-20352 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20352.01.patch > > > Support native vectorization for grouping function (part of Grouping Sets) so > we don't need to use VectorUDFAdaptor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-20225 started by Lu Li. > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.2.patch, HIVE-20225.3.patch, HIVE-20225.4.patch, > HIVE-20225.5-branch-2.patch, HIVE-20225.6.patch, HIVE-20225.7.patch, > HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lu Li updated HIVE-20225: - Status: Patch Available (was: In Progress) > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.2.patch, HIVE-20225.3.patch, HIVE-20225.4.patch, > HIVE-20225.5-branch-2.patch, HIVE-20225.6.patch, HIVE-20225.7.patch, > HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lu Li updated HIVE-20225: - Attachment: HIVE-20225.10.patch > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.10.patch, > HIVE-20225.2.patch, HIVE-20225.3.patch, HIVE-20225.4.patch, > HIVE-20225.5-branch-2.patch, HIVE-20225.6.patch, HIVE-20225.7.patch, > HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lu Li updated HIVE-20225: - Status: In Progress (was: Patch Available) > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.2.patch, > HIVE-20225.3.patch, HIVE-20225.4.patch, HIVE-20225.5-branch-2.patch, > HIVE-20225.6.patch, HIVE-20225.7.patch, HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work stopped] (HIVE-20225) SerDe to support Teradata Binary Format
[ https://issues.apache.org/jira/browse/HIVE-20225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-20225 stopped by Lu Li. > SerDe to support Teradata Binary Format > --- > > Key: HIVE-20225 > URL: https://issues.apache.org/jira/browse/HIVE-20225 > Project: Hive > Issue Type: New Feature > Components: Serializers/Deserializers >Reporter: Lu Li >Assignee: Lu Li >Priority: Major > Attachments: HIVE-20225.1.patch, HIVE-20225.2.patch, > HIVE-20225.3.patch, HIVE-20225.4.patch, HIVE-20225.5-branch-2.patch, > HIVE-20225.6.patch, HIVE-20225.7.patch, HIVE-20225.8.patch, HIVE-20225.9.patch > > > When using TPT/BTEQ to export/import data from Teradata, Teradata will > generate/require binary files based on the schema. > A customized SerDe is needed to directly read these files from Hive > or write these files to load back to TD. > {code:java} > CREATE EXTERNAL TABLE `TABLE1`( > ...) > PARTITIONED BY ( > ...) > ROW FORMAT SERDE > 'org.apache.hadoop.hive.contrib.serde2.TeradataBinarySerde' > STORED AS INPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileInputFormat' > OUTPUTFORMAT > > 'org.apache.hadoop.hive.contrib.fileformat.teradata.TeradataBinaryFileOutputFormat' > LOCATION ...; > SELECT * FROM `TABLE1`;{code} > Problem Statement: > Right now the fastest way to export/import data from Teradata is using TPT. > However, Hive cannot directly utilize/generate this binary format > because it doesn't have a SerDe for these files. > Result: > Provided with the SerDe, Hive can operate on/generate the exported Teradata > Binary Format files transparently -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20351) GenericUDFNamedStruct should constant fold at compile time
[ https://issues.apache.org/jira/browse/HIVE-20351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575198#comment-16575198 ] Hive QA commented on HIVE-20351: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 11s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 2 new + 99 unchanged - 1 fixed = 101 total (was 100) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13128/dev-support/hive-personality.sh | | git revision | master / 873d31f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13128/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13128/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > GenericUDFNamedStruct should constant fold at compile time > -- > > Key: HIVE-20351 > URL: https://issues.apache.org/jira/browse/HIVE-20351 > Project: Hive > Issue Type: Bug >Reporter: Mykhailo Kysliuk >Assignee: Mykhailo Kysliuk >Priority: Minor > Attachments: HIVE-20351.1.patch > > > Reproduced at hive-3.0. > When we run hive query: > {code:java} > select named_struct('Total','Total') from test; > {code} > We could see the ERROR at hiveserver logs: > {code:java} > 2018-05-25T15:18:13,182 ERROR [main] optimizer.ConstantPropagateProcFactory: > Unable to evaluate > org.apache.hadoop.hive.ql.udf.generic.GenericUDFNamedStruct@a0bf272. 
Return > value unrecoginizable. > {code} > This error is harmless because all results are correct. But named_struct > constant values should be processed correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20351) GenericUDFNamedStruct should constant fold at compile time
[ https://issues.apache.org/jira/browse/HIVE-20351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575190#comment-16575190 ] Mykhailo Kysliuk commented on HIVE-20351: - [~gopalv] Could you please provide some examples of queries to run. This code executes fine with MR engine. What things do I have to check? > GenericUDFNamedStruct should constant fold at compile time > -- > > Key: HIVE-20351 > URL: https://issues.apache.org/jira/browse/HIVE-20351 > Project: Hive > Issue Type: Bug >Reporter: Mykhailo Kysliuk >Assignee: Mykhailo Kysliuk >Priority: Minor > Attachments: HIVE-20351.1.patch > > > Reproduced at hive-3.0. > When we run hive query: > {code:java} > select named_struct('Total','Total') from test; > {code} > We could see the ERROR at hiveserver logs: > {code:java} > 2018-05-25T15:18:13,182 ERROR [main] optimizer.ConstantPropagateProcFactory: > Unable to evaluate > org.apache.hadoop.hive.ql.udf.generic.GenericUDFNamedStruct@a0bf272. Return > value unrecoginizable. > {code} > This error is harmless because all results are correct. But named_struct > constant values should be processed correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20354) Semijoin hints dont work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-20354: -- Component/s: Transactions > Semijoin hints dont work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.1.patch > > > When a merge statement is rewritten, it ignores any comment in the query, which > may include hints like semijoin. > If it does, the hints should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20354) Semijoin hints dont work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-20354: -- Attachment: HIVE-20354.1.patch > Semijoin hints dont work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.1.patch > > > When a merge statement is rewritten, it ignores any comment in the query, which > may include hints like semijoin. > If it does, the hints should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20354) Semijoin hints dont work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-20354: -- Status: Patch Available (was: Open) > Semijoin hints dont work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-20354.1.patch > > > When a merge statement is rewritten, it ignores any comment in the query, which > may include hints like semijoin. > If it does, the hints should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20354) Semijoin hints dont work with merge statements
[ https://issues.apache.org/jira/browse/HIVE-20354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-20354: - > Semijoin hints dont work with merge statements > -- > > Key: HIVE-20354 > URL: https://issues.apache.org/jira/browse/HIVE-20354 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > > When a merge statement is rewritten, it ignores any comment in the query, which > may include hints like semijoin. > If it does, the hints should not be ignored. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20351) GenericUDFNamedStruct should constant fold at compile time
[ https://issues.apache.org/jira/browse/HIVE-20351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575177#comment-16575177 ] Gopal V commented on HIVE-20351: For some reason, I left a comment about named_structs in the original commit. {code} && oi instanceof StandardConstantStructObjectInspector) { // do not fold named_struct, only struct() ConstantObjectInspector coi = (ConstantObjectInspector) oi; {code} Can you try running this is in a query (i.e submit via Tez and see) - if I'm not wrong some bit in Kryo breaks. > GenericUDFNamedStruct should constant fold at compile time > -- > > Key: HIVE-20351 > URL: https://issues.apache.org/jira/browse/HIVE-20351 > Project: Hive > Issue Type: Bug >Reporter: Mykhailo Kysliuk >Assignee: Mykhailo Kysliuk >Priority: Minor > Attachments: HIVE-20351.1.patch > > > Reproduced at hive-3.0. > When we run hive query: > {code:java} > select named_struct('Total','Total') from test; > {code} > We could see the ERROR at hiveserver logs: > {code:java} > 2018-05-25T15:18:13,182 ERROR [main] optimizer.ConstantPropagateProcFactory: > Unable to evaluate > org.apache.hadoop.hive.ql.udf.generic.GenericUDFNamedStruct@a0bf272. Return > value unrecoginizable. > {code} > This error is harmless because all results are correct. But named_struct > constant values should be processed correctly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
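The folding under discussion replaces a deterministic call whose arguments are all compile-time constants with its precomputed result. As a toy, Hive-independent sketch of that idea (every class and method name here is invented; this is not Hive's ObjectInspector machinery):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of compile-time constant folding (not Hive code):
// a call whose arguments are all constants is evaluated once during
// compilation and replaced by a single constant node.
public class FoldSketch {
    static abstract class Expr {}
    static class Const extends Expr {
        final Object value;
        Const(Object value) { this.value = value; }
    }
    static class Call extends Expr {
        final String fn;
        final List<Expr> args;
        Call(String fn, List<Expr> args) { this.fn = fn; this.args = args; }
    }

    public static Expr fold(Expr e) {
        if (!(e instanceof Call)) {
            return e;
        }
        Call c = (Call) e;
        for (Expr a : c.args) {
            if (!(a instanceof Const)) {
                return e; // some argument only known at runtime: cannot fold
            }
        }
        if (c.fn.equals("named_struct")) {
            // evaluate once: gather the constant field names/values into one constant
            List<Object> vals = new ArrayList<>();
            for (Expr a : c.args) {
                vals.add(((Const) a).value);
            }
            return new Const(vals);
        }
        return e; // unknown function: leave unfolded
    }
}
```

In Hive terms, the fix would make named_struct('Total','Total') fold just as struct('Total','Total') already does, instead of tripping the "Unable to evaluate" path in ConstantPropagateProcFactory.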
[jira] [Updated] (HIVE-20351) GenericUDFNamedStruct should constant fold at compile time
[ https://issues.apache.org/jira/browse/HIVE-20351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-20351: --- Description: Reproduced at hive-3.0. When we run hive query: {code:java} select named_struct('Total','Total') from test; {code} We could see the ERROR at hiveserver logs: {code:java} 2018-05-25T15:18:13,182 ERROR [main] optimizer.ConstantPropagateProcFactory: Unable to evaluate org.apache.hadoop.hive.ql.udf.generic.GenericUDFNamedStruct@a0bf272. Return value unrecoginizable. {code} This error is harmless because all results are correct. But named_struct constant values should be processed correctly. was: Reproduced at hive-3.0. When we run hive query: {code:java} select named_struct('Total','Total') from test; {code} We could see the ERROR at hiveserver logs: {code:java} 2018-05-25T15:18:13,182 ERROR [main] optimizer.ConstantPropagateProcFactory: Unable to evaluate org.apache.hadoop.hive.ql.udf.generic.GenericUDFNamedStruct@a0bf272. Return value unrecoginizable. {code} This error is not harmless because all results are correct. But named_struct constant values should be processed correctly. > GenericUDFNamedStruct should constant fold at compile time > -- > > Key: HIVE-20351 > URL: https://issues.apache.org/jira/browse/HIVE-20351 > Project: Hive > Issue Type: Bug >Reporter: Mykhailo Kysliuk >Assignee: Mykhailo Kysliuk >Priority: Minor > Attachments: HIVE-20351.1.patch > > > Reproduced at hive-3.0. > When we run hive query: > {code:java} > select named_struct('Total','Total') from test; > {code} > We could see the ERROR at hiveserver logs: > {code:java} > 2018-05-25T15:18:13,182 ERROR [main] optimizer.ConstantPropagateProcFactory: > Unable to evaluate > org.apache.hadoop.hive.ql.udf.generic.GenericUDFNamedStruct@a0bf272. Return > value unrecoginizable. > {code} > This error is harmless because all results are correct. But named_struct > constant values should be processed correctly. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
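The guard quoted in Gopal V's comment above deliberately skips constant structs when folding. A toy model of that instanceof-based check, with stand-in types — none of these are Hive's real ObjectInspector classes, only an illustration of the exclusion pattern:

```java
public class FoldGuard {
    interface ObjectInspector {}
    interface ConstantObjectInspector extends ObjectInspector {}
    // Stand-in for StandardConstantStructObjectInspector in the quoted snippet.
    static class ConstantStructOI implements ConstantObjectInspector {}
    static class ConstantPrimitiveOI implements ConstantObjectInspector {}

    // Mirrors the quoted guard: constants are foldable, but constant
    // structs (e.g. named_struct results) are deliberately NOT folded.
    static boolean shouldFold(ObjectInspector oi) {
        return oi instanceof ConstantObjectInspector
            && !(oi instanceof ConstantStructOI);
    }
}
```

The issue is essentially about relaxing this exclusion so named_struct constants fold at compile time as well.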
[jira] [Updated] (HIVE-18244) CachedStore: Fix UT when CachedStore is enabled
[ https://issues.apache.org/jira/browse/HIVE-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18244: Status: Open (was: Patch Available) > CachedStore: Fix UT when CachedStore is enabled > --- > > Key: HIVE-18244 > URL: https://issues.apache.org/jira/browse/HIVE-18244 > Project: Hive > Issue Type: Bug >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-18244.1.patch, HIVE-18244.2.patch, > HIVE-18244.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18244) CachedStore: Fix UT when CachedStore is enabled
[ https://issues.apache.org/jira/browse/HIVE-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18244: Status: Patch Available (was: Open) > CachedStore: Fix UT when CachedStore is enabled > --- > > Key: HIVE-18244 > URL: https://issues.apache.org/jira/browse/HIVE-18244 > Project: Hive > Issue Type: Bug >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-18244.1.patch, HIVE-18244.2.patch, > HIVE-18244.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18244) CachedStore: Fix UT when CachedStore is enabled
[ https://issues.apache.org/jira/browse/HIVE-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575162#comment-16575162 ] Vaibhav Gumashta commented on HIVE-18244: - Removed HIVE-20337 from the patch since that's already committed > CachedStore: Fix UT when CachedStore is enabled > --- > > Key: HIVE-18244 > URL: https://issues.apache.org/jira/browse/HIVE-18244 > Project: Hive > Issue Type: Bug >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-18244.1.patch, HIVE-18244.2.patch, > HIVE-18244.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18244) CachedStore: Fix UT when CachedStore is enabled
[ https://issues.apache.org/jira/browse/HIVE-18244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-18244: Attachment: HIVE-18244.2.patch > CachedStore: Fix UT when CachedStore is enabled > --- > > Key: HIVE-18244 > URL: https://issues.apache.org/jira/browse/HIVE-18244 > Project: Hive > Issue Type: Bug >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-18244.1.patch, HIVE-18244.2.patch, > HIVE-18244.2.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20150) TopNKey pushdown
[ https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575144#comment-16575144 ] Hive QA commented on HIVE-20150: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12934989/HIVE-20150.1.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 14872 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[topnkey] (batchId=42) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_topnkey] (batchId=45) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_struct_type_vectorization] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_complex_types_vectorization] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_map_type_vectorization] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[parquet_struct_type_vectorization] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[load_dyn_part1] (batchId=179) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[load_dyn_part2] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[load_dyn_part3] (batchId=159) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/13127/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13127/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13127/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} 
This message is automatically generated. ATTACHMENT ID: 12934989 - PreCommit-HIVE-Build > TopNKey pushdown > > > Key: HIVE-20150 > URL: https://issues.apache.org/jira/browse/HIVE-20150 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Affects Versions: 4.0.0 >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20150.1.patch > > > TopNKey operator is implemented in HIVE-17896, but it needs more work in > pushdown implementation. So this issue covers TopNKey pushdown implementation > with proper tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575131#comment-16575131 ] Nishant Bangarwa commented on HIVE-20353: - +cc [~ashutoshc] please review. > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20353.patch > > > When we have multiple druid coordinators/overlords and hive tries to connect > to a passive one, it will get a redirect. Currently the http client in druid > storage handler does not follow redirects. We need to check if there is a > redirect and follow that for druid overlord/coordinator -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20353: Attachment: HIVE-20353.patch > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20353.patch > > > When we have multiple druid coordinators/overlords and hive tries to connect > to a passive one, it will get a redirect. Currently the http client in druid > storage handler does not follow redirects. We need to check if there is a > redirect and follow that for druid overlord/coordinator -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa updated HIVE-20353: Status: Patch Available (was: Open) > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-20353.patch > > > When we have multiple druid coordinators/overlords and hive tries to connect > to a passive one, it will get a redirect. Currently the http client in druid > storage handler does not follow redirects. We need to check if there is a > redirect and follow that for druid overlord/coordinator -- This message was sent by Atlassian JIRA (v7.6.3#76005)
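The behavior requested for the Druid HTTP client above can be sketched as a small redirect check: on a 301/302/307 response from a passive overlord/coordinator, retry against the URL in the Location header. A hypothetical, self-contained sketch — not the actual HIVE-20353 patch, and the class and method names are invented:

```java
import java.util.Map;

public class RedirectCheck {
    // HTTP status codes indicating the request should be retried at the
    // Location header's URL (i.e., the currently active overlord/coordinator).
    public static boolean isRedirect(int status) {
        return status == 301 || status == 302 || status == 307;
    }

    // Returns the URL to retry against, or null when no redirect applies.
    public static String redirectTarget(int status, Map<String, String> headers) {
        if (!isRedirect(status)) {
            return null;
        }
        return headers.get("Location");
    }
}
```

A caller would loop: issue the request, and while `redirectTarget` returns non-null (up to some retry limit), re-issue against the returned URL.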
[jira] [Updated] (HIVE-20343) Hive 3: CTAS does not respect transactional_properties
[ https://issues.apache.org/jira/browse/HIVE-20343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-20343: -- Component/s: (was: Hive) Transactions > Hive 3: CTAS does not respect transactional_properties > -- > > Key: HIVE-20343 > URL: https://issues.apache.org/jira/browse/HIVE-20343 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 > Environment: hive-3 >Reporter: Rajkumar Singh >Priority: Major > > Steps to reproduce: > {code} > create table ctasexampleinsertonly stored as orc TBLPROPERTIES > ("transactional_properties"="insert_only") as select * from testtable limit 1; > {code} > look for transactional_properties, which is 'default', not the expected > "insert_only": > {code}
> describe formatted ctasexampleinsertonly
> | col_name                     | data_type                                                              | comment    |
> | name                         | varchar(8)                                                             |            |
> | time                         | double                                                                 |            |
> | # Detailed Table Information | NULL                                                                   | NULL       |
> | Database:                    | default                                                                | NULL       |
> | OwnerType:                   | USER                                                                   | NULL       |
> | Owner:                       | hive                                                                   | NULL       |
> | CreateTime:                  | Wed Aug 08 21:35:15 UTC 2018                                           | NULL       |
> | LastAccessTime:              | UNKNOWN                                                                | NULL       |
> | Retention:                   | 0                                                                      | NULL       |
> | Location:                    | hdfs://xx:8020/warehouse/tablespace/managed/hive/ctasexampleinsertonly | NULL       |
> | Table Type:                  | MANAGED_TABLE                                                          | NULL       |
> | Table Parameters:            | NULL                                                                   | NULL       |
> |                              | COLUMN_STATS_ACCURATE                                                  | {}         |
> |                              | bucketing_version                                                      | 2          |
> |                              | numFiles                                                               | 1          |
> |                              | numRows                                                                | 1          |
> |                              | rawDataSize                                                            | 0          |
> |                              | totalSize                                                              | 754        |
> |                              | transactional                                                          | true       |
> |                              | transactional_properties                                               | default    |
> |                              | transient_lastDdlTime                                                  | 1533764115 |
> | # Storage Information        | NULL                                                                   | NULL       |
> | SerDe Library:               | org.apache.hadoop.hive.ql.io.orc.OrcSerde                              | NULL       |
> | InputFormat:                 | org.apache.hadoop.hive.ql.io.orc.OrcInputFormat                        | NULL       |
> | OutputFormat:                | org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat                       | NULL       |
> | Compressed:                  | No                                                                     | NULL       |
> | Num Buckets:                 | -1                                                                     | NULL       |
> | Bucket Columns:              | []                                                                     | NULL
[jira] [Assigned] (HIVE-20353) Follow redirects when hive connects to a passive druid overlord/coordinator
[ https://issues.apache.org/jira/browse/HIVE-20353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Bangarwa reassigned HIVE-20353: --- > Follow redirects when hive connects to a passive druid overlord/coordinator > --- > > Key: HIVE-20353 > URL: https://issues.apache.org/jira/browse/HIVE-20353 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > > When we have multiple druid coordinators/overlords and hive tries to connect > to a passive one, it will get a redirect. Currently the http client in druid > storage handler does not follow redirects. We need to check if there is a > redirect and follow that for druid overlord/coordinator -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19613) GenericUDTFGetSplits should handle fetch task with temp table rewrite
[ https://issues.apache.org/jira/browse/HIVE-19613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575126#comment-16575126 ] Jaume M commented on HIVE-19613: [~prasanth_j] I'm getting the same exception but [here|https://github.com/apache/hive/blob/873d31f33a061cd38be7de91b208987871fb612e/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFGetSplits.java#L321] (same condition but a few lines below). I think it makes sense that if it was reaching the fixed {{if}}, it will reach this one as well. > GenericUDTFGetSplits should handle fetch task with temp table rewrite > - > > Key: HIVE-19613 > URL: https://issues.apache.org/jira/browse/HIVE-19613 > Project: Hive > Issue Type: Bug >Affects Versions: 3.1.0 >Reporter: Eric Wohlstadter >Assignee: Prasanth Jayachandran >Priority: Major > Fix For: 3.1.0, 3.0.1, 4.0.0 > > Attachments: HIVE-19613.1.patch, HIVE-19613.2.patch, > HIVE-19613.3.patch > > > GenericUDTFGetSplits fails for fetch task only queries. Fetch task only > queries can be handled the same way as >1 task queries using temp tables. > {code:java} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Was expecting a > single TezTask. 
> at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.createPlanFragment(GenericUDTFGetSplits.java:262) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:201) > at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116) > at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:984) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:930) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:917) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) > at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:984) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:930) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:492) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:484) > at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:145) > ... 16 more{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19767) HiveServer2 should take hiveconf for non Hive properties
[ https://issues.apache.org/jira/browse/HIVE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575110#comment-16575110 ] Szehon Ho commented on HIVE-19767: -- OK, I attached another patch removing the (now) redundant code. > HiveServer2 should take hiveconf for non Hive properties > > > Key: HIVE-19767 > URL: https://issues.apache.org/jira/browse/HIVE-19767 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.2.2, 3.0.0, 2.3.2 >Reporter: Szehon Ho >Assignee: Szehon Ho >Priority: Major > Attachments: HIVE-19767.2.patch, HIVE-19767.patch > > > The --hiveconf command-line option works in HiveServer2 with properties in > HiveConf.java, but not so well with other properties (like mapred properties > or Spark properties to control the underlying execution engine, or custom > properties understood by custom listeners). > It is inconsistent with HiveCLI. > HiveCLI behavior: > {noformat} > ./bin/hive --hiveconf a=b > hive> set a; > a=b {noformat} > HiveServer2 behavior: > {noformat} > ./bin/hiveserver2 --hiveconf a=b > beeline> set a; > +-+ > | set | > +-+ > | a is undefined | > +-+{noformat} > Although it is possible to set up hive-site.xml or even mapred-site.xml to > fill in the relevant properties, it is more convenient when testing HS2 with > different configurations to be able to use --hiveconf to change them on the fly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19767) HiveServer2 should take hiveconf for non Hive properties
[ https://issues.apache.org/jira/browse/HIVE-19767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-19767: - Attachment: HIVE-19767.2.patch > HiveServer2 should take hiveconf for non Hive properties > > > Key: HIVE-19767 > URL: https://issues.apache.org/jira/browse/HIVE-19767 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.2.2, 3.0.0, 2.3.2 >Reporter: Szehon Ho >Assignee: Szehon Ho >Priority: Major > Attachments: HIVE-19767.2.patch, HIVE-19767.patch > > > The -hiveconf command line option works in HiveServer2 with properties in > HiveConf.java, but not so well with other properties (like mapred properties > or spark properties to control underlying execution engine, or custom > properties understood by custom listeners) > It is inconsistent with HiveCLI. > HiveCLI behavior: > {noformat} > ./bin/hive --hiveconf a=b > hive> set a; > a=b {noformat} > HiveServer2 behavior: > {noformat} > ./bin/hiveserver2 --hiveconf a=b > beeline> set a; > +-+ > | set | > +-+ > | a is undefined | > +-+{noformat} > Although it is possible to set up hive-site.xml or even mapred-site.xml to > fill in the relevant properties, it is more convenient when testing HS2 with > different configuration to be able to use --hiveconf to change on the fly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
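The HiveCLI behavior the issue above asks HS2 to match is essentially "accept any --hiveconf key=value pair, whether or not the key is a known HiveConf property". A minimal sketch of that argument parsing, with made-up class and method names — this is not the actual HiveServer2 startup code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HiveconfParser {
    // Collects every --hiveconf k=v pair from the command line, without
    // filtering against a list of known HiveConf properties -- the behavior
    // HiveCLI already has and HIVE-19767 asks HiveServer2 to adopt.
    public static Map<String, String> parse(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        for (int i = 0; i < args.length - 1; i++) {
            if ("--hiveconf".equals(args[i])) {
                String kv = args[++i];
                int eq = kv.indexOf('=');
                if (eq > 0) {
                    props.put(kv.substring(0, eq), kv.substring(eq + 1));
                }
            }
        }
        return props;
    }
}
```

With this approach, `--hiveconf a=b` yields `{a=b}` even though `a` is not declared in HiveConf.java, so `set a;` in beeline would show the value instead of "a is undefined".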
[jira] [Commented] (HIVE-20150) TopNKey pushdown
[ https://issues.apache.org/jira/browse/HIVE-20150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16575107#comment-16575107 ] Hive QA commented on HIVE-20150: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 9s{color} | {color:blue} ql in master has 2305 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 52 new + 44 unchanged - 0 fixed = 96 total (was 44) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 21s{color} | {color:red} ql generated 1 new + 2305 unchanged - 0 fixed = 2306 total (was 2305) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Switch statement found in org.apache.hadoop.hive.ql.optimizer.TopNKeyPushdownProcessor.pushdown(TopNKeyOperator) where default case is missing At TopNKeyPushdownProcessor.java:where default case is missing At TopNKeyPushdownProcessor.java:[lines 94-108] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13127/dev-support/hive-personality.sh | | git revision | master / 873d31f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13127/yetus/diff-checkstyle-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13127/yetus/new-findbugs-ql.html | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13127/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> TopNKey pushdown > > > Key: HIVE-20150 > URL: https://issues.apache.org/jira/browse/HIVE-20150 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer >Affects Versions: 4.0.0 >Reporter: Teddy Choi >Assignee: Teddy Choi >Priority: Major > Labels: pull-request-available > Attachments: HIVE-20150.1.patch > > > TopNKey operator is implemented in HIVE-17896, but it needs more work in > pushdown implementation. So this issue covers TopNKey pushdown implementation > with proper tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
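The new FindBugs warning flagged in the QA report above (a switch statement with a missing default case in TopNKeyPushdownProcessor) is typically resolved by giving every switch an explicit default branch, even one that only documents "no pushdown possible". An illustrative sketch with a made-up enum and method — not the actual Hive operator types or pushdown logic:

```java
public class OperatorSwitch {
    enum OpType { SELECT, FILTER, GROUP_BY }

    // Every case is handled and the default branch makes the fallback
    // explicit, which silences the FindBugs missing-default warning.
    public static String classify(OpType t) {
        switch (t) {
            case SELECT:
                return "pushdown";
            case FILTER:
                return "pushdown";
            default:
                return "stop";
        }
    }
}
```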
[jira] [Assigned] (HIVE-20352) Vectorization: Support grouping function
[ https://issues.apache.org/jira/browse/HIVE-20352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-20352: --- > Vectorization: Support grouping function > > > Key: HIVE-20352 > URL: https://issues.apache.org/jira/browse/HIVE-20352 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > > Support native vectorization for grouping function (part of Grouping Sets) so > we don't need to use VectorUDFAdaptor. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20339) Vectorization: Lift unneeded restriction causing some PTF with RANK not to be vectorized
[ https://issues.apache.org/jira/browse/HIVE-20339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20339: Status: Patch Available (was: In Progress) > Vectorization: Lift unneeded restriction causing some PTF with RANK not to be > vectorized > > > Key: HIVE-20339 > URL: https://issues.apache.org/jira/browse/HIVE-20339 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20339.01.patch, HIVE-20339.02.patch, > HIVE-20339.03.patch > > > Unnecessary: "PTF operator: More than 1 argument expression of aggregation > function rank" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-20339) Vectorization: Lift unneeded restriction causing some PTF with RANK not to be vectorized
[ https://issues.apache.org/jira/browse/HIVE-20339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-20339: Attachment: HIVE-20339.03.patch > Vectorization: Lift unneeded restriction causing some PTF with RANK not to be > vectorized > > > Key: HIVE-20339 > URL: https://issues.apache.org/jira/browse/HIVE-20339 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-20339.01.patch, HIVE-20339.02.patch, > HIVE-20339.03.patch > > > Unnecessary: "PTF operator: More than 1 argument expression of aggregation > function rank" -- This message was sent by Atlassian JIRA (v7.6.3#76005)