[jira] [Commented] (HIVE-1642) Convert join queries to map-join based on size of table/row
[ https://issues.apache.org/jira/browse/HIVE-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291024#comment-14291024 ] Lefty Leverenz commented on HIVE-1642: -- Doc done: The wiki documents all the configuration parameters created in this issue (some of them renamed).
* [hive.smalltable.filesize or hive.mapjoin.smalltable.filesize | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.smalltable.filesizeorhive.mapjoin.smalltable.filesize]
* [hive.auto.convert.join | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.auto.convert.join]
* [hive.hashtable.initialCapacity | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.hashtable.initialCapacity]
* [hive.hashtable.loadfactor | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.hashtable.loadfactor]
* [hive.mapjoin.localtask.max.memory.usage | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.mapjoin.localtask.max.memory.usage]
* [hive.mapjoin.check.memory.rows | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.mapjoin.check.memory.rows]
* [hive.debug.localtask | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.debug.localtask]
A description is still needed for *hive.debug.localtask*.
Convert join queries to map-join based on size of table/row --- Key: HIVE-1642 URL: https://issues.apache.org/jira/browse/HIVE-1642 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Namit Jain Assignee: Liyin Tang Fix For: 0.7.0 Attachments: hive-1642_10.patch, hive-1642_11.patch, hive-1642_5.patch, hive-1642_6.patch, hive-1642_7.patch, hive-1642_9.patch, hive_1642_1.patch, hive_1642_2.patch, hive_1642_4.patch Based on the number of rows and the size of each table, Hive should automatically be able to convert a join into a map-join. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
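The configuration parameters documented above control this behavior. As a rough sketch of how they are used (property names as documented on the Hive wiki; the table names and the threshold value below are illustrative, not taken from this issue):

```sql
-- Enable automatic conversion of common joins to map-joins.
SET hive.auto.convert.join=true;
-- Tables smaller than this many bytes become candidates for the
-- hash-table (small) side of a map-join; 25 MB is an example value.
SET hive.mapjoin.smalltable.filesize=25000000;

-- With the settings above, no MAPJOIN hint is needed: the optimizer
-- converts the join when small_table fits under the threshold.
SELECT b.key, s.value
FROM big_table b
JOIN small_table s ON b.key = s.key;
```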
[jira] [Resolved] (HIVE-9458) Allow database prefix in GRANT ON TABLE statement.
[ https://issues.apache.org/jira/browse/HIVE-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johndee Burks resolved HIVE-9458. - Resolution: Invalid. This is supposed to be a Sentry issue. Allow database prefix in GRANT ON TABLE statement. --- Key: HIVE-9458 URL: https://issues.apache.org/jira/browse/HIVE-9458 Project: Hive Issue Type: Bug Reporter: Johndee Burks Priority: Minor Currently you get the following error when you run the following command:
{code}
0: jdbc:hive2://jrepo2-1.ent.cloudera.com:100 grant select on table testdatabase.j1 to role jrole;
Error: Error while compiling statement: FAILED: ParseException line 1:29 mismatched input '.' expecting TO near 'testdatabase' in grant privileges (state=42000,code=4)
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9458) Allow database prefix in GRANT ON TABLE statement.
Johndee Burks created HIVE-9458: --- Summary: Allow database prefix in GRANT ON TABLE statement. Key: HIVE-9458 URL: https://issues.apache.org/jira/browse/HIVE-9458 Project: Hive Issue Type: Bug Reporter: Johndee Burks Priority: Minor Currently you get the following error when you run the following command:
{code}
0: jdbc:hive2://jrepo2-1.ent.cloudera.com:100 grant select on table testdatabase.j1 to role jrole;
Error: Error while compiling statement: FAILED: ParseException line 1:29 mismatched input '.' expecting TO near 'testdatabase' in grant privileges (state=42000,code=4)
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
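Until the grammar accepts a database-qualified table name in GRANT, a common workaround (a sketch using the database, table, and role names from the report, and assuming the authorization layer in use supports these statements) is to switch the current database first:

```sql
USE testdatabase;
GRANT SELECT ON TABLE j1 TO ROLE jrole;
```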
[jira] [Updated] (HIVE-9450) [Parquet] Check all data types work for Parquet in Group By operator
[ https://issues.apache.org/jira/browse/HIVE-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dong Chen updated HIVE-9450: Attachment: HIVE-9450.patch Reattaching the patch to see the test result. [Parquet] Check all data types work for Parquet in Group By operator Key: HIVE-9450 URL: https://issues.apache.org/jira/browse/HIVE-9450 Project: Hive Issue Type: Sub-task Reporter: Dong Chen Assignee: Dong Chen Attachments: HIVE-9450.patch, HIVE-9450.patch Check all data types work for Parquet in Group By operator.
1. Add test cases for data types.
2. Fix the ClassCastException bug for CHAR/VARCHAR used in group by for Parquet.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9445) Revert HIVE-5700 - enforce single date format for partition column storage
[ https://issues.apache.org/jira/browse/HIVE-9445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291147#comment-14291147 ] Xuefu Zhang commented on HIVE-9445: --- +1 Revert HIVE-5700 - enforce single date format for partition column storage -- Key: HIVE-9445 URL: https://issues.apache.org/jira/browse/HIVE-9445 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0, 0.14.0, 0.13.1, 0.15.0, 0.14.1 Reporter: Brock Noland Assignee: Brock Noland Priority: Blocker Attachments: HIVE-9445.1.patch, HIVE-9445.1.patch HIVE-5700 has the following issues:
* HIVE-8730 - fails mysql upgrades
* Does not upgrade all metadata, e.g. {{PARTITIONS.PART_NAME}}. See comments in HIVE-5700.
* Completely corrupts postgres, see below.
With a postgres metastore on 0.12, I executed the following:
{noformat}
CREATE TABLE HIVE5700_DATE_PARTED (line string) PARTITIONED BY (ddate date);
CREATE TABLE HIVE5700_STRING_PARTED (line string) PARTITIONED BY (ddate string);
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='NOT_DATE');
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='20150121');
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='20150122');
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='2015-01-23');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='NOT_DATE');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='20150121');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='20150122');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='2015-01-23');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='NOT_DATE');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='20150121');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='20150122');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='2015-01-23');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='NOT_DATE');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='20150121');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='20150122');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='2015-01-23');
hive> show partitions HIVE5700_DATE_PARTED;
OK
ddate=20150121
ddate=20150122
ddate=2015-01-23
ddate=NOT_DATE
Time taken: 0.052 seconds, Fetched: 4 row(s)
hive> show partitions HIVE5700_STRING_PARTED;
OK
ddate=20150121
ddate=20150122
ddate=2015-01-23
ddate=NOT_DATE
Time taken: 0.051 seconds, Fetched: 4 row(s)
{noformat}
I then took a dump of the database named {{postgres-pre-upgrade.sql}} and the data in the dump looks good:
{noformat}
[root@hive5700-1-1 ~]# egrep -A9 '^COPY PARTITIONS|^COPY PARTITION_KEY_VALS' postgres-pre-upgrade.sql
COPY PARTITIONS (PART_ID, CREATE_TIME, LAST_ACCESS_TIME, PART_NAME, SD_ID, TBL_ID) FROM stdin;
3   1421943647  0  ddate=NOT_DATE    6   2
4   1421943647  0  ddate=20150121    7   2
5   1421943648  0  ddate=20150122    8   2
6   1421943664  0  ddate=NOT_DATE    9   3
7   1421943664  0  ddate=20150121    10  3
8   1421943665  0  ddate=20150122    11  3
9   1421943694  0  ddate=2015-01-23  12  2
10  1421943695  0  ddate=2015-01-23  13  3
\.
--
COPY PARTITION_KEY_VALS (PART_ID, PART_KEY_VAL, INTEGER_IDX) FROM stdin;
3   NOT_DATE    0
4   20150121    0
5   20150122    0
6   NOT_DATE    0
7   20150121    0
8   20150122    0
9   2015-01-23  0
10  2015-01-23  0
\.
{noformat}
I then upgraded to 0.13 and subsequently upgraded the MS with the following command: {{schematool -dbType postgres -upgradeSchema -verbose}}
The file {{postgres-post-upgrade.sql}} is the post-upgrade db dump. As you can see the data is completely corrupt.
{noformat}
[root@hive5700-1-1 ~]# egrep -A9 '^COPY PARTITIONS|^COPY PARTITION_KEY_VALS' postgres-post-upgrade.sql
COPY PARTITIONS (PART_ID, CREATE_TIME, LAST_ACCESS_TIME, PART_NAME, SD_ID, TBL_ID) FROM stdin;
3   1421943647  0  ddate=NOT_DATE  6   2
4   1421943647  0  ddate=20150121  7   2
5   1421943648  0  ddate=20150122  8   2
6   1421943664  0  ddate=NOT_DATE  9   3
7   1421943664  0  ddate=20150121  10  3
8   1421943665  0  ddate=20150122  11
[jira] [Commented] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
[ https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291160#comment-14291160 ] Hive QA commented on HIVE-9327: --- {color:red}Overall{color}: -1 at least one test failed
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12694424/HIVE-9327.07.patch
{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7366 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_cast_constant
{noformat}
Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2517/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2517/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2517/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}
This message is automatically generated. ATTACHMENT ID: 12694424 - PreCommit-HIVE-TRUNK-Build CBO (Calcite Return Path): Removing Row Resolvers from ParseContext --- Key: HIVE-9327 URL: https://issues.apache.org/jira/browse/HIVE-9327 Project: Hive Issue Type: Sub-task Components: CBO Reporter: Jesus Camacho Rodriguez Assignee: Jesus Camacho Rodriguez Fix For: 0.15.0 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, HIVE-9327.03.patch, HIVE-9327.04.patch, HIVE-9327.05.patch, HIVE-9327.06.patch, HIVE-9327.07.patch, HIVE-9327.patch CLEAR LIBRARY CACHE ParseContext includes a map of Operator to RowResolver (OpParseContext). It would be ideal to remove this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9454) Test failures due to new Calcite version
[ https://issues.apache.org/jira/browse/HIVE-9454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291156#comment-14291156 ] Laljo John Pullokkaran commented on HIVE-9454: -- [~julianhyde] there seem to be other exceptions (not just the Derived Table removal), like subquery_in_having.q. I haven't debugged the root cause. Test failures due to new Calcite version Key: HIVE-9454 URL: https://issues.apache.org/jira/browse/HIVE-9454 Project: Hive Issue Type: Bug Reporter: Brock Noland Attachments: HIVE-9454.1.patch A bunch of failures have started appearing in patches which seem unrelated. I am thinking we've picked up a new version of Calcite. E.g.: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2488/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_auto_join12/
{noformat}
Running: diff -a /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../itests/qtest/target/qfile-results/clientpositive/auto_join12.q.out /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../ql/src/test/results/clientpositive/auto_join12.q.out
32c32
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
35c35
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
39c39
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
54c54
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9039) Support Union Distinct
[ https://issues.apache.org/jira/browse/HIVE-9039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291291#comment-14291291 ] Pengcheng Xiong commented on HIVE-9039: --- [~jpullokkaran], I checked the failed tests. They are not related, and they passed on my laptop. The patch is ready to go. Thanks. Support Union Distinct -- Key: HIVE-9039 URL: https://issues.apache.org/jira/browse/HIVE-9039 Project: Hive Issue Type: New Feature Reporter: Pengcheng Xiong Assignee: Pengcheng Xiong Attachments: HIVE-9039.01.patch, HIVE-9039.02.patch, HIVE-9039.03.patch, HIVE-9039.04.patch, HIVE-9039.05.patch, HIVE-9039.06.patch, HIVE-9039.07.patch, HIVE-9039.08.patch, HIVE-9039.09.patch, HIVE-9039.10.patch, HIVE-9039.11.patch, HIVE-9039.12.patch, HIVE-9039.13.patch, HIVE-9039.14.patch, HIVE-9039.15.patch, HIVE-9039.16.patch, HIVE-9039.17.patch, HIVE-9039.18.patch, HIVE-9039.19.patch, HIVE-9039.20.patch, HIVE-9039.21.patch CLEAR LIBRARY CACHE The current version (Hive 0.14) does not support union (or union distinct); it only supports union all. In this patch, we add this new feature by rewriting union distinct to union all followed by group by. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
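The rewrite described above can be sketched as follows, for hypothetical tables t1 and t2 with columns x and y (names invented for illustration):

```sql
-- A UNION (i.e. union distinct) query:
SELECT x, y FROM t1
UNION
SELECT x, y FROM t2;

-- is rewritten to UNION ALL followed by a GROUP BY over all
-- select columns, which removes the duplicate rows:
SELECT x, y
FROM (
  SELECT x, y FROM t1
  UNION ALL
  SELECT x, y FROM t2
) u
GROUP BY x, y;
```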
[jira] [Updated] (HIVE-9445) Revert HIVE-5700 - enforce single date format for partition column storage
[ https://issues.apache.org/jira/browse/HIVE-9445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9445: --- Resolution: Fixed Fix Version/s: 0.15.0 Status: Resolved (was: Patch Available) Thank you for the review. I have committed this to trunk. Revert HIVE-5700 - enforce single date format for partition column storage -- Key: HIVE-9445 URL: https://issues.apache.org/jira/browse/HIVE-9445 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0, 0.14.0, 0.13.1, 0.15.0, 0.14.1 Reporter: Brock Noland Assignee: Brock Noland Priority: Blocker Fix For: 0.15.0 Attachments: HIVE-9445.1.patch, HIVE-9445.1.patch HIVE-5700 has the following issues:
* HIVE-8730 - fails mysql upgrades
* Does not upgrade all metadata, e.g. {{PARTITIONS.PART_NAME}}. See comments in HIVE-5700.
* Completely corrupts postgres, see below.
With a postgres metastore on 0.12, I executed the following:
{noformat}
CREATE TABLE HIVE5700_DATE_PARTED (line string) PARTITIONED BY (ddate date);
CREATE TABLE HIVE5700_STRING_PARTED (line string) PARTITIONED BY (ddate string);
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='NOT_DATE');
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='20150121');
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='20150122');
ALTER TABLE HIVE5700_DATE_PARTED ADD PARTITION (ddate='2015-01-23');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='NOT_DATE');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='20150121');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='20150122');
ALTER TABLE HIVE5700_STRING_PARTED ADD PARTITION (ddate='2015-01-23');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='NOT_DATE');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='20150121');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='20150122');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_DATE_PARTED PARTITION (ddate='2015-01-23');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='NOT_DATE');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='20150121');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='20150122');
LOAD DATA LOCAL INPATH '/tmp/single-line-of-data' INTO TABLE HIVE5700_STRING_PARTED PARTITION (ddate='2015-01-23');
hive> show partitions HIVE5700_DATE_PARTED;
OK
ddate=20150121
ddate=20150122
ddate=2015-01-23
ddate=NOT_DATE
Time taken: 0.052 seconds, Fetched: 4 row(s)
hive> show partitions HIVE5700_STRING_PARTED;
OK
ddate=20150121
ddate=20150122
ddate=2015-01-23
ddate=NOT_DATE
Time taken: 0.051 seconds, Fetched: 4 row(s)
{noformat}
I then took a dump of the database named {{postgres-pre-upgrade.sql}} and the data in the dump looks good:
{noformat}
[root@hive5700-1-1 ~]# egrep -A9 '^COPY PARTITIONS|^COPY PARTITION_KEY_VALS' postgres-pre-upgrade.sql
COPY PARTITIONS (PART_ID, CREATE_TIME, LAST_ACCESS_TIME, PART_NAME, SD_ID, TBL_ID) FROM stdin;
3   1421943647  0  ddate=NOT_DATE    6   2
4   1421943647  0  ddate=20150121    7   2
5   1421943648  0  ddate=20150122    8   2
6   1421943664  0  ddate=NOT_DATE    9   3
7   1421943664  0  ddate=20150121    10  3
8   1421943665  0  ddate=20150122    11  3
9   1421943694  0  ddate=2015-01-23  12  2
10  1421943695  0  ddate=2015-01-23  13  3
\.
--
COPY PARTITION_KEY_VALS (PART_ID, PART_KEY_VAL, INTEGER_IDX) FROM stdin;
3   NOT_DATE    0
4   20150121    0
5   20150122    0
6   NOT_DATE    0
7   20150121    0
8   20150122    0
9   2015-01-23  0
10  2015-01-23  0
\.
{noformat}
I then upgraded to 0.13 and subsequently upgraded the MS with the following command: {{schematool -dbType postgres -upgradeSchema -verbose}}
The file {{postgres-post-upgrade.sql}} is the post-upgrade db dump. As you can see the data is completely corrupt.
{noformat}
[root@hive5700-1-1 ~]# egrep -A9 '^COPY PARTITIONS|^COPY PARTITION_KEY_VALS' postgres-post-upgrade.sql
COPY PARTITIONS (PART_ID, CREATE_TIME, LAST_ACCESS_TIME, PART_NAME, SD_ID, TBL_ID) FROM stdin;
3   1421943647  0  ddate=NOT_DATE  6   2
4   1421943647  0  ddate=20150121  7   2
5   1421943648  0  ddate=20150122  8   2
6   1421943664  0
[jira] [Commented] (HIVE-9450) [Parquet] Check all data types work for Parquet in Group By operator
[ https://issues.apache.org/jira/browse/HIVE-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291304#comment-14291304 ] Brock Noland commented on HIVE-9450: +1 [Parquet] Check all data types work for Parquet in Group By operator Key: HIVE-9450 URL: https://issues.apache.org/jira/browse/HIVE-9450 Project: Hive Issue Type: Sub-task Reporter: Dong Chen Assignee: Dong Chen Attachments: HIVE-9450.patch, HIVE-9450.patch Check all data types work for Parquet in Group By operator.
1. Add test cases for data types.
2. Fix the ClassCastException bug for CHAR/VARCHAR used in group by for Parquet.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9461) LLAP: Enable local mode tests on tez to facilitate llap testing
Gunther Hagleitner created HIVE-9461: Summary: LLAP: Enable local mode tests on tez to facilitate llap testing Key: HIVE-9461 URL: https://issues.apache.org/jira/browse/HIVE-9461 Project: Hive Issue Type: Sub-task Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner local mode tez tests will help the testing of multiple fragments running at the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9461) LLAP: Enable local mode tests on tez to facilitate llap testing
[ https://issues.apache.org/jira/browse/HIVE-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-9461: - Attachment: (was: HIVE-9460.1.patch) LLAP: Enable local mode tests on tez to facilitate llap testing --- Key: HIVE-9461 URL: https://issues.apache.org/jira/browse/HIVE-9461 Project: Hive Issue Type: Sub-task Affects Versions: llap Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-9461.1.patch local mode tez tests will help the testing of multiple fragments running at the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9461) LLAP: Enable local mode tests on tez to facilitate llap testing
[ https://issues.apache.org/jira/browse/HIVE-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-9461: - Attachment: HIVE-9461.1.patch LLAP: Enable local mode tests on tez to facilitate llap testing --- Key: HIVE-9461 URL: https://issues.apache.org/jira/browse/HIVE-9461 Project: Hive Issue Type: Sub-task Affects Versions: llap Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-9461.1.patch local mode tez tests will help the testing of multiple fragments running at the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9428) LocalSparkJobStatus may return failed job as successful [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HIVE-9428: - Attachment: (was: HIVE-9428.2-spark.patch) LocalSparkJobStatus may return failed job as successful [Spark Branch] -- Key: HIVE-9428 URL: https://issues.apache.org/jira/browse/HIVE-9428 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Rui Li Assignee: Rui Li Priority: Minor Attachments: HIVE-9428.1-spark.patch, HIVE-9428.2-spark.patch, HIVE-9428.3-spark.patch A Future being done doesn't necessarily mean the job was successful. We should rely on SparkJobInfo to get the job status whenever it's available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9449) Push YARN configuration to Spark while deploying Spark on YARN [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengxiang Li updated HIVE-9449: Attachment: HIVE-9449.2-spark.patch Push YARN configuration to Spark while deploying Spark on YARN [Spark Branch] Key: HIVE-9449 URL: https://issues.apache.org/jira/browse/HIVE-9449 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Chengxiang Li Assignee: Chengxiang Li Attachments: HIVE-9449.1-spark.patch, HIVE-9449.1-spark.patch, HIVE-9449.2-spark.patch We currently push only Spark configuration and RSC configuration to Spark when launching the Spark cluster; for Spark on YARN mode, Spark needs extra YARN configuration to launch the Spark cluster. Besides this, to support dynamic setting of RSC configuration/YARN configuration, we need to recreate the SparkSession when the RSC configuration/YARN configuration is updated, as these may influence the Spark cluster deployment as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9454) Test failures due to new Calcite version
[ https://issues.apache.org/jira/browse/HIVE-9454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291470#comment-14291470 ] Brock Noland commented on HIVE-9454: bq. I'm coming round to the idea that it is a bad idea for Hive to develop on a snapshot of Calcite. But what are the alternatives? In the past we've used SNAPSHOT of some deps on trunk (including myself). I am thinking we should agree that once we remove the SNAPSHOT dep of Calcite, we should not allow SNAPSHOT deps on trunk. If anyone wants to use a SNAPSHOT they would have to do so on a branch. In short: Use a branch. Test failures due to new Calcite version Key: HIVE-9454 URL: https://issues.apache.org/jira/browse/HIVE-9454 Project: Hive Issue Type: Bug Reporter: Brock Noland Attachments: HIVE-9454.1.patch A bunch of failures have started appearing in patches which seem unrelated. I am thinking we've picked up a new version of Calcite. E.g.: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2488/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_auto_join12/
{noformat}
Running: diff -a /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../itests/qtest/target/qfile-results/clientpositive/auto_join12.q.out /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../ql/src/test/results/clientpositive/auto_join12.q.out
32c32
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
35c35
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
39c39
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
54c54
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
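The "use a branch" policy amounts to trunk depending only on released artifacts. A minimal sketch of what a pinned, non-SNAPSHOT Calcite dependency would look like in a Maven POM (the version shown is the 1.0 release candidate discussed in this thread, used here purely as an example):

```xml
<dependency>
  <groupId>org.apache.calcite</groupId>
  <artifactId>calcite-core</artifactId>
  <!-- A released version; under this policy trunk would never
       reference an x.y.z-SNAPSHOT version of Calcite. -->
  <version>1.0.0-incubating</version>
</dependency>
```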
[jira] [Updated] (HIVE-9462) HIVE-8577 - breaks type evolution
[ https://issues.apache.org/jira/browse/HIVE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9462: --- Attachment: type_evolution.avro HIVE-8577 - breaks type evolution - Key: HIVE-9462 URL: https://issues.apache.org/jira/browse/HIVE-9462 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Affects Versions: 0.15.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9462.1.patch, type_evolution.avro If you write an Avro field out as {{int}} and then change its type to {{long}}, you will get an {{UnresolvedUnionException}} due to code introduced in HIVE-8577. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9462) HIVE-8577 - breaks type evolution
[ https://issues.apache.org/jira/browse/HIVE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9462: --- Status: Patch Available (was: Open) HIVE-8577 - breaks type evolution - Key: HIVE-9462 URL: https://issues.apache.org/jira/browse/HIVE-9462 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Affects Versions: 0.15.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9462.1.patch, type_evolution.avro If you write an Avro field out as {{int}} and then change its type to {{long}}, you will get an {{UnresolvedUnionException}} due to code introduced in HIVE-8577. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
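For context, this is the standard Avro schema-evolution case: Avro's schema-resolution rules allow an {{int}} writer field to be promoted to a {{long}} reader field. A minimal hypothetical pair of schemas (the record and field names are invented, not taken from the attached type_evolution.avro). Writer schema:

```json
{"type": "record", "name": "Rec", "fields": [
  {"name": "f", "type": "int"}
]}
```

Reader schema after the type change:

```json
{"type": "record", "name": "Rec", "fields": [
  {"name": "f", "type": "long"}
]}
```

Reading data written with the first schema through the second should succeed via Avro's int-to-long promotion; the code path from HIVE-8577 reportedly breaks this and raises {{UnresolvedUnionException}} instead.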
[jira] [Updated] (HIVE-9461) LLAP: Enable local mode tests on tez to facilitate llap testing
[ https://issues.apache.org/jira/browse/HIVE-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-9461: -- Affects Version/s: llap LLAP: Enable local mode tests on tez to facilitate llap testing --- Key: HIVE-9461 URL: https://issues.apache.org/jira/browse/HIVE-9461 Project: Hive Issue Type: Sub-task Affects Versions: llap Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner local mode tez tests will help the testing of multiple fragments running at the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9461) LLAP: Enable local mode tests on tez to facilitate llap testing
[ https://issues.apache.org/jira/browse/HIVE-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-9461: - Attachment: HIVE-9460.1.patch LLAP: Enable local mode tests on tez to facilitate llap testing --- Key: HIVE-9461 URL: https://issues.apache.org/jira/browse/HIVE-9461 Project: Hive Issue Type: Sub-task Affects Versions: llap Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-9460.1.patch local mode tez tests will help the testing of multiple fragments running at the same time. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9460) LLAP: Fix some static vars in the operator pipeline
[ https://issues.apache.org/jira/browse/HIVE-9460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HIVE-9460: -- Affects Version/s: llap LLAP: Fix some static vars in the operator pipeline --- Key: HIVE-9460 URL: https://issues.apache.org/jira/browse/HIVE-9460 Project: Hive Issue Type: Sub-task Affects Versions: llap Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-9460.1.patch There are a few static vars left in the operator pipeline. Can't have those with multi-threaded execution... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9454) Test failures due to new Calcite version
[ https://issues.apache.org/jira/browse/HIVE-9454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291459#comment-14291459 ] Julian Hyde commented on HIVE-9454: --- It's unfair to say I've been using hive as a test bed. We have a test suite for Calcite that we're quite happy with. It serves its purpose -- it keeps Calcite from regressing. You want to release Hive, so you need a release of Calcite. I am trying to do integration -- identify the incompatibilities between Hive and the new version of Calcite -- before we make that release. If that doesn't get done, you might well end up with a release of Calcite that is not suitable to make a release of Hive. I have been telegraphing for some time on the Calcite dev list that a Calcite release is imminent. I made a release candidate on Thursday and started a vote. Given that Hive uses Calcite, at least one Hive committer ought to be actively participating on the Calcite dev list, participating in that vote, and doing integration testing. The only integration testing that happened occurred because I pushed a new snapshot. That shit was going to hit the fan at some point between now and Hive 15. It's actually a good thing that it happened now. I'm coming round to the idea that it is a bad idea for Hive to develop on a snapshot of Calcite. But what are the alternatives? Given that it takes at least 6 days for an incubator project such as Calcite to make a release, development could very easily become stalled by a small bug or missing feature in Calcite. The ideal would be a Hive-specific snapshot of Calcite, controlled by Hive developers, but (a) Apache nexus doesn't seem to allow multiple versions of snapshots, (b) the Apache release process doesn't allow releases on snapshots, and (c) this would require proactive efforts by Hive committers to integrate with Calcite ahead of a Calcite release. Whatever we decide, it needs more proactive involvement from the Hive side. 
There is an urgent need for a decision on the Calcite 1.0 release vote. We have sufficient votes for a release, and I could close the vote in just over an hour, but I won't. There is a non-binding -1 from [~jpullokkaran] due to incompatibilities, but we haven't figured out whether the cause is on the Hive side or the Calcite side. I'd like to close the vote as soon as possible, but I need Hive developers to either log bugs or let the vote pass. "We haven't had time to do integration testing" (I'm paraphrasing a little) is not a valid reason for a -1. Test failures due to new Calcite version Key: HIVE-9454 URL: https://issues.apache.org/jira/browse/HIVE-9454 Project: Hive Issue Type: Bug Reporter: Brock Noland Attachments: HIVE-9454.1.patch A bunch of failures have started appearing in patches which seem unrelated. I am thinking we've picked up a new version of Calcite. E.g.: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2488/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_auto_join12/
{noformat}
Running: diff -a /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../itests/qtest/target/qfile-results/clientpositive/auto_join12.q.out /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../ql/src/test/results/clientpositive/auto_join12.q.out
32c32
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
35c35
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
39c39
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
54c54
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9361) Intermittent NPE in SessionHiveMetaStoreClient.alterTempTable
[ https://issues.apache.org/jira/browse/HIVE-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291460#comment-14291460 ] Eugene Koifman commented on HIVE-9361: -- The failures are not related. They appear in multiple other runs. Intermittent NPE in SessionHiveMetaStoreClient.alterTempTable - Key: HIVE-9361 URL: https://issues.apache.org/jira/browse/HIVE-9361 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.14.0 Reporter: Eugene Koifman Assignee: Eugene Koifman Attachments: HIVE-9361.patch It's happening at {noformat} MetaStoreUtils.updateUnpartitionedTableStatsFast(newtCopy, wh.getFileStatusesForSD(newtCopy.getSd()), false, true); {noformat} Other methods in this class obtain the Warehouse through getWh() rather than reading the wh field directly, which likely explains why the NPE is intermittent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
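The lazy-initialization pattern the comment alludes to can be shown in a few lines. This is a minimal illustrative sketch in plain Java, not Hive's actual SessionHiveMetaStoreClient code; the class and field names are invented. Reading the field directly before anything has triggered initialization yields null (hence an intermittent NPE, depending on which method ran first), while routing every access through the lazy getter always yields an instance.

```java
// Illustrative sketch only -- names are invented, not Hive's actual code.
// A lazily initialized field is null until the getter runs once; any code
// path that dereferences the field directly can hit an intermittent NPE.
public class LazyInitSketch {
    private Object wh; // stand-in for the lazily created Warehouse field

    // Lazy getter: guarantees a non-null instance on every call.
    private Object getWh() {
        if (wh == null) {
            wh = new Object();
        }
        return wh;
    }

    public boolean directFieldIsNull() {
        return wh == null; // direct access before initialization -> null
    }

    public boolean getterIsNonNull() {
        return getWh() != null; // lazy getter initializes on first use
    }
}
```

Whether the field is null thus depends on call order, which matches the "intermittent" behavior described above.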
[jira] [Updated] (HIVE-9462) HIVE-8577 - breaks type evolution
[ https://issues.apache.org/jira/browse/HIVE-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9462: --- Attachment: HIVE-9462.1.patch HIVE-8577 - breaks type evolution - Key: HIVE-9462 URL: https://issues.apache.org/jira/browse/HIVE-9462 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Affects Versions: 0.15.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-9462.1.patch If you write an Avro field out as {{int}} and then change its type to {{long}}, you will get an {{UnresolvedUnionException}} due to code introduced in HIVE-8577. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
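For context, Avro schema resolution treats int-to-long as a legal promotion: a value written as {{int}} should be readable by a reader expecting {{long}} via a widening conversion, which is what the regression breaks. A minimal plain-Java sketch of that promotion (the helper name is hypothetical, not Hive or Avro API):

```java
// Illustrative sketch of the promotion Avro schema resolution is expected
// to perform: a value written as int is readable as long (widening).
// readAsLong is a hypothetical helper, not part of any real API.
public class IntToLongPromotion {
    static long readAsLong(Object written) {
        if (written instanceof Integer) {
            return ((Integer) written).longValue(); // int -> long widening
        }
        if (written instanceof Long) {
            return (Long) written;
        }
        // analogous to the UnresolvedUnionException: no resolution found
        throw new IllegalStateException("unresolvable type: " + written.getClass());
    }
}
```

The bug report describes the failing case: instead of widening, the union-resolution code path throws.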
[jira] [Updated] (HIVE-9371) Execution error for Parquet table and GROUP BY involving CHAR data type
[ https://issues.apache.org/jira/browse/HIVE-9371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-9371: --- Resolution: Duplicate Status: Resolved (was: Patch Available) Execution error for Parquet table and GROUP BY involving CHAR data type --- Key: HIVE-9371 URL: https://issues.apache.org/jira/browse/HIVE-9371 Project: Hive Issue Type: Bug Components: File Formats, Query Processor Reporter: Matt McCline Assignee: Ferdinand Xu Priority: Critical Attachments: HIVE-9371.1.patch, HIVE-9371.patch, HIVE-9371.patch Query fails involving PARQUET table format, CHAR data type, and GROUP BY. Probably also fails for VARCHAR, too. {noformat} Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:814) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815) at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84) at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815) at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:95) at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157) at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:493) ... 
10 more Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.hive.serde2.io.HiveCharWritable at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableHiveCharObjectInspector.copyObject(WritableHiveCharObjectInspector.java:104) at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:305) at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150) at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142) at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119) at org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:827) at org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:739) at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:809) ... 
16 more {noformat} Here is a q file: {noformat} SET hive.vectorized.execution.enabled=false; drop table char_2; create table char_2 ( key char(10), value char(20) ) stored as parquet; insert overwrite table char_2 select * from src; select value, sum(cast(key as int)), count(*) numrows from src group by value order by value asc limit 5; explain select value, sum(cast(key as int)), count(*) numrows from char_2 group by value order by value asc limit 5; -- should match the query from src select value, sum(cast(key as int)), count(*) numrows from char_2 group by value order by value asc limit 5; select value, sum(cast(key as int)), count(*) numrows from src group by value order by value desc limit 5; explain select value, sum(cast(key as int)), count(*) numrows from char_2 group by value order by value desc limit 5; -- should match the query from src select value, sum(cast(key as int)), count(*) numrows from char_2 group by value order by value desc limit 5; drop table char_2; {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
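The failure mode in the stack trace above is an ordinary Java downcast mismatch: the object inspector assumes the value is a HiveCharWritable, but the Parquet read path hands it a Text. A self-contained illustration with stand-in types (StringBuilder plays the role of the Text-like value actually returned; String plays the role of the type the inspector assumed):

```java
// Stand-in types only: this reproduces the *shape* of the failure
// (Text cannot be cast to HiveCharWritable), not Hive's actual classes.
public class CastMismatchSketch {
    public static boolean failsLikeTheTrace() {
        Object value = new StringBuilder("abc"); // what the read path returned
        try {
            String expected = (String) value;    // what the caller assumed
            return false;                         // cast succeeded (it won't)
        } catch (ClassCastException e) {
            return true; // same failure mode as the stack trace above
        }
    }
}
```

The fix direction implied by the issue is to make the Parquet path produce (or convert to) the writable type the CHAR/VARCHAR object inspectors expect.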
[jira] [Created] (HIVE-9463) Table view don't have Authorization to select by other user
Liao, Xiaoge created HIVE-9463: -- Summary: Table view don't have Authorization to select by other user Key: HIVE-9463 URL: https://issues.apache.org/jira/browse/HIVE-9463 Project: Hive Issue Type: Bug Components: Authorization Affects Versions: 0.13.1 Reporter: Liao, Xiaoge I upgraded from 0.10.0 to 0.13.1 and hit the following problem: on 0.10.0, when user A creates a view, user B has the SELECT privilege to read its data; on 0.13.1, user B no longer has access. Commands: user A: hive> create view table_view as select * from xx; user B: hive> select * from table_view; Authorization failed: No privilege 'Select' found for inputs { database:default, table:table_view}. Use SHOW GRANT to get more details. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9463) Table view don't have Authorization to select by other user
[ https://issues.apache.org/jira/browse/HIVE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liao, Xiaoge updated HIVE-9463: --- Description: i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below: For 0.10.0, when user A create table view, user B have select privilege to read data. But for 0.13.1, user B can't have rights. Command: user A: hive create view table_view as select * from xx; user B: hive select * from table_view; Authorization failed:No privilege 'Select' found for inputs { database:default, table:table_view}.Use SHOW GRANT to get more details. when i grant select on the underlying table, the table view still don't have select privilege. was: i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below: For 0.10.0, when user A create table view, user B have select privilege to read data. But for 0.13.1, user B can't have rights. Command: user A: hive create view table_view as select * from xx; user B: hive select * from table_view; Authorization failed:No privilege 'Select' found for inputs { database:default, table:table_view}. Use SHOW GRANT to get more details. when i grant select on the underlying table, the table view still don't have select privilege. Table view don't have Authorization to select by other user --- Key: HIVE-9463 URL: https://issues.apache.org/jira/browse/HIVE-9463 Project: Hive Issue Type: Bug Components: Authorization Affects Versions: 0.13.1 Reporter: Liao, Xiaoge i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below: For 0.10.0, when user A create table view, user B have select privilege to read data. But for 0.13.1, user B can't have rights. Command: user A: hive create view table_view as select * from xx; user B: hive select * from table_view; Authorization failed:No privilege 'Select' found for inputs { database:default, table:table_view}.Use SHOW GRANT to get more details. when i grant select on the underlying table, the table view still don't have select privilege. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9463) Table view don't have Authorization to select by other user
[ https://issues.apache.org/jira/browse/HIVE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Liao, Xiaoge updated HIVE-9463: --- Description: i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below: For 0.10.0, when user A create table view, user B have select privilege to read data. But for 0.13.1, user B can't have rights. Command: user A: hive create view table_view as select * from xx; user B: hive select * from table_view; Authorization failed:No privilege 'Select' found for inputs { database:default, table:table_view}. Use SHOW GRANT to get more details. when i grant select on the underlying table, the table view still don't have select privilege. was: i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below: For 0.10.0, when user A create table view, user B have select privilege to read data. But for 0.13.1, user B can't have rights. Command: user A: hive create view table_view as select * from xx; user B: hive select * from table_view; Authorization failed:No privilege 'Select' found for inputs { database:default, table:table_view}. Use SHOW GRANT to get more details. Table view don't have Authorization to select by other user --- Key: HIVE-9463 URL: https://issues.apache.org/jira/browse/HIVE-9463 Project: Hive Issue Type: Bug Components: Authorization Affects Versions: 0.13.1 Reporter: Liao, Xiaoge i upgrade from 0.10.0 to 0.13.1, but we meet a problem as below: For 0.10.0, when user A create table view, user B have select privilege to read data. But for 0.13.1, user B can't have rights. Command: user A: hive create view table_view as select * from xx; user B: hive select * from table_view; Authorization failed:No privilege 'Select' found for inputs { database:default, table:table_view}. Use SHOW GRANT to get more details. when i grant select on the underlying table, the table view still don't have select privilege. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9428) LocalSparkJobStatus may return failed job as successful [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291510#comment-14291510 ] Hive QA commented on HIVE-9428: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12694474/HIVE-9428.3-spark.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7357 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3_map_skew org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_join_with_different_encryption_keys {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/682/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/682/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-682/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12694474 - PreCommit-HIVE-SPARK-Build LocalSparkJobStatus may return failed job as successful [Spark Branch] -- Key: HIVE-9428 URL: https://issues.apache.org/jira/browse/HIVE-9428 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Rui Li Assignee: Rui Li Priority: Minor Attachments: HIVE-9428.1-spark.patch, HIVE-9428.2-spark.patch, HIVE-9428.3-spark.patch Future is done doesn't necessarily mean the job is successful. We should rely on SparkJobInfo to get job status whenever it's available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
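The pitfall HIVE-9428 describes is easy to reproduce with plain java.util.concurrent: Future.isDone() returns true for failed tasks too, so completion alone says nothing about success. The outcome has to be inspected separately, via get() here, or via SparkJobInfo in Hive's case. A self-contained sketch:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: the Future of a failed task is still "done"; only get() (or a
// separate job-status object) reveals that the task actually failed.
public class FutureDoneSketch {
    public static boolean doneButFailed() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Void> failing = () -> { throw new RuntimeException("job failed"); };
            Future<Void> f = pool.submit(failing);
            boolean failed = false;
            try {
                f.get(); // blocks until completion; surfaces the task's exception
            } catch (ExecutionException e) {
                failed = true; // the task threw
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return f.isDone() && failed; // isDone() is true despite the failure
        } finally {
            pool.shutdown();
        }
    }
}
```

This is why a status check that treats "future is done" as "job succeeded" can report a failed job as successful.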
[jira] [Commented] (HIVE-9302) Beeline add jar local to client
[ https://issues.apache.org/jira/browse/HIVE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291449#comment-14291449 ] Hive QA commented on HIVE-9302: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12694461/HIVE-9302.1.patch {color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 7377 tests executed *Failed tests:* {noformat} org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0] org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[1] org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2519/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2519/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2519/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 4 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12694461 - PreCommit-HIVE-TRUNK-Build Beeline add jar local to client --- Key: HIVE-9302 URL: https://issues.apache.org/jira/browse/HIVE-9302 Project: Hive Issue Type: New Feature Reporter: Brock Noland Assignee: Ferdinand Xu Attachments: DummyDriver-1.0-SNAPSHOT.jar, HIVE-9302.1.patch, HIVE-9302.patch, mysql-connector-java-bin.jar, postgresql-9.3.jdbc3.jar At present if a beeline user uses {{add jar}} the path they give is actually on the HS2 server. It'd be great to allow beeline users to add local jars as well. 
It might be useful to do this in the jdbc driver itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9460) LLAP: Fix some static vars in the operator pipeline
Gunther Hagleitner created HIVE-9460: Summary: LLAP: Fix some static vars in the operator pipeline Key: HIVE-9460 URL: https://issues.apache.org/jira/browse/HIVE-9460 Project: Hive Issue Type: Sub-task Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner There are a few static vars left in the operator pipeline. Can't have those with multi-threaded execution... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
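Why static vars break under multi-threaded execution can be shown in a few lines (illustrative names only, not actual operator code): a static field is shared by every operator instance in the JVM, so concurrent pipelines observe each other's updates, while instance fields stay isolated per pipeline.

```java
// Illustrative sketch: shared static state vs. per-instance state.
public class StaticVarSketch {
    static int sharedCounter = 0; // one copy per JVM -- all pipelines see it
    int instanceCounter = 0;      // one copy per operator instance

    void process() {
        sharedCounter++;
        instanceCounter++;
    }
}
```

With two instances each calling process() once, sharedCounter ends at 2 while each instanceCounter is 1: the static field has leaked work across pipelines, which is exactly what must not happen when LLAP runs multiple pipelines in one process.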
[jira] [Updated] (HIVE-9460) LLAP: Fix some static vars in the operator pipeline
[ https://issues.apache.org/jira/browse/HIVE-9460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-9460: - Attachment: HIVE-9460.1.patch LLAP: Fix some static vars in the operator pipeline --- Key: HIVE-9460 URL: https://issues.apache.org/jira/browse/HIVE-9460 Project: Hive Issue Type: Sub-task Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-9460.1.patch There are a few static vars left in the operator pipeline. Can't have those with multi-threaded execution... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 30208: HIVE-9449 Push YARN configuration to Spark while deploying Spark on YARN [Spark Branch]
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/30208/ --- (Updated Jan. 26, 2015, 5:06 a.m.) Review request for hive and Xuefu Zhang. Changes --- Fix unit test failure. Bugs: HIVE-9449 https://issues.apache.org/jira/browse/HIVE-9449 Repository: hive-git Description --- We currently only push Spark configuration and RSC configuration to Spark when launching the Spark cluster; in Spark-on-YARN mode, Spark needs extra YARN configuration to launch the cluster. Beyond this, to support dynamic configuration changes, we need to recreate the SparkSession whenever the RSC or YARN configuration is updated, since those settings may influence the Spark cluster deployment. Diffs (updated) - common/src/java/org/apache/hadoop/hive/conf/HiveConf.java d4d98d7 ql/src/java/org/apache/hadoop/hive/ql/exec/spark/HiveSparkClientFactory.java 9dc6c47 Diff: https://reviews.apache.org/r/30208/diff/ Testing --- Thanks, chengxiang li
[jira] [Commented] (HIVE-9454) Test failures due to new Calcite version
[ https://issues.apache.org/jira/browse/HIVE-9454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291473#comment-14291473 ] Brock Noland commented on HIVE-9454: bq. need Hive developers to either log bugs or let the vote pass [~jpullokkaran] [~jcamachorodriguez] - is this something you guys could tackle? Test failures due to new Calcite version Key: HIVE-9454 URL: https://issues.apache.org/jira/browse/HIVE-9454 Project: Hive Issue Type: Bug Reporter: Brock Noland Attachments: HIVE-9454.1.patch A bunch of failures have started appearing in patches which seem unrelated. I suspect we've picked up a new version of Calcite. E.g.: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2488/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_auto_join12/ {noformat} Running: diff -a /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../itests/qtest/target/qfile-results/clientpositive/auto_join12.q.out /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../ql/src/test/results/clientpositive/auto_join12.q.out 32c32 $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src --- $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src 35c35 $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src --- $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src 39c39 $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src --- $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src 54c54 $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src --- $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9211) Research on build mini HoS cluster on YARN for unit test[Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chengxiang Li updated HIVE-9211: Attachment: HIVE-9211.1-spark.patch Research on build mini HoS cluster on YARN for unit test[Spark Branch] -- Key: HIVE-9211 URL: https://issues.apache.org/jira/browse/HIVE-9211 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Chengxiang Li Assignee: Chengxiang Li Labels: Spark-M5 Attachments: HIVE-9211.1-spark.patch HoS on YARN is a common use case in production environments, so we should enable unit tests for it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 30264: HIVE-9211 enable unit test for mini Spark on YARN cluster [Spark Branch]
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/30264/ --- (Updated Jan. 26, 2015, 6:37 a.m.) Review request for hive, Szehon Ho and Xuefu Zhang. Bugs: HIVE-9211 https://issues.apache.org/jira/browse/HIVE-9211 Repository: hive-git Description --- MiniSparkOnYarnCluster is enabled for unit test, Spark is deployed on miniYarnCluster on yarn-client mode, all qfiles in minimr.query.files are enabled in this unit test except 3 qfile: bucket_num_reducers.q, bucket_num_reducers2.q, udf_using.q, which is not supported in HoS. Diffs - data/conf/spark/hive-site.xml 016f568 data/conf/spark/standalone/hive-site.xml PRE-CREATION data/conf/spark/yarn-client/hive-site.xml PRE-CREATION itests/pom.xml e1e88f6 itests/qtest-spark/pom.xml d12fad5 itests/src/test/resources/testconfiguration.properties f583aaf itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 095b9bd ql/src/java/org/apache/hadoop/hive/ql/exec/spark/RemoteHiveSparkClient.java 41a2ab7 ql/src/test/results/clientpositive/miniSparkOnYarn/auto_sortmerge_join_16.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucket4.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucket5.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucket6.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucketizedhiveinputformat.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucketmapjoin6.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucketmapjoin7.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/constprog_partitioner.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/disable_merge_for_bucketing.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/empty_dir_in_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/external_table_with_space_in_location_path.q.out PRE-CREATION 
ql/src/test/results/clientpositive/miniSparkOnYarn/file_with_header_footer.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/groupby1.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/groupby2.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/import_exported_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/index_bitmap3.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/index_bitmap_auto.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_bucketed_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_dyn_part.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_map_operators.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_merge.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_num_buckets.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_reducers_power_two.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/input16_cc.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/join1.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/leftsemijoin_mr.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/list_bucket_dml_10.q.java1.7.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/load_fs2.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/load_hdfs_file_with_space_in_the_name.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/parallel_orderby.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/ql_rewrite_gbtoidx.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/ql_rewrite_gbtoidx_cbo_1.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/ql_rewrite_gbtoidx_cbo_2.q.out 
PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/quotedid_smb.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/reduce_deduplicate.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/remote_script.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/root_dir_external_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/schemeAuthority.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/schemeAuthority2.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/scriptfile1.q.out PRE-CREATION
Review Request 30264: HIVE-9211 enable unit test for mini Spark on YARN cluster [Spark Branch]
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/30264/ --- Review request for hive, Szehon Ho and Xuefu Zhang. Bugs: HIVE-9211 https://issues.apache.org/jira/browse/HIVE-9211 Repository: hive-git Description --- MiniSparkOnYarnCluster is enabled for unit test, Spark is deployed on miniYarnCluster on yarn-client mode, all qfiles in minimr.query.files are enabled in this unit test except 3 qfile: bucket_num_reducers.q, bucket_num_reducers2.q, udf_using.q, which is not supported in HoS. Diffs - data/conf/spark/hive-site.xml 016f568 data/conf/spark/standalone/hive-site.xml PRE-CREATION data/conf/spark/yarn-client/hive-site.xml PRE-CREATION itests/pom.xml e1e88f6 itests/qtest-spark/pom.xml d12fad5 itests/src/test/resources/testconfiguration.properties f583aaf itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java 095b9bd ql/src/java/org/apache/hadoop/hive/ql/exec/spark/RemoteHiveSparkClient.java 41a2ab7 ql/src/test/results/clientpositive/miniSparkOnYarn/auto_sortmerge_join_16.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucket4.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucket5.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucket6.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucketizedhiveinputformat.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucketmapjoin6.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/bucketmapjoin7.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/constprog_partitioner.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/disable_merge_for_bucketing.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/empty_dir_in_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/external_table_with_space_in_location_path.q.out PRE-CREATION 
ql/src/test/results/clientpositive/miniSparkOnYarn/file_with_header_footer.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/groupby1.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/groupby2.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/import_exported_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/index_bitmap3.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/index_bitmap_auto.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_bucketed_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_dyn_part.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_map_operators.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_merge.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_num_buckets.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/infer_bucket_sort_reducers_power_two.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/input16_cc.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/join1.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/leftsemijoin_mr.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/list_bucket_dml_10.q.java1.7.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/load_fs2.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/load_hdfs_file_with_space_in_the_name.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/parallel_orderby.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/ql_rewrite_gbtoidx.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/ql_rewrite_gbtoidx_cbo_1.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/ql_rewrite_gbtoidx_cbo_2.q.out 
PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/quotedid_smb.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/reduce_deduplicate.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/remote_script.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/root_dir_external_table.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/schemeAuthority.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/schemeAuthority2.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/scriptfile1.q.out PRE-CREATION ql/src/test/results/clientpositive/miniSparkOnYarn/smb_mapjoin_8.q.out PRE-CREATION
[jira] [Commented] (HIVE-3280) Make HiveMetaStoreClient a public API
[ https://issues.apache.org/jira/browse/HIVE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291515#comment-14291515 ] Thejas M Nair commented on HIVE-3280: - The failures are unrelated and are seen in some other builds as well. Make HiveMetaStoreClient a public API - Key: HIVE-3280 URL: https://issues.apache.org/jira/browse/HIVE-3280 Project: Hive Issue Type: Improvement Components: Metastore Reporter: Carl Steinbach Assignee: Thejas M Nair Labels: api-addition Attachments: HIVE-3280.1.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-6679) HiveServer2 should support a configurable server-side socket timeout and keepalive for various transport types where applicable
[ https://issues.apache.org/jira/browse/HIVE-6679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291518#comment-14291518 ] Vaibhav Gumashta commented on HIVE-6679: [~leftylev] Yes, as of now. I'll add a patch for 15 shortly. HiveServer2 should support a configurable server-side socket timeout and keepalive for various transport types where applicable -- Key: HIVE-6679 URL: https://issues.apache.org/jira/browse/HIVE-6679 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0, 0.14.0 Reporter: Prasad Mujumdar Assignee: Navis Labels: TODOC14, TODOC15 Fix For: 0.15.0, 0.14.1 Attachments: HIVE-6679.1.patch.txt, HIVE-6679.2.patch.txt, HIVE-6679.3.patch, HIVE-6679.4.patch, HIVE-6679.5.patch HiveServer2 should support a configurable server-side socket read timeout and TCP keep-alive option. The metastore server already supports this (and so does the old Hive server). We now have multiple client connectivity options like Kerberos, Delegation Token (Digest-MD5), Plain SASL, Plain SASL with SSL and raw sockets. The configuration should be applicable to all types (if possible). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9448) Merge spark to trunk 1/23/15
[ https://issues.apache.org/jira/browse/HIVE-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-9448: Attachment: HIVE-9448.3.patch Regenerate the two golden files Merge spark to trunk 1/23/15 Key: HIVE-9448 URL: https://issues.apache.org/jira/browse/HIVE-9448 Project: Hive Issue Type: Bug Components: Spark Affects Versions: 0.15.0 Reporter: Szehon Ho Assignee: Szehon Ho Attachments: HIVE-9448.2.patch, HIVE-9448.3.patch, HIVE-9448.patch Merging latest spark changes to trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9462) HIVE-8577 - breaks type evolution
Brock Noland created HIVE-9462: -- Summary: HIVE-8577 - breaks type evolution Key: HIVE-9462 URL: https://issues.apache.org/jira/browse/HIVE-9462 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Affects Versions: 0.15.0 Reporter: Brock Noland Assignee: Brock Noland If you write an Avro field out as {{int}} and then change its type to {{long}}, you will get an {{UnresolvedUnionException}} due to code introduced in HIVE-8577. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9428) LocalSparkJobStatus may return failed job as successful [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HIVE-9428: - Attachment: HIVE-9428.2-spark.patch Log before eating the exception LocalSparkJobStatus may return failed job as successful [Spark Branch] -- Key: HIVE-9428 URL: https://issues.apache.org/jira/browse/HIVE-9428 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Rui Li Assignee: Rui Li Priority: Minor Attachments: HIVE-9428.1-spark.patch, HIVE-9428.2-spark.patch, HIVE-9428.2-spark.patch A Future being done doesn't necessarily mean the job is successful. We should rely on SparkJobInfo to get the job status whenever it's available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
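The failure mode described here (a completed Future that nevertheless represents a failed job) is easy to reproduce outside Hive. A minimal sketch with a plain thread pool and a stand-in job that fails:

```python
from concurrent.futures import ThreadPoolExecutor

def failing_job():
    # stand-in for a Spark job whose stages fail
    raise RuntimeError("stage failed")

with ThreadPoolExecutor(max_workers=1) as ex:
    fut = ex.submit(failing_job)

# The future is "done", yet the job was anything but successful.
print(fut.done(), type(fut.exception()).__name__)
```

This is why the status check must consult the job's own result (here, `exception()`; in Hive, `SparkJobInfo`) rather than mere completion.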
[jira] [Updated] (HIVE-9428) LocalSparkJobStatus may return failed job as successful [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Li updated HIVE-9428: - Attachment: HIVE-9428.3-spark.patch LocalSparkJobStatus may return failed job as successful [Spark Branch] -- Key: HIVE-9428 URL: https://issues.apache.org/jira/browse/HIVE-9428 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Rui Li Assignee: Rui Li Priority: Minor Attachments: HIVE-9428.1-spark.patch, HIVE-9428.2-spark.patch, HIVE-9428.3-spark.patch A Future being done doesn't necessarily mean the job is successful. We should rely on SparkJobInfo to get the job status whenever it's available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9450) [Parquet] Check all data types work for Parquet in Group By operator
[ https://issues.apache.org/jira/browse/HIVE-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291064#comment-14291064 ] Hive QA commented on HIVE-9450: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12694419/HIVE-9450.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 7366 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_histogram_numeric {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2516/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2516/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2516/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12694419 - PreCommit-HIVE-TRUNK-Build [Parquet] Check all data types work for Parquet in Group By operator Key: HIVE-9450 URL: https://issues.apache.org/jira/browse/HIVE-9450 Project: Hive Issue Type: Sub-task Reporter: Dong Chen Assignee: Dong Chen Attachments: HIVE-9450.patch, HIVE-9450.patch Check all data types work for Parquet in Group By operator. 1. Add test cases for data types. 2. Fix the ClassCastException bug for CHAR/VARCHAR used in group by for Parquet. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9431) CBO (Calcite Return Path): Removing AST from ParseContext
[ https://issues.apache.org/jira/browse/HIVE-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291084#comment-14291084 ] Jesus Camacho Rodriguez commented on HIVE-9431: --- [~jpullokkaran], test failures are unrelated; this patch is ready to go in. Thanks CBO (Calcite Return Path): Removing AST from ParseContext - Key: HIVE-9431 URL: https://issues.apache.org/jira/browse/HIVE-9431 Project: Hive Issue Type: Sub-task Components: CBO Reporter: Jesus Camacho Rodriguez Assignee: Jesus Camacho Rodriguez Fix For: 0.15.0 Attachments: HIVE-9431.01.patch, HIVE-9431.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9444) CBO (Calcite Return Path): Rewrite GlobalLimitOptimizer
[ https://issues.apache.org/jira/browse/HIVE-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291087#comment-14291087 ] Jesus Camacho Rodriguez commented on HIVE-9444: --- [~jpullokkaran], test failures are unrelated; they pass on my local machine. The patch is ready to be reviewed. CBO (Calcite Return Path): Rewrite GlobalLimitOptimizer --- Key: HIVE-9444 URL: https://issues.apache.org/jira/browse/HIVE-9444 Project: Hive Issue Type: Sub-task Components: CBO Reporter: Jesus Camacho Rodriguez Assignee: Jesus Camacho Rodriguez Fix For: 0.15.0 Attachments: HIVE-9444.patch Currently, GlobalLimitOptimization relies heavily on the information contained in QBParseInfo. The goal is to extract that information from the operator tree so we do not need to rely on QBParseInfo. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Review Request 30254: HIVE-9444
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/30254/ --- Review request for hive and John Pullokkaran. Bugs: HIVE-9444 https://issues.apache.org/jira/browse/HIVE-9444 Repository: hive-git Description --- HIVE-9444 Diffs - ql/src/java/org/apache/hadoop/hive/ql/optimizer/GlobalLimitOptimizer.java c9848dacd1a02db321583c2b91eb6d7317c295ff Diff: https://reviews.apache.org/r/30254/diff/ Testing --- Existing tests. Thanks, Jesús Camacho Rodríguez
[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
[ https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-9327: -- Attachment: HIVE-9327.07.patch Regenerating additional golden files. CBO (Calcite Return Path): Removing Row Resolvers from ParseContext --- Key: HIVE-9327 URL: https://issues.apache.org/jira/browse/HIVE-9327 Project: Hive Issue Type: Sub-task Components: CBO Reporter: Jesus Camacho Rodriguez Assignee: Jesus Camacho Rodriguez Fix For: 0.15.0 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, HIVE-9327.03.patch, HIVE-9327.04.patch, HIVE-9327.05.patch, HIVE-9327.06.patch, HIVE-9327.07.patch, HIVE-9327.patch CLEAR LIBRARY CACHE ParseContext includes a map of Operator to RowResolver (OpParseContext). It would be ideal to remove this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 28797: Support Union Distinct
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/28797/ --- (Updated Jan. 25, 2015, 9:39 p.m.) Review request for hive and John Pullokkaran. Changes --- rebase the patch Repository: hive-git Description --- Current version (Hive 0.14) does not support union (or union distinct). It only supports union all. In this patch, we try to add this new feature by rewriting union distinct to union all followed by group by. Diffs (updated) - itests/src/test/resources/testconfiguration.properties 860604c ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java 95ad9e0 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g 9c7603c ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g c960a6b ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 4364f28 ql/src/test/queries/clientnegative/unionClusterBy.q PRE-CREATION ql/src/test/queries/clientnegative/unionDistributeBy.q PRE-CREATION ql/src/test/queries/clientnegative/unionLimit.q PRE-CREATION ql/src/test/queries/clientnegative/unionOrderBy.q PRE-CREATION ql/src/test/queries/clientnegative/unionSortBy.q PRE-CREATION ql/src/test/queries/clientpositive/cbo_union.q e9508c5 ql/src/test/queries/clientpositive/explode_null.q 76e4535 ql/src/test/queries/clientpositive/input25.q e48368f ql/src/test/queries/clientpositive/input26.q 642a7db ql/src/test/queries/clientpositive/load_dyn_part14.q c34c3bf ql/src/test/queries/clientpositive/metadataOnlyOptimizer.q a26ef1a ql/src/test/queries/clientpositive/script_env_var1.q 381c5dc ql/src/test/queries/clientpositive/script_env_var2.q 5f10812 ql/src/test/queries/clientpositive/union3.q d402cb0 ql/src/test/queries/clientpositive/unionDistinct_1.q PRE-CREATION ql/src/test/queries/clientpositive/unionDistinct_2.q PRE-CREATION ql/src/test/queries/clientpositive/union_null.q 64e1672 ql/src/test/queries/clientpositive/union_remove_25.q c6c09e1 ql/src/test/queries/clientpositive/union_top_level.q 946473a 
ql/src/test/queries/clientpositive/vector_multi_insert.q 77404e9 ql/src/test/results/clientnegative/unionClusterBy.q.out PRE-CREATION ql/src/test/results/clientnegative/unionDistributeBy.q.out PRE-CREATION ql/src/test/results/clientnegative/unionLimit.q.out PRE-CREATION ql/src/test/results/clientnegative/unionOrderBy.q.out PRE-CREATION ql/src/test/results/clientnegative/unionSortBy.q.out PRE-CREATION ql/src/test/results/clientpositive/ba_table_union.q.out 706a537 ql/src/test/results/clientpositive/cbo_union.q.out 1fd88ec ql/src/test/results/clientpositive/char_union1.q.out bdc4a1d ql/src/test/results/clientpositive/explain_logical.q.out 2e73a89 ql/src/test/results/clientpositive/explode_null.q.out db71c69 ql/src/test/results/clientpositive/groupby_sort_1_23.q.out dd450cb ql/src/test/results/clientpositive/groupby_sort_skew_1_23.q.out 2f08999 ql/src/test/results/clientpositive/input25.q.out 141a576 ql/src/test/results/clientpositive/input26.q.out 66d3bd2 ql/src/test/results/clientpositive/input_part7.q.out 6094f9c ql/src/test/results/clientpositive/join34.q.out a20e49f ql/src/test/results/clientpositive/join35.q.out 937539c ql/src/test/results/clientpositive/load_dyn_part14.q.out a9dde4d ql/src/test/results/clientpositive/merge4.q.out 121b724 ql/src/test/results/clientpositive/metadataOnlyOptimizer.q.out 1fcbc0a ql/src/test/results/clientpositive/optimize_nullscan.q.out 4eb498e ql/src/test/results/clientpositive/script_env_var1.q.out 8e1075a ql/src/test/results/clientpositive/script_env_var2.q.out 89f3606 ql/src/test/results/clientpositive/spark/groupby_sort_1_23.q.out 569501f ql/src/test/results/clientpositive/spark/groupby_sort_skew_1_23.q.out 6e66697 ql/src/test/results/clientpositive/spark/join34.q.out c337093 ql/src/test/results/clientpositive/spark/join35.q.out 2b217c1 ql/src/test/results/clientpositive/spark/load_dyn_part14.q.out 1f9985f ql/src/test/results/clientpositive/spark/optimize_nullscan.q.out 3a8efcf 
ql/src/test/results/clientpositive/spark/script_env_var1.q.out 8e1075a ql/src/test/results/clientpositive/spark/script_env_var2.q.out 89f3606 ql/src/test/results/clientpositive/spark/union3.q.out 1e79c34 ql/src/test/results/clientpositive/spark/union_null.q.out 4574a2e ql/src/test/results/clientpositive/spark/union_ppr.q.out 3e1a4b8 ql/src/test/results/clientpositive/spark/union_remove_25.q.out d36a246 ql/src/test/results/clientpositive/tez/cbo_union.q.out 1fd88ec ql/src/test/results/clientpositive/tez/optimize_nullscan.q.out da456c7 ql/src/test/results/clientpositive/tez/script_env_var1.q.out 8e1075a ql/src/test/results/clientpositive/tez/script_env_var2.q.out 89f3606
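The rewrite this review implements — UNION DISTINCT expressed as UNION ALL followed by GROUP BY over all output columns — can be sketched on plain lists (single-column toy data, not Hive code):

```python
# Two hypothetical input relations (single-column for brevity).
a = [1, 2, 3]
b = [2, 3, 4]

union_all = a + b                        # UNION ALL keeps duplicates
union_distinct = sorted(set(union_all))  # GROUP BY over every column dedups

print(union_all)
print(union_distinct)
```

Grouping by all columns is exactly a duplicate-elimination step, which is why the rewrite preserves UNION DISTINCT semantics.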
[jira] [Resolved] (HIVE-9000) LAST_VALUE Window function returns wrong results
[ https://issues.apache.org/jira/browse/HIVE-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan resolved HIVE-9000. Resolution: Fixed Fix Version/s: (was: 0.14.1) As Navis pointed out, the result from Hive is indeed correct. I further verified on Postgres. Mark, you can use the alternative query suggested by Navis if you want your resultset to look like that. LAST_VALUE Window function returns wrong results Key: HIVE-9000 URL: https://issues.apache.org/jira/browse/HIVE-9000 Project: Hive Issue Type: Bug Components: PTF-Windowing Affects Versions: 0.13.1 Reporter: Mark Grover Priority: Critical LAST_VALUE Windowing function has been returning bad results, as far as I can tell from day 1. And, it seems like the tests are also asserting that LAST_VALUE gives the wrong result. Here's the test output: https://github.com/apache/hive/blob/branch-0.14/ql/src/test/results/clientpositive/windowing_navfn.q.out#L587 The query is: {code} select t, s, i, last_value(i) over (partition by t order by s) from over10k where (s = 'oscar allen' or s = 'oscar carson') and t = 10 {code} The result is: {code} t s i last_value(i) --- 10 oscar allen 65662 65662 10 oscar carson 65549 65549 {code} {{LAST_VALUE( i )}} should have returned 65549 in both records; instead it simply ends up returning i. Another way you can make sure LAST_VALUE is bad is to verify its result against LEAD(i,1) over (partition by t order by s). LAST_VALUE being last value should always be more (in terms of the specified 'order by s') than the lead by 1. While this doesn't directly apply to the above query, if the result set had more rows, you would clearly see records where lead is higher than last_value, which is semantically incorrect. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
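The standard-SQL reasoning behind this resolution: with an ORDER BY but no explicit frame, the window frame runs from the start of the partition up to the current row (and its peers), so last_value() returns the current row's own value. What the reporter expected corresponds to a frame spanning the whole partition. A toy sketch on the reported data (not Hive code):

```python
# The partition from the JIRA example: (s, i) pairs, already ordered by s.
rows = [("oscar allen", 65662), ("oscar carson", 65549)]

# Default frame with ORDER BY: start of partition up to the current row,
# so last_value() is just the current row's own value.
default_frame = [rows[k][1] for k in range(len(rows))]

# Frame spanning the whole partition: what the reporter expected.
whole_partition = [rows[-1][1] for _ in rows]

print(default_frame)
print(whole_partition)
```

Adding an explicit `ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING` frame is the usual way to get the whole-partition behavior.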
[jira] [Comment Edited] (HIVE-7049) Unable to deserialize AVRO data when file schema and record schema are different and nullable
[ https://issues.apache.org/jira/browse/HIVE-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14251986#comment-14251986 ] Brock Noland edited comment on HIVE-7049 at 1/25/15 9:57 PM: - Seems like we can get away with the following patch (confirm the fileSchema AKA writer's schema is actually a union type before trying to find the type that the reader schema expects). If not, just use the schema as is (it should be promoted to a union by Avro). This worked for me in local testing. {noformat} diff --git a/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java b/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java index ce933ff..032761c 100644 --- a/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java +++ b/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java @@ -265,9 +265,12 @@ private Object deserializeNullableUnion(Object datum, Schema fileSchema, Schema if(schema.getType().equals(Schema.Type.NULL)) { return null; } +Schema writerSchema = fileSchema; +if (writerSchema != null && writerSchema.getType().equals(Schema.Type.UNION)) { + writerSchema = writerSchema.getTypes().get(tag); +} -return worker(datum, fileSchema == null ? null : fileSchema.getTypes().get(tag), schema, -SchemaToTypeInfo.generateTypeInfo(schema)); +return worker(datum, writerSchema, schema, SchemaToTypeInfo.generateTypeInfo(schema)); } {noformat} was (Author: jonathan.bender): Seems like we can get away with the following patch (confirm the fileSchema AKA writer's schema is actually a union type before trying to find the type that the reader schema expects). If not, just use the schema as is (it should be promoted to a union by Avro). This worked for me in local testing. 
```diff --git a/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java b/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java index ce933ff..032761c 100644 --- a/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java +++ b/src/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java @@ -265,9 +265,12 @@ private Object deserializeNullableUnion(Object datum, Schema fileSchema, Schema if(schema.getType().equals(Schema.Type.NULL)) { return null; } +Schema writerSchema = fileSchema; +if (writerSchema != null && writerSchema.getType().equals(Schema.Type.UNION)) { + writerSchema = writerSchema.getTypes().get(tag); +} -return worker(datum, fileSchema == null ? null : fileSchema.getTypes().get(tag), schema, -SchemaToTypeInfo.generateTypeInfo(schema)); +return worker(datum, writerSchema, schema, SchemaToTypeInfo.generateTypeInfo(schema)); } ``` Unable to deserialize AVRO data when file schema and record schema are different and nullable - Key: HIVE-7049 URL: https://issues.apache.org/jira/browse/HIVE-7049 Project: Hive Issue Type: Bug Reporter: Mohammad Kamrul Islam Assignee: Mohammad Kamrul Islam Attachments: HIVE-7049.1.patch It mainly happens when 1) the file schema and record schema are not the same, and 2) the record schema is nullable but the file schema is not. The relevant code is in class AvroDeserializer: {noformat} if(AvroSerdeUtils.isNullableType(recordSchema)) { return deserializeNullableUnion(datum, fileSchema, recordSchema, columnType); } {noformat} In the above code snippet, recordSchema is checked for nullability, but the file schema is not checked. I tested with these values: {noformat} recordSchema = [null,string] fileSchema = string {noformat} And I got the following exception (line numbers might not be the same due to my debugged code version). 
{noformat} org.apache.avro.AvroRuntimeException: Not a union: string at org.apache.avro.Schema.getTypes(Schema.java:272) at org.apache.hadoop.hive.serde2.avro.AvroDeserializer.deserializeNullableUnion(AvroDeserializer.java:275) at org.apache.hadoop.hive.serde2.avro.AvroDeserializer.worker(AvroDeserializer.java:205) at org.apache.hadoop.hive.serde2.avro.AvroDeserializer.workerBase(AvroDeserializer.java:188) at org.apache.hadoop.hive.serde2.avro.AvroDeserializer.deserialize(AvroDeserializer.java:174) at org.apache.hadoop.hive.serde2.avro.TestAvroDeserializer.verifyNullableType(TestAvroDeserializer.java:487) at org.apache.hadoop.hive.serde2.avro.TestAvroDeserializer.canDeserializeNullableTypes(TestAvroDeserializer.java:407) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
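The guard added by the patch can be sketched in isolation. The dict-based schema objects below are hypothetical stand-ins for Avro `Schema` instances, not the real API; the point is simply that the writer's schema is only indexed as a union when it actually is one:

```python
# Hypothetical stand-ins for Avro Schema objects: plain dicts with a
# "type" key and, for unions, a "branches" list.
def resolve_writer_schema(writer_schema, tag):
    # Only index into the union's branches when the writer's schema
    # really is a union; otherwise use it as-is (the HIVE-7049 fix).
    if writer_schema is not None and writer_schema.get("type") == "union":
        return writer_schema["branches"][tag]
    return writer_schema

nullable = {"type": "union", "branches": [{"type": "null"}, {"type": "string"}]}
plain = {"type": "string"}

print(resolve_writer_schema(nullable, 1))  # picks the non-null branch
print(resolve_writer_schema(plain, 1))     # no "Not a union" failure here
print(resolve_writer_schema(None, 0))
```

The pre-patch code corresponds to always taking the union branch, which is what raised `AvroRuntimeException: Not a union: string` when the writer's schema was a plain type.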
[jira] [Updated] (HIVE-9000) LAST_VALUE Window function returns wrong results
[ https://issues.apache.org/jira/browse/HIVE-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-9000: --- Affects Version/s: (was: 0.13.1) LAST_VALUE Window function returns wrong results Key: HIVE-9000 URL: https://issues.apache.org/jira/browse/HIVE-9000 Project: Hive Issue Type: Bug Components: PTF-Windowing Reporter: Mark Grover Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-9000) LAST_VALUE Window function returns wrong results
[ https://issues.apache.org/jira/browse/HIVE-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan resolved HIVE-9000. Resolution: Invalid LAST_VALUE Window function returns wrong results Key: HIVE-9000 URL: https://issues.apache.org/jira/browse/HIVE-9000 Project: Hive Issue Type: Bug Components: PTF-Windowing Reporter: Mark Grover Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8569) error result when hive meet window function
[ https://issues.apache.org/jira/browse/HIVE-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-8569: --- Affects Version/s: (was: 0.13.0) (was: 0.12.0) error result when hive meet window function --- Key: HIVE-8569 URL: https://issues.apache.org/jira/browse/HIVE-8569 Project: Hive Issue Type: Bug Components: SQL Reporter: Yi Tian how to reproduce: {quote} drop table over10k; create table over10k( t tinyint, si smallint, i int, b bigint, f float, d double, bo boolean, s string, ts timestamp, dec decimal, bin binary) row format delimited fields terminated by '|'; load data local inpath '../data/files/over10k' into table over10k; select ts,s,i, sum(i) over(partition by ts order by s) from over10k where s='ethan van buren' and ts='2013-03-01 09:11:58.703325'; {quote} the result is: {quote} 2013-03-01 09:11:58.703325 ethan van buren 65644 131222 2013-03-01 09:11:58.703325 ethan van buren 65578 131222 {quote} but the fourth field of the first line should be 65644. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HIVE-9000) LAST_VALUE Window function returns wrong results
[ https://issues.apache.org/jira/browse/HIVE-9000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan reopened HIVE-9000: LAST_VALUE Window function returns wrong results Key: HIVE-9000 URL: https://issues.apache.org/jira/browse/HIVE-9000 Project: Hive Issue Type: Bug Components: PTF-Windowing Reporter: Mark Grover Priority: Critical -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-8569) error result when hive meet window function
[ https://issues.apache.org/jira/browse/HIVE-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan resolved HIVE-8569. Resolution: Invalid Results are correct. Since you have not defined any windowing spec, you get the cumulative sum along with each row. To get a running sum (which I assume is what you want), you need to specify a window in your OVER clause. I further verified this with Postgres, which gives the same result as Hive. error result when hive meet window function --- Key: HIVE-8569 URL: https://issues.apache.org/jira/browse/HIVE-8569 Project: Hive Issue Type: Bug Components: SQL Reporter: Yi Tian -- This message was sent by Atlassian JIRA (v6.3.4#6332)
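A toy sketch of why both rows see 131222 (not Hive code): with ORDER BY and the default RANGE frame, rows that tie on the order key are peers, and each row's frame includes all of its peers, so the two tied rows share the same accumulated sum.

```python
# The two rows from the report share the same ORDER BY key (s and ts).
rows = [("ethan van buren", 65644), ("ethan van buren", 65578)]

def range_cumulative_sum(rows):
    # RANGE frame: the current row plus all "peer" rows with an equal
    # order key, so rows that tie on the key all see the same sum.
    return [sum(v for k, v in rows if k <= key) for key, _ in rows]

print(range_cumulative_sum(rows))
```

A per-row running sum would require an explicit `ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW` frame in the OVER clause.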
[jira] [Commented] (HIVE-6308) COLUMNS_V2 Metastore table not populated for tables created without an explicit column list.
[ https://issues.apache.org/jira/browse/HIVE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291325#comment-14291325 ] Yongzhi Chen commented on HIVE-6308: The test failures are not related to the change. COLUMNS_V2 Metastore table not populated for tables created without an explicit column list. Key: HIVE-6308 URL: https://issues.apache.org/jira/browse/HIVE-6308 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.10.0 Reporter: Alexander Behm Assignee: Yongzhi Chen Attachments: HIVE-6308.1.patch Consider this example table: CREATE TABLE avro_test ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe' STORED as INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' TBLPROPERTIES ( 'avro.schema.url'='file:///path/to/the/schema/test_serializer.avsc'); When I try to run an ANALYZE TABLE for computing column stats on any of the columns, then I get: org.apache.hadoop.hive.ql.metadata.HiveException: NoSuchObjectException(message:Column o_orderpriority for which stats gathering is requested doesn't exist.) 
at org.apache.hadoop.hive.ql.metadata.Hive.updateTableColumnStatistics(Hive.java:2280) at org.apache.hadoop.hive.ql.exec.ColumnStatsTask.persistTableStats(ColumnStatsTask.java:331) at org.apache.hadoop.hive.ql.exec.ColumnStatsTask.execute(ColumnStatsTask.java:343) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:138) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:66) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1383) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1169) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:982) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:208) The root cause appears to be that the COLUMNS_V2 table in the Metastore isn't populated properly during the table creation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9450) [Parquet] Check all data types work for Parquet in Group By operator
[ https://issues.apache.org/jira/browse/HIVE-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291326#comment-14291326 ] Ferdinand Xu commented on HIVE-9450: Hi [~brocknoland] and [~dongc], do we really need to change the WritableHiveCharObjectInspector.java ? See https://issues.apache.org/jira/browse/HIVE-9371 [Parquet] Check all data types work for Parquet in Group By operator Key: HIVE-9450 URL: https://issues.apache.org/jira/browse/HIVE-9450 Project: Hive Issue Type: Sub-task Reporter: Dong Chen Assignee: Dong Chen Attachments: HIVE-9450.patch, HIVE-9450.patch Check all data types work for Parquet in Group By operator. 1. Add test cases for data types. 2. Fix the ClassCastException bug for CHAR/VARCHAR used in group by for Parquet. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9428) LocalSparkJobStatus may return failed job as successful [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14291367#comment-14291367 ] Rui Li commented on HIVE-9428: -- OK, will do. LocalSparkJobStatus may return failed job as successful [Spark Branch] -- Key: HIVE-9428 URL: https://issues.apache.org/jira/browse/HIVE-9428 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Rui Li Assignee: Rui Li Priority: Minor Attachments: HIVE-9428.1-spark.patch, HIVE-9428.2-spark.patch A Future being done doesn't necessarily mean the job is successful. We should rely on SparkJobInfo to get the job status whenever it's available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9459) Concat plus date functions appears to be broken in 0.14
Nathan Lande created HIVE-9459: -- Summary: Concat plus date functions appears to be broken in 0.14 Key: HIVE-9459 URL: https://issues.apache.org/jira/browse/HIVE-9459 Project: Hive Issue Type: Bug Reporter: Nathan Lande In the example below I create year_month and month_year vars. These should each be yyyymm and mmyyyy integer strings, but it appears as if Hive is calling the first function twice, so they come back as 20142014 and 77. hive> select month(a.joined) month, year(a.joined) year, concat(cast(year(a.joined) as string),cast(month(a.joined) as string)) year_month, concat(cast(month(a.joined) as string),cast(year(a.joined) as string)) month_year from a limit 20; OK month year year_month month_year 7 2014 20142014 77 (the same row repeated for all 20 fetched rows) Time taken: 0.109 seconds, Fetched: 20 row(s) Other users appear to experience similar issues in this Stack Overflow question: http://stackoverflow.com/questions/27740866/convert-date-to-decimal-format-in-hive . I tested this in 0.13 and 0.14 and it does not appear to be an issue in 0.13. I looked around and could not find a similar issue, so hopefully this is not a duplicate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
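For reference, the values the reporter expected the two concat() expressions to produce, sketched with the row's values (plain string concatenation, not Hive code):

```python
year, month = 2014, 7  # values from the reporter's result rows

year_month = str(year) + str(month)  # what concat(year, month) should give
month_year = str(month) + str(year)  # what concat(month, year) should give

print(year_month, month_year)
```

Contrast with the reported buggy output, where year_month came back as 20142014 and month_year as 77, as if the first argument were evaluated twice.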
[jira] [Updated] (HIVE-9459) Concat plus date functions appear to be broken in 0.14
[ https://issues.apache.org/jira/browse/HIVE-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Lande updated HIVE-9459: --- Summary: Concat plus date functions appear to be broken in 0.14 (was: Concat plus date functions appears to be broken in 0.14) Concat plus date functions appear to be broken in 0.14 -- Key: HIVE-9459 URL: https://issues.apache.org/jira/browse/HIVE-9459 Project: Hive Issue Type: Bug Reporter: Nathan Lande -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9423) HiveServer2: Implement some admission control policy
[ https://issues.apache.org/jira/browse/HIVE-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-9423: --- Summary: HiveServer2: Implement some admission control policy (was: HiveServer2: handle max handler thread exhaustion gracefully) HiveServer2: Implement some admission control policy Key: HIVE-9423 URL: https://issues.apache.org/jira/browse/HIVE-9423 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.12.0, 0.13.0, 0.14.0, 0.15.0 Reporter: Vaibhav Gumashta It has been reported that when the number of client connections is greater than {{hive.server2.thrift.max.worker.threads}}, HiveServer2 stops accepting new connections and ends up having to be restarted. This should be handled more gracefully by the server and the JDBC driver, so that the end user becomes aware of the problem and can take appropriate steps (either close existing connections, bump up the config value, or use multiple server instances with dynamic service discovery enabled). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9423) HiveServer2: Implement some admission control policy
[ https://issues.apache.org/jira/browse/HIVE-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-9423: --- Description: An example of where it is needed: it has been reported that when the number of client connections is greater than {{hive.server2.thrift.max.worker.threads}}, HiveServer2 stops accepting new connections and ends up having to be restarted. This should be handled more gracefully by the server and the JDBC driver, so that the end user becomes aware of the problem and can take appropriate steps (either close existing connections, bump up the config value, or use multiple server instances with dynamic service discovery enabled). Similarly, we should also review the background thread pool so that it has well-defined behavior when the pool is exhausted. Ideally, implementing some form of general admission control would be a better solution, so that we do not accept new work unless sufficient resources are available, and degrade gracefully under overload. was: It has been reported that when the number of client connections is greater than {{hive.server2.thrift.max.worker.threads}}, HiveServer2 stops accepting new connections and ends up having to be restarted. This should be handled more gracefully by the server and the JDBC driver, so that the end user becomes aware of the problem and can take appropriate steps (either close existing connections, bump up the config value, or use multiple server instances with dynamic service discovery enabled).
HiveServer2: Implement some admission control policy Key: HIVE-9423 URL: https://issues.apache.org/jira/browse/HIVE-9423 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.12.0, 0.13.0, 0.14.0, 0.15.0 Reporter: Vaibhav Gumashta An example of where it is needed: it has been reported that when the number of client connections is greater than {{hive.server2.thrift.max.worker.threads}}, HiveServer2 stops accepting new connections and ends up having to be restarted. This should be handled more gracefully by the server and the JDBC driver, so that the end user becomes aware of the problem and can take appropriate steps (either close existing connections, bump up the config value, or use multiple server instances with dynamic service discovery enabled). Similarly, we should also review the background thread pool so that it has well-defined behavior when the pool is exhausted. Ideally, implementing some form of general admission control would be a better solution, so that we do not accept new work unless sufficient resources are available, and degrade gracefully under overload. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9423) HiveServer2: Implement some admission control mechanism for graceful degradation when resources are exhausted
[ https://issues.apache.org/jira/browse/HIVE-9423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-9423: --- Summary: HiveServer2: Implement some admission control mechanism for graceful degradation when resources are exhausted (was: HiveServer2: Implement some admission control policy) HiveServer2: Implement some admission control mechanism for graceful degradation when resources are exhausted - Key: HIVE-9423 URL: https://issues.apache.org/jira/browse/HIVE-9423 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.12.0, 0.13.0, 0.14.0, 0.15.0 Reporter: Vaibhav Gumashta An example of where it is needed: it has been reported that when the number of client connections is greater than {{hive.server2.thrift.max.worker.threads}}, HiveServer2 stops accepting new connections and ends up having to be restarted. This should be handled more gracefully by the server and the JDBC driver, so that the end user becomes aware of the problem and can take appropriate steps (either close existing connections, bump up the config value, or use multiple server instances with dynamic service discovery enabled). Similarly, we should also review the background thread pool so that it has well-defined behavior when the pool is exhausted. Ideally, implementing some form of general admission control would be a better solution, so that we do not accept new work unless sufficient resources are available, and degrade gracefully under overload. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
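The kind of admission control proposed here can be sketched with a counting semaphore that refuses new work outright when the server is saturated, instead of letting connections hang until a restart. This is an illustrative Python sketch under invented names ({{AdmissionController}}, {{try_admit}}), not HiveServer2's actual implementation:

```python
import threading

class AdmissionController:
    """Cap concurrent sessions and refuse new ones explicitly when full
    (hypothetical sketch, not HiveServer2 code)."""

    def __init__(self, max_sessions):
        self._slots = threading.Semaphore(max_sessions)

    def try_admit(self):
        # Non-blocking acquire: returns False immediately when all slots
        # are taken, so the server can tell the client to back off or
        # retry instead of silently ceasing to accept connections.
        return self._slots.acquire(blocking=False)

    def release(self):
        self._slots.release()

ctl = AdmissionController(max_sessions=2)
print(ctl.try_admit(), ctl.try_admit(), ctl.try_admit())  # True True False
ctl.release()  # a session ends, freeing one slot
print(ctl.try_admit())  # True
```

The same pattern applies to the background thread pool: a bounded pool with an explicit rejection path gives well-defined behavior on exhaustion instead of unbounded blocking.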
[jira] [Updated] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
[ https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-9327: -- Attachment: HIVE-9327.08.patch CBO (Calcite Return Path): Removing Row Resolvers from ParseContext --- Key: HIVE-9327 URL: https://issues.apache.org/jira/browse/HIVE-9327 Project: Hive Issue Type: Sub-task Components: CBO Reporter: Jesus Camacho Rodriguez Assignee: Jesus Camacho Rodriguez Fix For: 0.15.0 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, HIVE-9327.03.patch, HIVE-9327.04.patch, HIVE-9327.05.patch, HIVE-9327.06.patch, HIVE-9327.07.patch, HIVE-9327.08.patch, HIVE-9327.patch CLEAR LIBRARY CACHE ParseContext includes a map of Operator to RowResolver (OpParseContext). It would be ideal to remove this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9454) Test failures due to new Calcite version
[ https://issues.apache.org/jira/browse/HIVE-9454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291366#comment-14291366 ] Navis commented on HIVE-9454: - [~julianhyde] Can we get any justification for Calcite using Hive as a test bed? This issue effectively stopped the whole Hive dev process for three days (Build #2486 ~ #2502, not counting the time for re-testing all of them), and there has been no confirmation that this will not happen again. Test failures due to new Calcite version Key: HIVE-9454 URL: https://issues.apache.org/jira/browse/HIVE-9454 Project: Hive Issue Type: Bug Reporter: Brock Noland Attachments: HIVE-9454.1.patch A bunch of failures have started appearing in patches where they seem unrelated. I am thinking we've picked up a new version of Calcite. E.g.: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2488/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_auto_join12/
{noformat}
Running: diff -a /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../itests/qtest/target/qfile-results/clientpositive/auto_join12.q.out /home/hiveptest/54.147.202.89-hiveptest-1/apache-svn-trunk-source/itests/qtest/../../ql/src/test/results/clientpositive/auto_join12.q.out
32c32
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
35c35
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
39c39
< $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
---
> $hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:$hdt$_0:src
54c54
< $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:src
---
> $hdt$_0:$hdt$_0:$hdt$_1:$hdt$_1:$hdt$_1:$hdt$_1:src
{noformat}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9327) CBO (Calcite Return Path): Removing Row Resolvers from ParseContext
[ https://issues.apache.org/jira/browse/HIVE-9327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291401#comment-14291401 ] Hive QA commented on HIVE-9327: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12694453/HIVE-9327.08.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 7365 tests executed *Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3_map
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}
Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2518/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2518/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2518/ Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}
This message is automatically generated. ATTACHMENT ID: 12694453 - PreCommit-HIVE-TRUNK-Build CBO (Calcite Return Path): Removing Row Resolvers from ParseContext --- Key: HIVE-9327 URL: https://issues.apache.org/jira/browse/HIVE-9327 Project: Hive Issue Type: Sub-task Components: CBO Reporter: Jesus Camacho Rodriguez Assignee: Jesus Camacho Rodriguez Fix For: 0.15.0 Attachments: HIVE-9327.01.patch, HIVE-9327.02.patch, HIVE-9327.03.patch, HIVE-9327.04.patch, HIVE-9327.05.patch, HIVE-9327.06.patch, HIVE-9327.07.patch, HIVE-9327.08.patch, HIVE-9327.patch CLEAR LIBRARY CACHE ParseContext includes a map of Operator to RowResolver (OpParseContext). It would be ideal to remove this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9302) Beeline add jar local to client
[ https://issues.apache.org/jira/browse/HIVE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-9302: --- Attachment: HIVE-9302.1.patch Beeline add jar local to client --- Key: HIVE-9302 URL: https://issues.apache.org/jira/browse/HIVE-9302 Project: Hive Issue Type: New Feature Reporter: Brock Noland Assignee: Ferdinand Xu Attachments: HIVE-9302.1.patch, HIVE-9302.patch, mysql-connector-java-bin.jar, postgresql-9.3.jdbc3.jar At present if a beeline user uses {{add jar}} the path they give is actually on the HS2 server. It'd be great to allow beeline users to add local jars as well. It might be useful to do this in the jdbc driver itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 29961: HIVE-9302 Beeline add jar local to client
On Jan. 20, 2015, 4:50 p.m., Brock Noland wrote: Hi Ferdinand! What license are the drivers under? We'll have to make sure they both fit under: http://www.apache.org/legal/resolved.html As an alternative, I wonder if we can create some dummy class which is used to generate a Driver? Then you could pass a URL like jdbc:mockdb:// and we wouldn't have to ship a real jar? Thanks Brock for figuring it out. I added the license for PostgreSQL to the LICENSE file and removed the MySQL-driver-related code, since it is under the GPL license. - cheng --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/29961/#review68733 --- On Jan. 16, 2015, 6:17 a.m., cheng xu wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/29961/ --- (Updated Jan. 16, 2015, 6:17 a.m.) Review request for hive, Brock Noland, Dong Chen, and Sergio Pena. Repository: hive-git Description --- Support adding a local driver jar file on the Beeline side, with a unit test for it. Diffs - beeline/src/java/org/apache/hive/beeline/BeeLine.java 630ead4 beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java 065eab4 beeline/src/java/org/apache/hive/beeline/Commands.java 291adba beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java 8ba0232 beeline/src/main/resources/BeeLine.properties d038d46 beeline/src/test/org/apache/hive/beeline/TestBeelineArgParsing.java a6ee93a beeline/src/test/resources/mysql-connector-java-bin.jar PRE-CREATION beeline/src/test/resources/postgresql-9.3.jdbc3.jar PRE-CREATION Diff: https://reviews.apache.org/r/29961/diff/ Testing --- Manual testing done. Newly added test passed. Thanks, cheng xu
[jira] [Updated] (HIVE-9302) Beeline add jar local to client
[ https://issues.apache.org/jira/browse/HIVE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-9302: --- Attachment: DummyDriver-1.0-SNAPSHOT.jar Beeline add jar local to client --- Key: HIVE-9302 URL: https://issues.apache.org/jira/browse/HIVE-9302 Project: Hive Issue Type: New Feature Reporter: Brock Noland Assignee: Ferdinand Xu Attachments: DummyDriver-1.0-SNAPSHOT.jar, HIVE-9302.1.patch, HIVE-9302.patch, mysql-connector-java-bin.jar, postgresql-9.3.jdbc3.jar At present if a beeline user uses {{add jar}} the path they give is actually on the HS2 server. It'd be great to allow beeline users to add local jars as well. It might be useful to do this in the jdbc driver itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 29961: HIVE-9302 Beeline add jar local to client
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/29961/ --- (Updated Jan. 26, 2015, 2:54 a.m.) Review request for hive, Brock Noland, Dong Chen, and Sergio Pena. Changes --- Summary: 1. Add an adddrivername command for custom drivers. 2. Add a test for the newly added adddrivername command. 3. Refine logging. Repository: hive-git Description --- Support adding a local driver jar file on the Beeline side, with a unit test for it. Diffs (updated) - LICENSE c973c36 beeline/src/java/org/apache/hive/beeline/BeeLine.java 630ead4 beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java 065eab4 beeline/src/java/org/apache/hive/beeline/Commands.java 291adba beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java 8ba0232 beeline/src/main/resources/BeeLine.properties d038d46 beeline/src/test/org/apache/hive/beeline/TestBeelineArgParsing.java a6ee93a beeline/src/test/resources/DummyDriver-1.0-SNAPSHOT.jar PRE-CREATION beeline/src/test/resources/mysql-connector-java-bin.jar PRE-CREATION beeline/src/test/resources/postgresql-9.3.jdbc3.jar PRE-CREATION Diff: https://reviews.apache.org/r/29961/diff/ Testing --- Manual testing done. Newly added test passed. Thanks, cheng xu
Re: Review Request 29961: HIVE-9302 Beeline add jar local to client
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/29961/ --- (Updated Jan. 26, 2015, 2:57 a.m.) Review request for hive, Brock Noland, Dong Chen, and Sergio Pena. Changes --- Remove the MySQL driver jar file. Repository: hive-git Description --- Support adding a local driver jar file on the Beeline side, with a unit test for it. Diffs (updated) - LICENSE c973c36 beeline/src/java/org/apache/hive/beeline/BeeLine.java 630ead4 beeline/src/java/org/apache/hive/beeline/ClassNameCompleter.java 065eab4 beeline/src/java/org/apache/hive/beeline/Commands.java 291adba beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java 8ba0232 beeline/src/main/resources/BeeLine.properties d038d46 beeline/src/test/org/apache/hive/beeline/TestBeelineArgParsing.java a6ee93a beeline/src/test/resources/DummyDriver-1.0-SNAPSHOT.jar PRE-CREATION beeline/src/test/resources/postgresql-9.3.jdbc3.jar PRE-CREATION Diff: https://reviews.apache.org/r/29961/diff/ Testing --- Manual testing done. Newly added test passed. Thanks, cheng xu
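Before a client-local jar can be used, the client has to discover which classes inside it might be JDBC drivers. A rough Python illustration of that jar scan (the function name and behavior are assumptions for illustration, not Beeline's actual ClassNameCompleter logic):

```python
import zipfile

def class_names_in_jar(jar_path):
    """Return fully-qualified class names found in a local jar -- the kind
    of scan a client-side 'add jar' needs before it can pick and register
    a JDBC driver class (illustrative sketch only)."""
    with zipfile.ZipFile(jar_path) as jar:
        return sorted(
            name[: -len(".class")].replace("/", ".")
            for name in jar.namelist()
            if name.endswith(".class") and "$" not in name  # skip inner classes
        )
```

Scanning, say, postgresql-9.3.jdbc3.jar this way would surface org.postgresql.Driver among the results, which the client can then register with java.sql.DriverManager.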
[jira] [Updated] (HIVE-9302) Beeline add jar local to client
[ https://issues.apache.org/jira/browse/HIVE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-9302: --- Attachment: (was: HIVE-9302.1.patch) Beeline add jar local to client --- Key: HIVE-9302 URL: https://issues.apache.org/jira/browse/HIVE-9302 Project: Hive Issue Type: New Feature Reporter: Brock Noland Assignee: Ferdinand Xu Attachments: DummyDriver-1.0-SNAPSHOT.jar, HIVE-9302.1.patch, HIVE-9302.patch, mysql-connector-java-bin.jar, postgresql-9.3.jdbc3.jar At present if a beeline user uses {{add jar}} the path they give is actually on the HS2 server. It'd be great to allow beeline users to add local jars as well. It might be useful to do this in the jdbc driver itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9302) Beeline add jar local to client
[ https://issues.apache.org/jira/browse/HIVE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-9302: --- Attachment: HIVE-9302.1.patch Beeline add jar local to client --- Key: HIVE-9302 URL: https://issues.apache.org/jira/browse/HIVE-9302 Project: Hive Issue Type: New Feature Reporter: Brock Noland Assignee: Ferdinand Xu Attachments: DummyDriver-1.0-SNAPSHOT.jar, HIVE-9302.1.patch, HIVE-9302.patch, mysql-connector-java-bin.jar, postgresql-9.3.jdbc3.jar At present if a beeline user uses {{add jar}} the path they give is actually on the HS2 server. It'd be great to allow beeline users to add local jars as well. It might be useful to do this in the jdbc driver itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332)