[jira] [Commented] (HIVE-6144) Implement non-staged MapJoin
[ https://issues.apache.org/jira/browse/HIVE-6144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871788#comment-13871788 ] Navis commented on HIVE-6144: - I've run the CliDriver tests in the ql package and confirmed the results are unchanged. But with the option set to true, it removes most of the LocalMapredTasks from the result files. My concern is that it could mask future bugs in LocalMapredTask. Opinions? Implement non-staged MapJoin Key: HIVE-6144 URL: https://issues.apache.org/jira/browse/HIVE-6144 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6144.1.patch.txt, HIVE-6144.2.patch.txt, HIVE-6144.3.patch.txt For a map join, all data in the small aliases is hashed and stored into a temporary file by the MapRedLocalTask. But for aliases without a filter or projection, that step seems unnecessary. For example,
{noformat}
select a.* from src a join src b on a.key=b.key;
{noformat}
makes a plan like this:
{noformat}
STAGE PLANS:
  Stage: Stage-4
    Map Reduce Local Work
      Alias -> Map Local Tables:
        a
          Fetch Operator
            limit: -1
      Alias -> Map Local Operator Tree:
        a
          TableScan
            alias: a
            HashTable Sink Operator
              condition expressions:
                0 {key} {value}
                1
              handleSkewJoin: false
              keys:
                0 [Column[key]]
                1 [Column[key]]
              Position of Big Table: 1

  Stage: Stage-3
    Map Reduce
      Alias -> Map Operator Tree:
        b
          TableScan
            alias: b
            Map Join Operator
              condition map:
                   Inner Join 0 to 1
              condition expressions:
                0 {key} {value}
                1
              handleSkewJoin: false
              keys:
                0 [Column[key]]
                1 [Column[key]]
              outputColumnNames: _col0, _col1
              Position of Big Table: 1
              Select Operator
                File Output Operator
      Local Work:
        Map Reduce Local Work

  Stage: Stage-0
    Fetch Operator
{noformat}
Table src (a) is fetched and stored as-is in the MRLocalTask. With this patch, the plan can be like the one below.
{noformat}
  Stage: Stage-3
    Map Reduce
      Alias -> Map Operator Tree:
        b
          TableScan
            alias: b
            Map Join Operator
              condition map:
                   Inner Join 0 to 1
              condition expressions:
                0 {key} {value}
                1
              handleSkewJoin: false
              keys:
                0 [Column[key]]
                1 [Column[key]]
              outputColumnNames: _col0, _col1
              Position of Big Table: 1
              Select Operator
                File Output Operator
      Local Work:
        Map Reduce Local Work
          Alias -> Map Local Tables:
            a
              Fetch Operator
                limit: -1
          Alias -> Map Local Operator Tree:
            a
              TableScan
                alias: a
          Has Any Stage Alias: false

  Stage: Stage-0
    Fetch Operator
{noformat}
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
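For intuition, here is a minimal sketch (in Python, with illustrative names that are not Hive APIs) of what a map-side hash join does, and why the staging step can be skipped when the small alias needs no filter or projection: the task can build the hash table directly from the small table instead of loading one that a MapRedLocalTask dumped to a file.

```python
# Conceptual sketch of a map-side (broadcast) hash join. In the
# "non-staged" path, the small side is hashed in memory directly,
# with no temporary dump/reload of the small table beforehand.
# All names here are illustrative, not Hive internals.

def map_join(big_rows, small_rows, big_key, small_key):
    # Build the hash table from the small side in memory.
    table = {}
    for row in small_rows:
        table.setdefault(row[small_key], []).append(row)
    # Stream the big side and probe the hash table.
    for row in big_rows:
        for match in table.get(row[big_key], []):
            yield {**match, **row}

src = [{"key": 0, "value": "val_0"}, {"key": 10, "value": "val_10"}]
# Self-join on key, as in `select a.* from src a join src b on a.key=b.key`.
result = list(map_join(src, src, "key", "key"))
```

The trade-off the comment above alludes to: skipping the local task also removes its plan stages from test result files, which is why most LocalMapredTasks disappear from the expected outputs.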
[jira] [Commented] (HIVE-6170) Upgrade to the latest version of bonecp
[ https://issues.apache.org/jira/browse/HIVE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871796#comment-13871796 ] Vaibhav Gumashta commented on HIVE-6170: +1 (non-binding). cc [~ashutoshc] Upgrade to the latest version of bonecp --- Key: HIVE-6170 URL: https://issues.apache.org/jira/browse/HIVE-6170 Project: Hive Issue Type: Bug Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-6170.1.patch -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6054) HiveServer2 does not log the output of LogUtils.initHiveLog4j();
[ https://issues.apache.org/jira/browse/HIVE-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871806#comment-13871806 ] Hive QA commented on HIVE-6054: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623029/HIVE-6054.1.patch {color:green}SUCCESS:{color} +1 4925 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/915/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/915/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12623029 HiveServer2 does not log the output of LogUtils.initHiveLog4j(); Key: HIVE-6054 URL: https://issues.apache.org/jira/browse/HIVE-6054 Project: Hive Issue Type: Bug Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-6054.1.patch Inside main(), we just call LogUtils.initHiveLog4j() and do not log its output. This needs to be logged so we can see whether the user has configured log4j.properties correctly. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Re: New Hive Website
It uses Markdown, doesn't require local builds, has a staging view, and has a basic web editor. Sweet. -- Lefty On Tue, Jan 14, 2014 at 5:26 PM, Brock Noland br...@cloudera.com wrote: Thanks guys! It uses Markdown, doesn't require local builds, has a staging view, and has a basic web editor. On Jan 14, 2014 7:20 PM, Lefty Leverenz leftylever...@gmail.com wrote: Looks good. The menu improves access considerably. What makes this website easier to edit? +1 -- Lefty On Tue, Jan 14, 2014 at 1:30 PM, Carl Steinbach cwsteinb...@gmail.com wrote: +1 to switching over now. On Jan 14, 2014 12:56 PM, Brock Noland br...@cloudera.com wrote: The *staging* version of the new Hive website is available: http://hive.staging.apache.org/ Notes: 1) It's a first pass, we can add more later 2) The javadocs links won't work until we cut over as they are committed directly to the production SVN 3) The How to edit the website will need to be updated after we cut over. The guide will look similar to this: http://mrunit.apache.org/development/edit_website.html Since the new website is so much easier to edit, I think we should cut over now. Brock
[jira] [Commented] (HIVE-6189) Support top level union all statements
[ https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871884#comment-13871884 ] Hive QA commented on HIVE-6189: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623031/HIVE-6189.3.patch {color:green}SUCCESS:{color} +1 4925 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/916/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/916/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12623031 Support top level union all statements -- Key: HIVE-6189 URL: https://issues.apache.org/jira/browse/HIVE-6189 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch, HIVE-6189.3.patch I've always wondered why union all has to be in subqueries in Hive. After looking at it, the problems are: - Hive Parser: - Union happens at the wrong place: (insert ... select ... union all select ...) is parsed as ((insert select) union select). - There are many rewrite rules in the parser to force any query into the from-insert-select form. No doubt for historical reasons. - Plan generation/semantic analysis assumes a top-level TOK_QUERY, not a top-level TOK_UNION. The rewrite rules don't work when we move the UNION ALL recursion into the select statements. However, it's not hard to do that in code. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-5997) Replace SemanticException with HiveException for method signatures
[ https://issues.apache.org/jira/browse/HIVE-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871889#comment-13871889 ] Hive QA commented on HIVE-5997: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623028/HIVE-5997.2.patch.txt Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/917/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/917/console Messages: {noformat} This message was trimmed, see log for full details [INFO] Hive Integration - Test Serde [INFO] Hive Integration - QFile Tests [INFO] [INFO] [INFO] Building Hive Integration - Parent 0.13.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-it --- [INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/itests (includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-it --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-it --- [INFO] Executing tasks main: [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/itests/target/tmp [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/itests/target/warehouse [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/itests/target/tmp/conf [copy] Copying 5 files to /data/hive-ptest/working/apache-svn-trunk-source/itests/target/tmp/conf [INFO] Executed tasks [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-it --- [INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/itests/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/hive-it/0.13.0-SNAPSHOT/hive-it-0.13.0-SNAPSHOT.pom [INFO] [INFO] [INFO] Building Hive Integration - Custom Serde 0.13.0-SNAPSHOT [INFO] [INFO] [INFO] --- 
maven-clean-plugin:2.5:clean (default-clean) @ hive-it-custom-serde --- [INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde (includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ hive-it-custom-serde --- [debug] execute contextualize [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/src/main/resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-it-custom-serde --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-it-custom-serde --- [INFO] Compiling 8 source files to /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/target/classes [INFO] [INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ hive-it-custom-serde --- [debug] execute contextualize [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/src/test/resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-it-custom-serde --- [INFO] Executing tasks main: [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/target/tmp [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/target/warehouse [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/target/tmp/conf [copy] Copying 5 files to /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/target/tmp/conf [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-it-custom-serde --- [INFO] No sources to compile [INFO] [INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-it-custom-serde --- [INFO] Tests are skipped. [INFO] [INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-it-custom-serde --- [INFO] Building jar: /data/hive-ptest/working/apache-svn-trunk-source/itests/custom-serde/target/hive-it-custom-serde-0.13.0-SNAPSHOT.jar [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-it-custom-serde --- [INFO] Installing
[jira] [Created] (HIVE-6206) JDBC: HiveDriver should not throw RuntimeException when passed an invalid URL
Aliaksei Haidukou created HIVE-6206: --- Summary: JDBC: HiveDriver should not throw RuntimeException when passed an invalid URL Key: HIVE-6206 URL: https://issues.apache.org/jira/browse/HIVE-6206 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.12.0 Reporter: Aliaksei Haidukou The same issue as HIVE-4149, but for HiveDriver (org.apache.hadoop.hive.jdbc.HiveDriver), which handles the jdbc:hive:// JDBC URL. driver.acceptsURL(wrongUrl) returns false; driver.connect(wrongUrl, new Properties()) throws an SQLException, which is not correct. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
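The driver contract at issue can be sketched as follows (a hypothetical Python analogue, not the actual org.apache.hadoop.hive.jdbc.HiveDriver code): per the java.sql.Driver javadoc, connect should return null for a URL the driver does not recognize, and report genuine failures through the declared SQLException, never an unchecked exception.

```python
# Hypothetical analogue of the JDBC Driver contract for an invalid URL.
# Class and method names are illustrative only.

class SQLException(Exception):
    """Stand-in for java.sql.SQLException."""

class SketchHiveDriver:
    PREFIX = "jdbc:hive://"

    def accepts_url(self, url):
        # Analogue of Driver.acceptsURL: recognize only our subprotocol.
        return isinstance(url, str) and url.startswith(self.PREFIX)

    def connect(self, url, properties=None):
        if not self.accepts_url(url):
            # Per the JDBC contract, a driver returns null (None) for a
            # URL it does not recognize, instead of raising anything.
            return None
        # A real driver would open a connection here and raise
        # SQLException on failure; a placeholder tuple stands in.
        return ("connection", url)

driver = SketchHiveDriver()
foreign = driver.connect("jdbc:mysql://host/db")  # returns None, no exception
```

DriverManager iterates over all registered drivers and relies on this null-return behavior to find the one that matches, which is why throwing from connect on a foreign URL breaks clients.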
[jira] [Commented] (HIVE-6200) Hive custom SerDe cannot load DLL added by ADD FILE command
[ https://issues.apache.org/jira/browse/HIVE-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871954#comment-13871954 ] Hive QA commented on HIVE-6200: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623035/HIVE-6200.1.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4925 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/918/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/918/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12623035 Hive custom SerDe cannot load DLL added by ADD FILE command - Key: HIVE-6200 URL: https://issues.apache.org/jira/browse/HIVE-6200 Project: Hive Issue Type: Bug Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-6200.1.patch When a custom SerDe needs to load a DLL file added using the ADD FILE command in Hive, the load fails with an exception like java.lang.UnsatisfiedLinkError: C:\tmp\admin2_6996@headnode0_201401100431_resources\hello.dll: Access is denied. The reason is that when FileSystem creates the local copy of the file, the permission of the local file defaults to 666. A DLL file needs execute permission to be loaded successfully. A similar scenario happens when Hadoop localizes files in the distributed cache; the solution in Hadoop is to add execute permission to the file after localization. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
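The fix described above, mirroring Hadoop's distributed-cache behavior, amounts to adding the execute bit after the file is localized. A minimal sketch (paths and the helper name are illustrative, not the Hive patch itself):

```python
# Sketch: after a file is localized (copied locally with default 666
# permissions), add the execute bit so a native library (DLL/.so) can
# actually be loaded by the runtime.
import os
import stat
import tempfile

def add_execute_permission(path):
    mode = os.stat(path).st_mode
    # Add the execute bit for user, group, and other (666 -> 777).
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# Demonstrate on a temporary file created with 666-style permissions,
# standing in for the localized copy of the added file.
fd, local_copy = tempfile.mkstemp()
os.close(fd)
os.chmod(local_copy, 0o666)
add_execute_permission(local_copy)
is_executable = os.access(local_copy, os.X_OK)
os.remove(local_copy)
```

On Windows (the environment in the report) the POSIX bits map onto file ACLs through the Hadoop shims, but the principle is the same: the localized copy must be marked executable before System.loadLibrary can succeed.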
[jira] [Commented] (HIVE-6201) Print failed query for qfile tests
[ https://issues.apache.org/jira/browse/HIVE-6201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872034#comment-13872034 ] Hive QA commented on HIVE-6201: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623041/HIVE-6201.1.patch.txt {color:green}SUCCESS:{color} +1 4925 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/919/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/919/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12623041 Print failed query for qfile tests -- Key: HIVE-6201 URL: https://issues.apache.org/jira/browse/HIVE-6201 Project: Hive Issue Type: Improvement Components: Tests Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-6201.1.patch.txt Looking for the cause of the notorious test failure of 'infer_bucket_sort_bucketed_table', I found I could not even tell what query caused it. Now it shows the last query, which failed (auto_join0.q, in which I've replaced sum with sums):
{noformat}
Running org.apache.hadoop.hive.cli.TestCliDriver
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.496 sec <<< FAILURE! - in org.apache.hadoop.hive.cli.TestCliDriver
testCliDriver_auto_join0(org.apache.hadoop.hive.cli.TestCliDriver)  Time elapsed: 0.38 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Unexpected exception running explain select sums(hash(a.k1,a.v1,a.k2, a.v2)) from ( SELECT src1.key as k1, src1.value as v1, src2.key as k2, src2.value as v2 FROM (SELECT * FROM src WHERE src.key < 10) src1 JOIN (SELECT * FROM src WHERE src.key < 10) src2 SORT BY k1, v1, k2, v2 ) a
See ./ql/target/tmp/log/hive.log or ./itests/qtest/target/tmp/log/hive.log, or check ./ql/target/surefire-reports or ./itests/qtest/target/surefire-reports/ for specific test cases logs.
	at junit.framework.Assert.fail(Assert.java:50)
	at org.apache.hadoop.hive.ql.QTestUtil.failed(QTestUtil.java:1737)
	at org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:143)
	at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0(TestCliDriver.java:117)
{noformat}
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
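The improvement amounts to carrying the failing query into the assertion message, so the log shows what failed rather than only that something failed. A hypothetical harness (not QTestUtil itself) sketching the idea:

```python
# Sketch: a qfile-style test runner that embeds the failing query in
# the failure message. Names are illustrative, not Hive's test code.

def run_queries(queries, execute):
    for query in queries:
        try:
            execute(query)
        except Exception as exc:
            # Wrap the underlying error with the offending query text.
            raise AssertionError(
                f"Unexpected exception running {query!r}: {exc}") from exc

def fake_execute(q):
    # Stand-in for the Hive driver: sums() is an undefined UDF here.
    if "sums(" in q:
        raise ValueError("Invalid function 'sums'")

try:
    run_queries(
        ["select sum(key) from src", "select sums(key) from src"],
        fake_execute,
    )
    failed_message = None
except AssertionError as e:
    failed_message = str(e)
```

With this, the JUnit report shown above can name auto_join0.q's bad query directly instead of pointing the reader at hive.log.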
[jira] [Updated] (HIVE-6206) JDBC: HiveDriver should not throw RuntimeException when passed an invalid URL
[ https://issues.apache.org/jira/browse/HIVE-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aliaksei Haidukou updated HIVE-6206: Description: The same issue as HIVE-4194, but for HiveDriver (org.apache.hadoop.hive.jdbc.HiveDriver), which handles the jdbc:hive:// JDBC URL. driver.acceptsURL(wrongUrl) returns false; driver.connect(wrongUrl, new Properties()) throws an SQLException, which is not correct. was: The same issue as HIVE-4149, but for HiveDriver (org.apache.hadoop.hive.jdbc.HiveDriver), which handles the jdbc:hive:// JDBC URL. driver.acceptsURL(wrongUrl) returns false; driver.connect(wrongUrl, new Properties()) throws an SQLException, which is not correct. JDBC: HiveDriver should not throw RuntimeException when passed an invalid URL - Key: HIVE-6206 URL: https://issues.apache.org/jira/browse/HIVE-6206 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.12.0 Reporter: Aliaksei Haidukou The same issue as HIVE-4194, but for HiveDriver (org.apache.hadoop.hive.jdbc.HiveDriver), which handles the jdbc:hive:// JDBC URL. driver.acceptsURL(wrongUrl) returns false; driver.connect(wrongUrl, new Properties()) throws an SQLException, which is not correct. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6122) Implement show grant on resource
[ https://issues.apache.org/jira/browse/HIVE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872143#comment-13872143 ] Hive QA commented on HIVE-6122: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623061/HIVE-6122.2.patch.txt {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 4926 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_partition_authorization org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_6 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_4 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_fail_5 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_authorization_part {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/920/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/920/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12623061 Implement show grant on resource -- Key: HIVE-6122 URL: https://issues.apache.org/jira/browse/HIVE-6122 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6122.1.patch.txt, HIVE-6122.2.patch.txt Currently, Hive shows the privileges owned by a principal. The reverse API is also needed: one that shows all principals for a resource.
{noformat} show grant user hive_test_user on database default; show grant user hive_test_user on table dummy; show grant user hive_test_user on all; {noformat} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6170) Upgrade to the latest version of bonecp
[ https://issues.apache.org/jira/browse/HIVE-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872200#comment-13872200 ] Ashutosh Chauhan commented on HIVE-6170: +1 Upgrade to the latest version of bonecp --- Key: HIVE-6170 URL: https://issues.apache.org/jira/browse/HIVE-6170 Project: Hive Issue Type: Bug Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Attachments: HIVE-6170.1.patch -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-4144) Add select database() command to show the current database
[ https://issues.apache.org/jira/browse/HIVE-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872240#comment-13872240 ] Hive QA commented on HIVE-4144: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623069/HIVE-4144.10.patch.txt {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 4927 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket5 org.apache.hadoop.hive.ql.parse.TestParseNegative.testParseNegative_invalid_select {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/922/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/922/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12623069 Add select database() command to show the current database Key: HIVE-4144 URL: https://issues.apache.org/jira/browse/HIVE-4144 Project: Hive Issue Type: Bug Components: SQL Reporter: Mark Grover Assignee: Navis Attachments: D9597.5.patch, HIVE-4144.10.patch.txt, HIVE-4144.6.patch.txt, HIVE-4144.7.patch.txt, HIVE-4144.8.patch.txt, HIVE-4144.9.patch.txt, HIVE-4144.D9597.1.patch, HIVE-4144.D9597.2.patch, HIVE-4144.D9597.3.patch, HIVE-4144.D9597.4.patch A recent hive-user mailing list conversation asked about having a command to show the current database. 
http://mail-archives.apache.org/mod_mbox/hive-user/201303.mbox/%3CCAMGr+0i+CRY69m3id=DxthmUCWLf0NxpKMCtROb=uauh2va...@mail.gmail.com%3E MySQL seems to have a command to do so: {code} select database(); {code} http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_database We should look into having something similar in Hive. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Hive-trunk-hadoop2 - Build # 678 - Still Failing
Changes for Build #640 Changes for Build #641 [navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis reviewed by Thejas M Nair) [navis] HIVE-4257 : java.sql.SQLNonTransientConnectionException on JDBCStatsAggregator (Teddy Choi via Navis, reviewed by Ashutosh) Changes for Build #642 Changes for Build #643 [ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package from Microsoft to Hive (Hideaki Kumura via Eric Hanson) Changes for Build #644 [cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming conventions (Sergey Shelukhin via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II (Navis via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis via cws) [jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure vectorization produces same results as non-vectorized execution (Sergey Shelukhin via Jitendra Pandey) Changes for Build #645 Changes for Build #646 [ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson) Changes for Build #647 [thejas] HIVE-5795 : Hive should be able to skip header and footer rows when reading data file for a table (Shuaishuai Nie via Thejas Nair) Changes for Build #648 [thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by Brock Noland) Changes for Build #649 Changes for Build #650 Changes for Build #651 [brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim Kulkarni via Brock) Changes for Build #652 Changes for Build #653 [gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by Thejas M Nair) Changes for Build #654 [cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam via cws) Changes for Build #655 [brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler (Swarnim Kulkarni via Brock Noland) [brock] HIVE-5946 - DDL authorization task factory should be better tested (Brock reviewed by 
Thejas) Changes for Build #656 Changes for Build #657 [gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther Hagleitner) Changes for Build #658 Changes for Build #659 [ehans] HIVE-6051: Create DecimalColumnVector and a representative VectorExpression for decimal (Eric Hanson) Changes for Build #660 [thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url should be about to load serde schema from file system beside HDFS (Shuaishuai Nie via Thejas Nair) [thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via Thejas Nair) Changes for Build #661 Changes for Build #662 [gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan) Changes for Build #663 [hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas Nair) Changes for Build #664 [xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it when file path have spaces Changes for Build #665 Changes for Build #666 Changes for Build #667 [brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock reviewed by Xuefu and Sushanth) Changes for Build #668 [hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth Sowmyan via Ashutosh Chauhan) [hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin via Ashutosh Chauhan) Changes for Build #669 Changes for Build #670 [ehans] HIVE-6067: Implement vectorized decimal comparison filters (Eric Hanson) Changes for Build #671 [hashutosh] HIVE-6185 : DDLTask is inconsistent in creating a table and adding a partition when dealing with location (Xuefu Zhang via Ashutosh Chauhan) [hashutosh] HIVE-5032 : Enable hive creating external table at the root directory of DFS (Shuaishuai Nie via Ashutosh Chauhan) Changes for Build #672 [navis] HIVE-6177 : Fix keyword KW_REANME which 
was intended to be KW_RENAME (Navis reviewed by Brock Noland) [jitendra] HIVE-6156. Implement vectorized reader for Date datatype for ORC format. (jitendra) Changes for Build #673 [hashutosh] HIVE-4216 : TestHBaseMinimrCliDriver throws weird error with HBase 0.94.5 and Hadoop 23 and test is stuck infinitely (Jason Dere via Brock Noland) Changes for Build #674 [hashutosh] HIVE-5515 : Writing to an HBase table throws IllegalArgumentException, failing job submission (Viraj Bhat via Ashutosh Chauhan Sushanth Sowmyan) Changes for Build #675 Changes for Build #676 [thejas] HIVE-5941 : SQL std auth - support 'show roles' (Navis via Thejas Nair) Changes for Build #677 [navis] HIVE-6161 : Fix TCLIService duplicate thrift definition for TColumn (Jay Bennett via Navis) Changes for Build #678 No tests ran. The Apache Jenkins build system has built Hive-trunk-hadoop2 (build #678) Status:
[jira] [Commented] (HIVE-6205) alter table partition column throws NPE in authorization
[ https://issues.apache.org/jira/browse/HIVE-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872241#comment-13872241 ] Hive QA commented on HIVE-6205: --- {color:red}Overall{color}: -1 no tests executed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623084/HIVE-6205.1.patch.txt Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/923/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/923/console Messages: {noformat} This message was trimmed, see log for full details main: [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/contrib/target/tmp [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/contrib/target/warehouse [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/contrib/target/tmp/conf [copy] Copying 5 files to /data/hive-ptest/working/apache-svn-trunk-source/contrib/target/tmp/conf [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-contrib --- [INFO] Compiling 2 source files to /data/hive-ptest/working/apache-svn-trunk-source/contrib/target/test-classes [WARNING] Note: /data/hive-ptest/working/apache-svn-trunk-source/contrib/src/test/org/apache/hadoop/hive/contrib/serde2/TestRegexSerDe.java uses or overrides a deprecated API. [WARNING] Note: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hive-contrib --- [INFO] Tests are skipped. 
[INFO] [INFO] --- maven-jar-plugin:2.2:jar (default-jar) @ hive-contrib --- [INFO] Building jar: /data/hive-ptest/working/apache-svn-trunk-source/contrib/target/hive-contrib-0.13.0-SNAPSHOT.jar [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ hive-contrib --- [INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/contrib/target/hive-contrib-0.13.0-SNAPSHOT.jar to /data/hive-ptest/working/maven/org/apache/hive/hive-contrib/0.13.0-SNAPSHOT/hive-contrib-0.13.0-SNAPSHOT.jar [INFO] Installing /data/hive-ptest/working/apache-svn-trunk-source/contrib/pom.xml to /data/hive-ptest/working/maven/org/apache/hive/hive-contrib/0.13.0-SNAPSHOT/hive-contrib-0.13.0-SNAPSHOT.pom [INFO] [INFO] [INFO] Building Hive HBase Handler 0.13.0-SNAPSHOT [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-hbase-handler --- [INFO] Deleting /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler (includes = [datanucleus.log, derby.log], excludes = []) [INFO] [INFO] --- maven-resources-plugin:2.5:resources (default-resources) @ hive-hbase-handler --- [debug] execute contextualize [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/src/main/resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-hbase-handler --- [INFO] Executing tasks main: [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-hbase-handler --- [INFO] Compiling 18 source files to /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/target/classes [WARNING] Note: Some input files use or override a deprecated API. [WARNING] Note: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.5:testResources (default-testResources) @ hive-hbase-handler --- [debug] execute contextualize [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] skip non existing resourceDirectory /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/src/test/resources [INFO] [INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-hbase-handler --- [INFO] Executing tasks main: [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/target/tmp [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/target/warehouse [mkdir] Created dir: /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/target/tmp/conf [copy] Copying 5 files to /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/target/tmp/conf [INFO] Executed tasks [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-hbase-handler --- [INFO] Compiling 4 source files to /data/hive-ptest/working/apache-svn-trunk-source/hbase-handler/target/test-classes [WARNING] Note: Some input files use or override a deprecated API. [WARNING] Note: Recompile with -Xlint:deprecation for details. [INFO] [INFO] ---
Hive-trunk-h0.21 - Build # 2578 - Still Failing
Changes for Build #2539 Changes for Build #2540 [navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis reviewed by Thejas M Nair) Changes for Build #2541 Changes for Build #2542 [ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package from Microsoft to Hive (Hideaki Kumura via Eric Hanson) Changes for Build #2543 [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II (Navis via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis via cws) [jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure vectorization produces same results as non-vectorized execution (Sergey Shelukhin via Jitendra Pandey) Changes for Build #2544 [cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming conventions (Sergey Shelukhin via cws) Changes for Build #2545 Changes for Build #2546 [ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson) Changes for Build #2547 [thejas] HIVE-5795 : Hive should be able to skip header and footer rows when reading data file for a table (Shuaishuai Nie via Thejas Nair) Changes for Build #2548 [thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by Brock Noland) Changes for Build #2549 Changes for Build #2550 Changes for Build #2551 [brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim Kulkarni via Brock) Changes for Build #2552 Changes for Build #2553 [gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by Thejas M Nair) Changes for Build #2554 [cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam via cws) Changes for Build #2555 [brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler (Swarnim Kulkarni via Brock Noland) [brock] HIVE-5946 - DDL authorization task factory should be better tested (Brock reviewed by Thejas) Changes for Build #2556 [gunther] HIVE-6105: LongWritable.compareTo needs shimming 
(Navis vis Gunther Hagleitner) Changes for Build #2557 Changes for Build #2558 [ehans] HIVE-6051: Create DecimalColumnVector and a representative VectorExpression for decimal (Eric Hanson) Changes for Build #2559 [thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url should be about to load serde schema from file system beside HDFS (Shuaishuai Nie via Thejas Nair) [thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via Thejas Nair) Changes for Build #2560 Changes for Build #2561 [gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan) Changes for Build #2562 [hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas Nair) Changes for Build #2563 Changes for Build #2564 [xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it when file path have spaces Changes for Build #2565 Changes for Build #2566 Changes for Build #2567 [brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock reviewed by Xuefu and Sushanth) Changes for Build #2568 [hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth Sowmyan via Ashutosh Chauhan) [hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin via Ashutosh Chauhan) Changes for Build #2569 Changes for Build #2570 [ehans] HIVE-6067: Implement vectorized decimal comparison filters (Eric Hanson) Changes for Build #2571 [hashutosh] HIVE-6185 : DDLTask is inconsistent in creating a table and adding a partition when dealing with location (Xuefu Zhang via Ashutosh Chauhan) [hashutosh] HIVE-5032 : Enable hive creating external table at the root directory of DFS (Shuaishuai Nie via Ashutosh Chauhan) Changes for Build #2572 [jitendra] HIVE-6156. Implement vectorized reader for Date datatype for ORC format. 
(jitendra) Changes for Build #2573 [navis] HIVE-6177 : Fix keyword KW_REANME which was intended to be KW_RENAME (Navis reviewed by Brock Noland) Changes for Build #2574 [hashutosh] HIVE-5515 : Writing to an HBase table throws IllegalArgumentException, failing job submission (Viraj Bhat via Ashutosh Chauhan Sushanth Sowmyan) [hashutosh] HIVE-4216 : TestHBaseMinimrCliDriver throws weird error with HBase 0.94.5 and Hadoop 23 and test is stuck infinitely (Jason Dere via Brock Noland) Changes for Build #2575 Changes for Build #2576 [thejas] HIVE-5941 : SQL std auth - support 'show roles' (Navis via Thejas Nair) Changes for Build #2577 [navis] HIVE-6161 : Fix TCLIService duplicate thrift definition for TColumn (Jay Bennett via Navis) Changes for Build #2578 No tests ran. The Apache Jenkins build system has built Hive-trunk-h0.21 (build #2578) Status: Still Failing Check console output at
Re: Review Request 16899: Path refactor patch
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16899/#review31879 --- Looks good to me. Two comments: 1. Could we change the variable name and method name from ...URI to ...Path or ...Dir to reflect the meaning? 2. I saw a few places that use Path.toUri().toString() instead of Path.toString(). Any catch here? - Xuefu Zhang On Jan. 15, 2014, 5:45 a.m., Ashutosh Chauhan wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16899/ --- (Updated Jan. 15, 2014, 5:45 a.m.) Review request for hive. Bugs: HIVE-6197 https://issues.apache.org/jira/browse/HIVE-6197 Repository: hive Description --- Refactor patch. Diffs - trunk/ql/src/java/org/apache/hadoop/hive/ql/Context.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HashTableLoader.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/index/bitmap/BitmapIndexHandler.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MapJoinResolver.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredLocalWork.java 
1558249 trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestPlan.java 1558249 trunk/ql/src/test/org/apache/hadoop/hive/ql/io/TestHiveBinarySearchRecordReader.java 1558249 trunk/ql/src/test/org/apache/hadoop/hive/ql/io/TestSymlinkTextInputFormat.java 1558249 Diff: https://reviews.apache.org/r/16899/diff/ Testing --- No new functionality. Regression suite suffices. Thanks, Ashutosh Chauhan
[jira] [Commented] (HIVE-6197) Use paths consistently - VI
[ https://issues.apache.org/jira/browse/HIVE-6197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872278#comment-13872278 ] Xuefu Zhang commented on HIVE-6197: --- Looks good. Two minor comments on the review board. Use paths consistently - VI --- Key: HIVE-6197 URL: https://issues.apache.org/jira/browse/HIVE-6197 Project: Hive Issue Type: Task Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-6197.2.patch, HIVE-6197.patch Next in series. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Re: Review Request 16899: Path refactor patch
On Jan. 15, 2014, 5:06 p.m., Xuefu Zhang wrote: Looks good to me. Two comments: 1. Could we change the variable name and method name from ...URI to ...Path or ...Dir to reflect the meaning? 2. I saw a few places that use Path.toUri().toString() instead of Path.toString(). Any catch here? 1. Yup, will rename methods and variables. 2. Not really. AFAIK, there is no catch. Regardless, at all of those places we should also be using Path (no strings anywhere), so those will be gone in subsequent patches as well. - Ashutosh --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16899/#review31879 --- On Jan. 15, 2014, 5:45 a.m., Ashutosh Chauhan wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16899/ --- (Updated Jan. 15, 2014, 5:45 a.m.) Review request for hive. Bugs: HIVE-6197 https://issues.apache.org/jira/browse/HIVE-6197 Repository: hive Description --- Refactor patch. Diffs - trunk/ql/src/java/org/apache/hadoop/hive/ql/Context.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HashTableLoader.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/DagUtils.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/index/bitmap/BitmapIndexHandler.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java 1558249 
trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MapJoinResolver.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java 1558249 trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredLocalWork.java 1558249 trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestPlan.java 1558249 trunk/ql/src/test/org/apache/hadoop/hive/ql/io/TestHiveBinarySearchRecordReader.java 1558249 trunk/ql/src/test/org/apache/hadoop/hive/ql/io/TestSymlinkTextInputFormat.java 1558249 Diff: https://reviews.apache.org/r/16899/diff/ Testing --- No new functionality. Regression suite suffices. Thanks, Ashutosh Chauhan
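The Path.toUri().toString() vs Path.toString() question in the review above comes down to whether special characters are percent-encoded in the URI rendering. A generic illustration of that distinction using Python's urllib (not Hadoop's Path class; the path below is hypothetical):

```python
from urllib.parse import quote, unquote

# Hypothetical path with an embedded space (not taken from the patch).
raw = "/user/hive/ware house/t1"

# A URI rendering percent-encodes the space; the plain path form keeps it.
encoded = quote(raw)
print(encoded)  # /user/hive/ware%20house/t1

# Decoding the URI form recovers the original path, so the two renderings
# round-trip cleanly -- consistent with "no catch" as long as one form is
# used consistently throughout the code.
assert unquote(encoded) == raw
```

This is only a sketch of the general URI-vs-path distinction; Hadoop's own Path class has its own escaping rules.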
[jira] [Updated] (HIVE-6196) Incorrect package name for a few tests.
[ https://issues.apache.org/jira/browse/HIVE-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6196: --- Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) Committed to trunk. Incorrect package name for a few tests. - Key: HIVE-6196 URL: https://issues.apache.org/jira/browse/HIVE-6196 Project: Hive Issue Type: Test Components: Tests, UDF Affects Versions: 0.13.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Fix For: 0.13.0 Attachments: HIVE-6196.patch These are tests that were moved from one dir to another recently. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-2867) Timestamp is defined to be timezoneless but timestamps appear to be processed in the current timezone
[ https://issues.apache.org/jira/browse/HIVE-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872291#comment-13872291 ] Patrick Surry commented on HIVE-2867: - I think I ran into this same problem in Hive 0.10. I have an epoch-seconds value of 1389802875, which corresponds to 2014-01-15 11:21:15 in my local timezone (America/Montreal). If I try to convert directly as millis via from_utc_timestamp(1389802875000, 'America/Los_Angeles') I get the wrong answer 2014-01-15 03:21:15. My workaround is from_utc_timestamp(to_utc_timestamp(from_unixtime(1389802875), 'America/Montreal'), 'America/Los_Angeles'), which gives the correct 2014-01-15 08:21:15. Timestamp is defined to be timezoneless but timestamps appear to be processed in the current timezone --- Key: HIVE-2867 URL: https://issues.apache.org/jira/browse/HIVE-2867 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.8.1 Reporter: Michael Ubell HIVE-2272 says: Timestamps are interpreted to be timezoneless and stored as an offset from the UNIX epoch. Convenience UDFs for conversion to and from timezones are provided (to_utc_timestamp, from_utc_timestamp). The following shows that the timezone is used. The epoch should display as 1970-01-01 00:00:00. hive> select cast(0 as timestamp) from alltypes limit 1; OK 1969-12-31 16:00:00 hive> select to_utc_timestamp(cast(0 as timestamp), 'PST') from alltypes limit 1; OK 1970-01-01 00:00:00 hive> select unix_timestamp(cast('1970-01-01 00:00:00' as timestamp)) from alltypes limit 1; OK 28800 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
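The offsets in the comment above can be checked with ordinary epoch arithmetic. A small sketch, assuming fixed UTC offsets for that January date (Montreal at UTC-5, Los Angeles at UTC-8; DST rules are ignored):

```python
from datetime import datetime, timezone, timedelta

epoch_seconds = 1389802875  # the value from the comment above

# Render the same instant in UTC and in the two zones discussed.
utc = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
montreal = utc.astimezone(timezone(timedelta(hours=-5)))  # EST
la = utc.astimezone(timezone(timedelta(hours=-8)))        # PST

# Matches the values reported: 11:21:15 in Montreal, 08:21:15 in LA.
assert montreal.strftime("%Y-%m-%d %H:%M:%S") == "2014-01-15 11:21:15"
assert la.strftime("%Y-%m-%d %H:%M:%S") == "2014-01-15 08:21:15"
```

The "wrong answer" 03:21:15 is what you get when the already-local 11:21:15 rendering is treated as if it were UTC and shifted another 8 hours.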
[jira] [Commented] (HIVE-6174) Beeline set variable doesn't show the value of the variable as Hive CLI
[ https://issues.apache.org/jira/browse/HIVE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872297#comment-13872297 ] Xuefu Zhang commented on HIVE-6174: --- Patch committed to trunk. Thanks to Prasad for the review. Beeline set variable doesn't show the value of the variable as Hive CLI Key: HIVE-6174 URL: https://issues.apache.org/jira/browse/HIVE-6174 Project: Hive Issue Type: Bug Components: CLI Affects Versions: 0.10.0, 0.11.0, 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-5174.3.patch, HIVE-6174.2.patch, HIVE-6174.patch Currently it displays nothing. {code} 0: jdbc:hive2://> set env:TERM; 0: jdbc:hive2://> {code} In contrast, Hive CLI displays the value of the variable. {code} hive> set env:TERM; env:TERM=xterm {code} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6174) Beeline set variable doesn't show the value of the variable as Hive CLI
[ https://issues.apache.org/jira/browse/HIVE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-6174: -- Resolution: Fixed Status: Resolved (was: Patch Available) Beeline set variable doesn't show the value of the variable as Hive CLI Key: HIVE-6174 URL: https://issues.apache.org/jira/browse/HIVE-6174 Project: Hive Issue Type: Bug Components: CLI Affects Versions: 0.10.0, 0.11.0, 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-5174.3.patch, HIVE-6174.2.patch, HIVE-6174.patch Currently it displays nothing. {code} 0: jdbc:hive2://> set env:TERM; 0: jdbc:hive2://> {code} In contrast, Hive CLI displays the value of the variable. {code} hive> set env:TERM; env:TERM=xterm {code} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Review Request 16910: HIVE-6173: Beeline doesn't accept --hiveconf option as Hive CLI does
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16910/ --- Review request for hive. Bugs: HIVE-6173 https://issues.apache.org/jira/browse/HIVE-6173 Repository: hive-git Description --- Introduced the --hiveconf option in Beeline. Diffs - beeline/src/java/org/apache/hive/beeline/BeeLine.java c5e36a5 beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java 04802bc beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java 553722d beeline/src/main/resources/BeeLine.properties 408286d itests/hive-unit/src/test/java/org/apache/hive/beeline/TestBeeLineWithArgs.java 539ebdb Diff: https://reviews.apache.org/r/16910/diff/ Testing --- Unit test added. Thanks, Xuefu Zhang
Hive-trunk-hadoop2 - Build # 679 - Still Failing
Changes for Build #640 Changes for Build #641 [navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis reviewed by Thejas M Nair) [navis] HIVE-4257 : java.sql.SQLNonTransientConnectionException on JDBCStatsAggregator (Teddy Choi via Navis, reviewed by Ashutosh) Changes for Build #642 Changes for Build #643 [ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package from Microsoft to Hive (Hideaki Kumura via Eric Hanson) Changes for Build #644 [cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming conventions (Sergey Shelukhin via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II (Navis via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis via cws) [jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure vectorization produces same results as non-vectorized execution (Sergey Shelukhin via Jitendra Pandey) Changes for Build #645 Changes for Build #646 [ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson) Changes for Build #647 [thejas] HIVE-5795 : Hive should be able to skip header and footer rows when reading data file for a table (Shuaishuai Nie via Thejas Nair) Changes for Build #648 [thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by Brock Noland) Changes for Build #649 Changes for Build #650 Changes for Build #651 [brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim Kulkarni via Brock) Changes for Build #652 Changes for Build #653 [gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by Thejas M Nair) Changes for Build #654 [cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam via cws) Changes for Build #655 [brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler (Swarnim Kulkarni via Brock Noland) [brock] HIVE-5946 - DDL authorization task factory should be better tested (Brock reviewed by 
Thejas) Changes for Build #656 Changes for Build #657 [gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther Hagleitner) Changes for Build #658 Changes for Build #659 [ehans] HIVE-6051: Create DecimalColumnVector and a representative VectorExpression for decimal (Eric Hanson) Changes for Build #660 [thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url should be about to load serde schema from file system beside HDFS (Shuaishuai Nie via Thejas Nair) [thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via Thejas Nair) Changes for Build #661 Changes for Build #662 [gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan) Changes for Build #663 [hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas Nair) Changes for Build #664 [xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it when file path have spaces Changes for Build #665 Changes for Build #666 Changes for Build #667 [brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock reviewed by Xuefu and Sushanth) Changes for Build #668 [hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth Sowmyan via Ashutosh Chauhan) [hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin via Ashutosh Chauhan) Changes for Build #669 Changes for Build #670 [ehans] HIVE-6067: Implement vectorized decimal comparison filters (Eric Hanson) Changes for Build #671 [hashutosh] HIVE-6185 : DDLTask is inconsistent in creating a table and adding a partition when dealing with location (Xuefu Zhang via Ashutosh Chauhan) [hashutosh] HIVE-5032 : Enable hive creating external table at the root directory of DFS (Shuaishuai Nie via Ashutosh Chauhan) Changes for Build #672 [navis] HIVE-6177 : Fix keyword KW_REANME which 
was intended to be KW_RENAME (Navis reviewed by Brock Noland) [jitendra] HIVE-6156. Implement vectorized reader for Date datatype for ORC format. (jitendra) Changes for Build #673 [hashutosh] HIVE-4216 : TestHBaseMinimrCliDriver throws weird error with HBase 0.94.5 and Hadoop 23 and test is stuck infinitely (Jason Dere via Brock Noland) Changes for Build #674 [hashutosh] HIVE-5515 : Writing to an HBase table throws IllegalArgumentException, failing job submission (Viraj Bhat via Ashutosh Chauhan Sushanth Sowmyan) Changes for Build #675 Changes for Build #676 [thejas] HIVE-5941 : SQL std auth - support 'show roles' (Navis via Thejas Nair) Changes for Build #677 [navis] HIVE-6161 : Fix TCLIService duplicate thrift definition for TColumn (Jay Bennett via Navis) Changes for Build #678 Changes for Build #679 [xuefu] HIVE-6174: Beeline 'set varible' doesn't show the value of the
Re: Review Request 16910: HIVE-6173: Beeline doesn't accept --hiveconf option as Hive CLI does
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16910/ --- (Updated Jan. 15, 2014, 5:43 p.m.) Review request for hive. Bugs: HIVE-6173 https://issues.apache.org/jira/browse/HIVE-6173 Repository: hive-git Description --- Introduced the --hiveconf option in Beeline. Diffs - beeline/src/java/org/apache/hive/beeline/BeeLine.java c5e36a5 beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java 04802bc beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java 553722d beeline/src/main/resources/BeeLine.properties 408286d itests/hive-unit/src/test/java/org/apache/hive/beeline/TestBeeLineWithArgs.java 539ebdb Diff: https://reviews.apache.org/r/16910/diff/ Testing --- Unit test added. Thanks, Xuefu Zhang
Hive-trunk-h0.21 - Build # 2579 - Still Failing
Changes for Build #2579 [xuefu] HIVE-6174: Beeline 'set varible' doesn't show the value of the variable as Hive CLI [hashutosh] HIVE-6196 : Incorrect package name for
Re: Review Request 16900: Add select database() command to show the current database
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16900/#review31885 --- Very useful functionality! Thanks for the patch. Looks fine, a couple of minor comments below. ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java https://reviews.apache.org/r/16900/#comment60653 Should this be defined via HiveConf? It would also be better to add this to the restrict list so a user can't change it using the 'set' command. ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java https://reviews.apache.org/r/16900/#comment60654 Perhaps this can be cached in SessionState for subsequent queries? - Prasad Mujumdar On Jan. 15, 2014, 6:22 a.m., Navis Ryu wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16900/ --- (Updated Jan. 15, 2014, 6:22 a.m.) Review request for hive. Bugs: HIVE-4144 https://issues.apache.org/jira/browse/HIVE-4144 Repository: hive-git Description --- A recent hive-user mailing list conversation asked about having a command to show the current database. http://mail-archives.apache.org/mod_mbox/hive-user/201303.mbox/%3CCAMGr+0i+CRY69m3id=DxthmUCWLf0NxpKMCtROb=uauh2va...@mail.gmail.com%3E MySQL seems to have a command to do so: {code} select database(); {code} http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_database We should look into having something similar in Hive. 
Diffs - ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java 96a78fc ql/src/java/org/apache/hadoop/hive/ql/exec/RowSchema.java 1bfcee6 ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java 5511bca ql/src/java/org/apache/hadoop/hive/ql/io/NullRowsInputFormat.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/io/OneNullRowInputFormat.java 8ce1c15 ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g cd52e47 ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 52d7c75 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/UDFCurrentDB.java PRE-CREATION ql/src/test/queries/clientpositive/select_dummy_source.q PRE-CREATION ql/src/test/queries/clientpositive/udf_current_database.q PRE-CREATION ql/src/test/results/clientnegative/ptf_negative_DistributeByOrderBy.q.out d73a0fa ql/src/test/results/clientnegative/ptf_negative_PartitionBySortBy.q.out 48139f0 ql/src/test/results/clientnegative/select_udtf_alias.q.out 614a18e ql/src/test/results/clientpositive/select_dummy_source.q.out PRE-CREATION ql/src/test/results/clientpositive/show_functions.q.out 57c9036 ql/src/test/results/clientpositive/udf_current_database.q.out PRE-CREATION Diff: https://reviews.apache.org/r/16900/diff/ Testing --- Thanks, Navis Ryu
[jira] [Created] (HIVE-6207) Integrate HCatalog with locking
Alan Gates created HIVE-6207: Summary: Integrate HCatalog with locking Key: HIVE-6207 URL: https://issues.apache.org/jira/browse/HIVE-6207 Project: Hive Issue Type: Sub-task Components: HCatalog Affects Versions: 0.13.0 Reporter: Alan Gates Assignee: Alan Gates Fix For: 0.13.0 HCatalog currently ignores any locks created by Hive users. It should respect the locks Hive creates as well as create locks itself when locking is configured. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6173) Beeline doesn't accept --hiveconf option as Hive CLI does
[ https://issues.apache.org/jira/browse/HIVE-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872413#comment-13872413 ] Prasad Mujumdar commented on HIVE-6173: --- [~xuefuz] Thanks for putting the patch out. I left some comments on the RB. Beeline doesn't accept --hiveconf option as Hive CLI does - Key: HIVE-6173 URL: https://issues.apache.org/jira/browse/HIVE-6173 Project: Hive Issue Type: Improvement Components: CLI Affects Versions: 0.10.0, 0.11.0, 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-6173.patch {code} beeline -u jdbc:hive2:// --hiveconf a=b Usage: java org.apache.hive.cli.beeline.BeeLine {code} Since Beeline is replacing Hive CLI, it should support this command line option as well. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
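The behavior being requested, repeated --hiveconf key=value pairs on the command line, can be sketched with a toy option parser. This is an illustration only, not Beeline's actual argument handling (which lives in BeeLine.java/BeeLineOpts.java):

```python
import argparse

# Toy parser accepting repeated --hiveconf PROP=VALUE pairs, the behavior
# the patch introduces for Beeline (sketch, not the real implementation).
parser = argparse.ArgumentParser(prog="beeline")
parser.add_argument("-u", dest="url")
parser.add_argument("--hiveconf", action="append", default=[],
                    metavar="PROP=VALUE")

args = parser.parse_args(["-u", "jdbc:hive2://", "--hiveconf", "a=b"])
# Split each pair on the first '=' so values may themselves contain '='.
hiveconf = dict(kv.split("=", 1) for kv in args.hiveconf)
print(hiveconf)
```

With this shape, the command line quoted in the issue would be accepted instead of printing the usage message.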
Re: Review Request 16910: HIVE-6173: Beeline doesn't accept --hiveconf option as Hive CLI does
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16910/#review31892 --- The patch itself looks fine. Though I think it's not covering the case when you already have some hiveconf or hivevars in the URL along with the command line options. Looks like we can support only one way of adding these parameters at a time. If there's a mix and match, it would be better to catch that and raise an error? beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java https://reviews.apache.org/r/16910/#comment60657 Nit: whitespace - Prasad Mujumdar On Jan. 15, 2014, 5:43 p.m., Xuefu Zhang wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16910/ --- (Updated Jan. 15, 2014, 5:43 p.m.) Review request for hive. Bugs: HIVE-6173 https://issues.apache.org/jira/browse/HIVE-6173 Repository: hive-git Description --- Introduced the --hiveconf option in Beeline. Diffs - beeline/src/java/org/apache/hive/beeline/BeeLine.java c5e36a5 beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java 04802bc beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java 553722d beeline/src/main/resources/BeeLine.properties 408286d itests/hive-unit/src/test/java/org/apache/hive/beeline/TestBeeLineWithArgs.java 539ebdb Diff: https://reviews.apache.org/r/16910/diff/ Testing --- Unit test added. Thanks, Xuefu Zhang
[jira] [Commented] (HIVE-6200) Hive custom SerDe cannot load DLL added by ADD FILE command
[ https://issues.apache.org/jira/browse/HIVE-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872420#comment-13872420 ] Thejas M Nair commented on HIVE-6200: - +1 Hive custom SerDe cannot load DLL added by ADD FILE command - Key: HIVE-6200 URL: https://issues.apache.org/jira/browse/HIVE-6200 Project: Hive Issue Type: Bug Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-6200.1.patch When a custom SerDe needs to load a DLL file added using the ADD FILE command in Hive, loading fails with an exception like java.lang.UnsatisfiedLinkError: C:\tmp\admin2_6996@headnode0_201401100431_resources\hello.dll: Access is denied. The reason is that when FileSystem creates the local copy of the file, the local file's permissions default to 666, and a DLL file needs execute permission to be loaded successfully. A similar scenario happens when Hadoop localizes files in the distributed cache; the solution in Hadoop is to add execute permission to the file after localization. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
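The described fix can be sketched as follows. This is a hypothetical Python illustration of adding the execute bit right after a file is localized (Hive's actual change is in Java, and the function name here is made up):

```python
import os
import shutil
import stat
import tempfile

def localize_with_exec(src, dst_dir):
    """Copy a resource the way ADD FILE localizes it, then add the
    execute bit -- mirroring the Hadoop-style fix described above.
    Illustrative sketch; not Hive's actual code path."""
    dst = os.path.join(dst_dir, os.path.basename(src))
    shutil.copyfile(src, dst)  # the plain copy arrives without execute permission
    mode = os.stat(dst).st_mode
    os.chmod(dst, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return dst

# demo: a freshly copied "DLL" gains the execute bit
src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
src = os.path.join(src_dir, "hello.dll")
open(src, "wb").close()
print(os.access(localize_with_exec(src, dst_dir), os.X_OK))  # True on POSIX
```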
[jira] [Commented] (HIVE-6189) Support top level union all statements
[ https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872426#comment-13872426 ] Gunther Hagleitner commented on HIVE-6189: -- [~leftylev] - the documentation in the link you sent looks good. We could specify that unions can be used in views, insert, and CTAS statements, but I'm thinking that's almost self-explanatory. As for Hive 0.12 and below - the restriction was that unions could only be used within a subquery. I.e.: select_statement union all select_statement union all ... had to be written as select * from (select_statement union all select_statement union all ...) unionresult. Ditto for CTAS, insert, create/alter view as. Does that make sense? Support top level union all statements -- Key: HIVE-6189 URL: https://issues.apache.org/jira/browse/HIVE-6189 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch, HIVE-6189.3.patch I've always wondered why union all has to be in subqueries in Hive. After looking at it, the problems are: - Hive Parser: - Union happens at the wrong place: (insert ... select ... union all select ...) is parsed as (insert select) union select. - There are many rewrite rules in the parser to force any query into the from-insert-select form. No doubt for historical reasons. - Plan generation/semantic analysis assumes a top-level TOK_QUERY, not a top-level TOK_UNION. The rewrite rules don't work when we move the UNION ALL recursion into the select statements. However, it's not hard to do that in code. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Work started] (HIVE-6183) Implement vectorized type cast from/to decimal(p, s)
[ https://issues.apache.org/jira/browse/HIVE-6183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-6183 started by Eric Hanson. Implement vectorized type cast from/to decimal(p, s) Key: HIVE-6183 URL: https://issues.apache.org/jira/browse/HIVE-6183 Project: Hive Issue Type: Sub-task Affects Versions: 0.13.0 Reporter: Eric Hanson Assignee: Eric Hanson Add support for all the supported type casts to/from decimal(p,s) in vectorized mode. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-5901) Query cancel should stop running MR tasks
[ https://issues.apache.org/jira/browse/HIVE-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872473#comment-13872473 ] Sergey Shelukhin commented on HIVE-5901: sorry, missed the response; lgtm Query cancel should stop running MR tasks - Key: HIVE-5901 URL: https://issues.apache.org/jira/browse/HIVE-5901 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-5901.1.patch.txt, HIVE-5901.2.patch.txt, HIVE-5901.3.patch.txt, HIVE-5901.4.patch.txt Currently, canceling a query does not stop the running MR job immediately. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6124) Support basic Decimal arithmetic in vector mode (+, -, *)
[ https://issues.apache.org/jira/browse/HIVE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872491#comment-13872491 ] Jitendra Nath Pandey commented on HIVE-6124: Posted a comment on review board. +1 otherwise. Support basic Decimal arithmetic in vector mode (+, -, *) - Key: HIVE-6124 URL: https://issues.apache.org/jira/browse/HIVE-6124 Project: Hive Issue Type: Sub-task Affects Versions: 0.13.0 Reporter: Eric Hanson Assignee: Eric Hanson Attachments: HIVE-6124.01.patch, HIVE-6124.02.patch, HIVE-6124.03.patch Create support for basic decimal arithmetic (+, -, * but not /, %) based on templates for column-scalar, scalar-column, and column-column operations. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-5945) ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task.
[ https://issues.apache.org/jira/browse/HIVE-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872582#comment-13872582 ] Yin Huai commented on HIVE-5945: +1 ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask also sums those tables which are not used in the child of this conditional task. - Key: HIVE-5945 URL: https://issues.apache.org/jira/browse/HIVE-5945 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0 Reporter: Yin Huai Assignee: Navis Priority: Critical Attachments: HIVE-5945.1.patch.txt, HIVE-5945.2.patch.txt, HIVE-5945.3.patch.txt, HIVE-5945.4.patch.txt, HIVE-5945.5.patch.txt, HIVE-5945.6.patch.txt, HIVE-5945.7.patch.txt, HIVE-5945.8.patch.txt Here is an example {code} select i_item_id, s_state, avg(ss_quantity) agg1, avg(ss_list_price) agg2, avg(ss_coupon_amt) agg3, avg(ss_sales_price) agg4 FROM store_sales JOIN date_dim on (store_sales.ss_sold_date_sk = date_dim.d_date_sk) JOIN item on (store_sales.ss_item_sk = item.i_item_sk) JOIN customer_demographics on (store_sales.ss_cdemo_sk = customer_demographics.cd_demo_sk) JOIN store on (store_sales.ss_store_sk = store.s_store_sk) where cd_gender = 'F' and cd_marital_status = 'U' and cd_education_status = 'Primary' and d_year = 2002 and s_state in ('GA','PA', 'LA', 'SC', 'MI', 'AL') group by i_item_id, s_state order by i_item_id, s_state limit 100; {code} I turned off noconditionaltask, so I expected that there would be 4 Map-only jobs for this query. However, I got 1 Map-only job (joining store_sales and date_dim) and 3 MR jobs (for reduce joins). So, I checked the conditional task determining the plan of the join involving item. In ql.plan.ConditionalResolverCommonJoin.resolveMapJoinTask, aliasToFileSizeMap contains all input tables used in this query and the intermediate table generated by joining store_sales and date_dim.
So, when we sum the sizes of all small tables, the size of store_sales (which is around 45 GB in my test) will also be counted. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
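The intended computation can be sketched as follows. This is a hypothetical Python illustration of what resolveMapJoinTask should sum (alias names are made up for the example; Hive's actual code is Java):

```python
def small_tables_total(alias_to_size, participants, big_table_alias):
    """Sum only the small-table sizes for one candidate map join.

    'participants' are the aliases actually joined in this task;
    everything else in alias_to_size (e.g. an unrelated input like
    store_sales above) must be skipped, and the chosen big table is
    excluded because it is streamed rather than hashed.
    Illustrative sketch only."""
    return sum(size for alias, size in alias_to_size.items()
               if alias in participants and alias != big_table_alias)

# aliasToFileSizeMap holds every input plus the intermediate result;
# only the join's own small aliases should count toward the threshold.
sizes = {"store_sales": 45_000_000_000, "item": 1_000_000,
         "intermediate": 2_000_000_000}
print(small_tables_total(sizes, {"intermediate", "item"}, "intermediate"))
```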
[jira] [Created] (HIVE-6208) user-defined aggregate functions cannot be used as windowing function
Jason Dere created HIVE-6208: Summary: user-defined aggregate functions cannot be used as windowing function Key: HIVE-6208 URL: https://issues.apache.org/jira/browse/HIVE-6208 Project: Hive Issue Type: Bug Components: UDF Reporter: Jason Dere Assignee: Jason Dere Function registry does a pass to register all GenericUDAFs as window functions. However any aggregate functions added after this (such as a user-added temporary function) don't work as window functions: hive create temporary function mysum as 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum'; OK Time taken: 0.0050 seconds hive explain select mysum(key) over () from src; FAILED: NullPointerException null java.lang.NullPointerException at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:354) at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:194) at org.apache.hadoop.hive.ql.parse.WindowingComponentizer.next(WindowingComponentizer.java:86) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:10721) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7904) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7862) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8678) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8904) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:65) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:440) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:996) at 
org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1039) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:932) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:922) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
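The NullPointerException above comes from a window-function lookup that returns null for any aggregate registered after the initial registration pass. A hypothetical Python sketch of the two-part fix — register UDAFs as window functions whenever they are added, and fail with a clear message for unknown names — might look like this (this is not Hive's actual FunctionRegistry; the class and method names are invented for illustration):

```python
class WindowFunctionRegistry:
    """Toy registry: registration also covers functions added later
    (e.g. temporary functions), and lookup reports unknown names
    instead of handing back None to blow up downstream as an NPE."""

    def __init__(self):
        self._functions = {}

    def register_udaf(self, name, impl):
        # Called for every UDAF, including ones added after startup.
        self._functions[name.lower()] = impl

    def lookup_window_function(self, name):
        fn = self._functions.get(name.lower())
        if fn is None:
            raise LookupError("Not a registered window function: " + name)
        return fn

reg = WindowFunctionRegistry()
reg.register_udaf("mysum", sum)           # temporary function added later
print(reg.lookup_window_function("MySum") is sum)  # True
```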
[jira] [Commented] (HIVE-6208) user-defined aggregate functions cannot be used as windowing function
[ https://issues.apache.org/jira/browse/HIVE-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872594#comment-13872594 ] Jason Dere commented on HIVE-6208: -- Also should not be getting NPE here if the window function cannot be found. user-defined aggregate functions cannot be used as windowing function - Key: HIVE-6208 URL: https://issues.apache.org/jira/browse/HIVE-6208 Project: Hive Issue Type: Bug Components: UDF Reporter: Jason Dere Assignee: Jason Dere Function registry does a pass to register all GenericUDAFs as window functions. However any aggregate functions added after this (such as a user-added temporary function) don't work as window functions: hive create temporary function mysum as 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum'; OK Time taken: 0.0050 seconds hive explain select mysum(key) over () from src; FAILED: NullPointerException null java.lang.NullPointerException at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:354) at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:194) at org.apache.hadoop.hive.ql.parse.WindowingComponentizer.next(WindowingComponentizer.java:86) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:10721) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7904) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7862) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8678) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8904) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:65) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at 
org.apache.hadoop.hive.ql.Driver.compile(Driver.java:440) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:996) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1039) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:932) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:922) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Re: New Hive Website
Sounds good, since we have three PMC members on board, I will go ahead and publish it tomorrow. On Wed, Jan 15, 2014 at 3:08 AM, Lefty Leverenz leftylever...@gmail.com wrote: It uses Markdown, doesn't require local builds, has a staging view, and has a basic web editor. Sweet. -- Lefty On Tue, Jan 14, 2014 at 5:26 PM, Brock Noland br...@cloudera.com wrote: Thanks guys! It uses Markdown, doesn't require local builds, has a staging view, and has a basic web editor. On Jan 14, 2014 7:20 PM, Lefty Leverenz leftylever...@gmail.com wrote: Looks good. The menu improves access considerably. What makes this website easier to edit? +1 -- Lefty On Tue, Jan 14, 2014 at 1:30 PM, Carl Steinbach cwsteinb...@gmail.com wrote: +1 to switching over now. On Jan 14, 2014 12:56 PM, Brock Noland br...@cloudera.com wrote: The *staging* version of the new Hive website is available: http://hive.staging.apache.org/ Notes: 1) It's a first pass, we can add more later 2) The javadocs links won't work until we cut over as they are committed directly to the production SVN 3) The How to edit the website guide will need to be updated after we cut over. The guide will look similar to this: http://mrunit.apache.org/development/edit_website.html Since the new website is so much easier to edit, I think we should cut over now. Brock -- Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
[jira] [Updated] (HIVE-6124) Support basic Decimal arithmetic in vector mode (+, -, *)
[ https://issues.apache.org/jira/browse/HIVE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Hanson updated HIVE-6124: -- Attachment: HIVE-6124.04.patch Fixed small issue pointed out by Jitendra on Review Board. Support basic Decimal arithmetic in vector mode (+, -, *) - Key: HIVE-6124 URL: https://issues.apache.org/jira/browse/HIVE-6124 Project: Hive Issue Type: Sub-task Affects Versions: 0.13.0 Reporter: Eric Hanson Assignee: Eric Hanson Attachments: HIVE-6124.01.patch, HIVE-6124.02.patch, HIVE-6124.03.patch, HIVE-6124.04.patch Create support for basic decimal arithmetic (+, -, * but not /, %) based on templates for column-scalar, scalar-column, and column-column operations. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6054) HiveServer2 does not log the output of LogUtils.initHiveLog4j();
[ https://issues.apache.org/jira/browse/HIVE-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-6054: --- Component/s: HiveServer2 HiveServer2 does not log the output of LogUtils.initHiveLog4j(); Key: HIVE-6054 URL: https://issues.apache.org/jira/browse/HIVE-6054 Project: Hive Issue Type: Bug Components: HiveServer2 Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-6054.1.patch Inside main(), we just call LogUtils.initHiveLog4j() and do not log its output. It needs to be logged so we can see whether the user has configured log4j.properties correctly. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6054) HiveServer2 does not log the output of LogUtils.initHiveLog4j();
[ https://issues.apache.org/jira/browse/HIVE-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-6054: --- Fix Version/s: 0.13.0 HiveServer2 does not log the output of LogUtils.initHiveLog4j(); Key: HIVE-6054 URL: https://issues.apache.org/jira/browse/HIVE-6054 Project: Hive Issue Type: Bug Components: HiveServer2 Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-6054.1.patch Inside main(), we just call LogUtils.initHiveLog4j() and do not log its output. It needs to be logged so we can see whether the user has configured log4j.properties correctly. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6054) HiveServer2 does not log the output of LogUtils.initHiveLog4j();
[ https://issues.apache.org/jira/browse/HIVE-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872617#comment-13872617 ] Vaibhav Gumashta commented on HIVE-6054: +1 (non-binding). cc [~thejas] HiveServer2 does not log the output of LogUtils.initHiveLog4j(); Key: HIVE-6054 URL: https://issues.apache.org/jira/browse/HIVE-6054 Project: Hive Issue Type: Bug Components: HiveServer2 Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-6054.1.patch Inside main(), we just call LogUtils.initHiveLog4j() and do not log its output. It needs to be logged so we can see whether the user has configured log4j.properties correctly. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6054) HiveServer2 does not log the output of LogUtils.initHiveLog4j();
[ https://issues.apache.org/jira/browse/HIVE-6054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872627#comment-13872627 ] Thejas M Nair commented on HIVE-6054: - +1 HiveServer2 does not log the output of LogUtils.initHiveLog4j(); Key: HIVE-6054 URL: https://issues.apache.org/jira/browse/HIVE-6054 Project: Hive Issue Type: Bug Components: HiveServer2 Reporter: Hari Sankar Sivarama Subramaniyan Assignee: Hari Sankar Sivarama Subramaniyan Fix For: 0.13.0 Attachments: HIVE-6054.1.patch Inside main(), we just call LogUtils.initHiveLog4j() and do not log its output. It needs to be logged so we can see whether the user has configured log4j.properties correctly. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6200) Hive custom SerDe cannot load DLL added by ADD FILE command
[ https://issues.apache.org/jira/browse/HIVE-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872641#comment-13872641 ] Shuaishuai Nie commented on HIVE-6200: -- Validated the failed test in the Hive QA result; it is not related to the change made in this JIRA. Hive custom SerDe cannot load DLL added by ADD FILE command - Key: HIVE-6200 URL: https://issues.apache.org/jira/browse/HIVE-6200 Project: Hive Issue Type: Bug Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-6200.1.patch When a custom SerDe needs to load a DLL file added using the ADD FILE command in Hive, loading fails with an exception like java.lang.UnsatisfiedLinkError: C:\tmp\admin2_6996@headnode0_201401100431_resources\hello.dll: Access is denied. The reason is that when FileSystem creates the local copy of the file, the local file's permissions default to 666, and a DLL file needs execute permission to be loaded successfully. A similar scenario happens when Hadoop localizes files in the distributed cache; the solution in Hadoop is to add execute permission to the file after localization. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Re: New Hive Website
Should we wait for this to go live before tinkering with the menu? DOCUMENTATION should include the wiki and COMMUNITY could just have the wiki's Resources for Contributors https://cwiki.apache.org/confluence/display/Hive/Home#Home-ResourcesforContributors section. (But it can wait.) -- Lefty On Wed, Jan 15, 2014 at 1:21 PM, Brock Noland br...@cloudera.com wrote: Sounds good, since we have three PMC members on board, I will go ahead and publish it tomorrow.
[jira] [Commented] (HIVE-6200) Hive custom SerDe cannot load DLL added by ADD FILE command
[ https://issues.apache.org/jira/browse/HIVE-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872647#comment-13872647 ] shanyu zhao commented on HIVE-6200: --- +1 Hive custom SerDe cannot load DLL added by ADD FILE command - Key: HIVE-6200 URL: https://issues.apache.org/jira/browse/HIVE-6200 Project: Hive Issue Type: Bug Reporter: Shuaishuai Nie Assignee: Shuaishuai Nie Attachments: HIVE-6200.1.patch When a custom SerDe needs to load a DLL file added using the ADD FILE command in Hive, loading fails with an exception like java.lang.UnsatisfiedLinkError: C:\tmp\admin2_6996@headnode0_201401100431_resources\hello.dll: Access is denied. The reason is that when FileSystem creates the local copy of the file, the local file's permissions default to 666, and a DLL file needs execute permission to be loaded successfully. A similar scenario happens when Hadoop localizes files in the distributed cache; the solution in Hadoop is to add execute permission to the file after localization. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6208) user-defined aggregate functions cannot be used as windowing function
[ https://issues.apache.org/jira/browse/HIVE-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-6208: - Status: Patch Available (was: Open) user-defined aggregate functions cannot be used as windowing function - Key: HIVE-6208 URL: https://issues.apache.org/jira/browse/HIVE-6208 Project: Hive Issue Type: Bug Components: UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-6208.1.patch Function registry does a pass to register all GenericUDAFs as window functions. However any aggregate functions added after this (such as a user-added temporary function) don't work as window functions: hive create temporary function mysum as 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum'; OK Time taken: 0.0050 seconds hive explain select mysum(key) over () from src; FAILED: NullPointerException null java.lang.NullPointerException at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:354) at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:194) at org.apache.hadoop.hive.ql.parse.WindowingComponentizer.next(WindowingComponentizer.java:86) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:10721) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7904) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7862) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8678) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8904) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:65) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:440) at 
org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:996) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1039) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:932) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:922) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6208) user-defined aggregate functions cannot be used as windowing function
[ https://issues.apache.org/jira/browse/HIVE-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-6208: - Attachment: HIVE-6208.1.patch user-defined aggregate functions cannot be used as windowing function - Key: HIVE-6208 URL: https://issues.apache.org/jira/browse/HIVE-6208 Project: Hive Issue Type: Bug Components: UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-6208.1.patch Function registry does a pass to register all GenericUDAFs as window functions. However any aggregate functions added after this (such as a user-added temporary function) don't work as window functions: hive create temporary function mysum as 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum'; OK Time taken: 0.0050 seconds hive explain select mysum(key) over () from src; FAILED: NullPointerException null java.lang.NullPointerException at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:354) at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:194) at org.apache.hadoop.hive.ql.parse.WindowingComponentizer.next(WindowingComponentizer.java:86) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:10721) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7904) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7862) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8678) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8904) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:65) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:440) at 
org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:996) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1039) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:932) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:922) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Re: New Hive Website
Done! On Wed, Jan 15, 2014 at 3:48 PM, Lefty Leverenz leftylever...@gmail.com wrote: Should we wait for this to go live before tinkering with the menu? DOCUMENTATION should include the wiki and COMMUNITY could just have the wiki's Resources for Contributors https://cwiki.apache.org/confluence/display/Hive/Home#Home-ResourcesforContributors section. (But it can wait.) -- Lefty -- Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
Review Request 16921: HIVE-6208 user-defined aggregate functions cannot be used as windowing function
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16921/ --- Review request for hive and Harish Butani. Bugs: HIVE-6208 https://issues.apache.org/jira/browse/HIVE-6208 Repository: hive-git Description --- - All UDAFs are also added to window function list when registered to function registry - Avoid NPE when invalid function name used as window function Diffs - ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java 96a78fc ql/src/java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java f011258 ql/src/test/queries/clientnegative/windowing_invalid_udaf.q PRE-CREATION ql/src/test/queries/clientpositive/windowing_udaf2.q PRE-CREATION ql/src/test/results/clientnegative/windowing_invalid_udaf.q.out PRE-CREATION ql/src/test/results/clientpositive/windowing_udaf2.q.out PRE-CREATION Diff: https://reviews.apache.org/r/16921/diff/ Testing --- Added positive/negative test Thanks, Jason Dere
[jira] [Commented] (HIVE-6208) user-defined aggregate functions cannot be used as windowing function
[ https://issues.apache.org/jira/browse/HIVE-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872653#comment-13872653 ] Jason Dere commented on HIVE-6208: -- RB at https://reviews.apache.org/r/16921/ user-defined aggregate functions cannot be used as windowing function - Key: HIVE-6208 URL: https://issues.apache.org/jira/browse/HIVE-6208 Project: Hive Issue Type: Bug Components: UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-6208.1.patch Function registry does a pass to register all GenericUDAFs as window functions. However, any aggregate functions added after this (such as a user-added temporary function) don't work as window functions:
hive> create temporary function mysum as 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFSum';
OK
Time taken: 0.0050 seconds
hive> explain select mysum(key) over () from src;
FAILED: NullPointerException null
java.lang.NullPointerException
at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:354)
at org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:194)
at org.apache.hadoop.hive.ql.parse.WindowingComponentizer.next(WindowingComponentizer.java:86)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:10721)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:7904)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:7862)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:8678)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8904)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310)
at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:65)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:310)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:440)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:996)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1039)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:932)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:922)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
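For context, here is a hypothetical, stripped-down sketch of the fix's idea (all names are invented; the real change lives in FunctionRegistry and PTFTranslator): keep the window-function table in sync at UDAF registration time, and make the lookup null-safe so an unknown name can be reported as an error instead of surfacing later as a NullPointerException.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified, invented model of the approach: registering a UDAF also
// registers it as a window function, so temporary functions added after
// startup work inside OVER() clauses.
class MiniFunctionRegistry {
    private final Map<String, String> udafs = new HashMap<>();
    private final Map<String, String> windowFunctions = new HashMap<>();

    public void registerUDAF(String name, String implClass) {
        String key = name.toLowerCase();
        udafs.put(key, implClass);
        // The essence of the fix: update the window-function table here,
        // instead of populating it only once during static initialization.
        windowFunctions.put(key, implClass);
    }

    // Null-safe lookup: an unknown or null name yields null, which the
    // caller can turn into a proper semantic error message.
    public String getWindowFunction(String name) {
        return name == null ? null : windowFunctions.get(name.toLowerCase());
    }

    public static void main(String[] args) {
        MiniFunctionRegistry r = new MiniFunctionRegistry();
        r.registerUDAF("mysum", "GenericUDAFSum");
        System.out.println(r.getWindowFunction("MYSUM"));   // GenericUDAFSum
        System.out.println(r.getWindowFunction("nosuchfn")); // null
    }
}
```

Lookups are case-insensitive here to mirror how HiveQL treats function names; the actual patch also adds a negative test for an invalid window-function name.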
[jira] [Updated] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0
[ https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] shanyu zhao updated HIVE-6159: -- Attachment: HIVE-6159.5.patch Hive uses deprecated hadoop configuration in Hadoop 2.0 --- Key: HIVE-6159 URL: https://issues.apache.org/jira/browse/HIVE-6159 Project: Hive Issue Type: Bug Components: Configuration Affects Versions: 0.12.0 Reporter: shanyu zhao Assignee: shanyu zhao Fix For: 0.13.0 Attachments: HIVE-6159-v2.patch, HIVE-6159-v3.patch, HIVE-6159-v4.patch, HIVE-6159.4.patch, HIVE-6159.5.patch, HIVE-6159.patch Build hive against hadoop 2.0, then run the hive CLI; you'll see deprecated configuration warnings like this:
13/12/14 01:00:51 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
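The warnings above are purely a renaming exercise on the Hadoop side. As an illustrative sketch (not Hive's actual code, and the class name is invented), the old-to-new mapping quoted in the report can be captured as a lookup table:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative table of the deprecated mapred.* keys quoted in HIVE-6159
// and their Hadoop 2 mapreduce.* replacements.
class ConfigKeyUpgrader {
    private static final Map<String, String> OLD_TO_NEW = new HashMap<>();
    static {
        OLD_TO_NEW.put("mapred.input.dir.recursive",
                "mapreduce.input.fileinputformat.input.dir.recursive");
        OLD_TO_NEW.put("mapred.max.split.size",
                "mapreduce.input.fileinputformat.split.maxsize");
        OLD_TO_NEW.put("mapred.min.split.size",
                "mapreduce.input.fileinputformat.split.minsize");
        OLD_TO_NEW.put("mapred.min.split.size.per.rack",
                "mapreduce.input.fileinputformat.split.minsize.per.rack");
        OLD_TO_NEW.put("mapred.min.split.size.per.node",
                "mapreduce.input.fileinputformat.split.minsize.per.node");
        OLD_TO_NEW.put("mapred.reduce.tasks", "mapreduce.job.reduces");
        OLD_TO_NEW.put("mapred.reduce.tasks.speculative.execution",
                "mapreduce.reduce.speculative");
    }

    // Return the Hadoop 2 name for a key, or the key itself if it is
    // not in the deprecation table.
    static String resolve(String key) {
        return OLD_TO_NEW.getOrDefault(key, key);
    }
}
```

The patch itself simply switches Hive to the new names so Hadoop 2's Configuration.deprecation logger stays quiet; the table is shown only to make the before/after pairs explicit.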
[jira] [Commented] (HIVE-6124) Support basic Decimal arithmetic in vector mode (+, -, *)
[ https://issues.apache.org/jira/browse/HIVE-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872687#comment-13872687 ] Jitendra Nath Pandey commented on HIVE-6124: +1 Support basic Decimal arithmetic in vector mode (+, -, *) - Key: HIVE-6124 URL: https://issues.apache.org/jira/browse/HIVE-6124 Project: Hive Issue Type: Sub-task Affects Versions: 0.13.0 Reporter: Eric Hanson Assignee: Eric Hanson Attachments: HIVE-6124.01.patch, HIVE-6124.02.patch, HIVE-6124.03.patch, HIVE-6124.04.patch Create support for basic decimal arithmetic (+, -, * but not /, %) based on templates for column-scalar, scalar-column, and column-column operations. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
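As a rough, self-contained illustration of the column-scalar pattern such templates expand to (class and method names invented; plain arrays stand in for Hive's vectorized row batches):

```java
import java.math.BigDecimal;

// Invented sketch of a column-scalar decimal add: one tight loop per
// generated operator, with SQL NULLs tracked in a separate flag array,
// mirroring the shape of vectorized expression templates.
class DecimalColAddScalar {
    static BigDecimal[] evaluate(BigDecimal[] column, BigDecimal scalar,
                                 boolean[] isNull) {
        BigDecimal[] out = new BigDecimal[column.length];
        for (int i = 0; i < column.length; i++) {
            // Propagate NULL; only compute where the input is non-null.
            out[i] = isNull[i] ? null : column[i].add(scalar);
        }
        return out;
    }
}
```

The real templates generate one such class per (operator, operand-shape) pair — column-scalar, scalar-column, and column-column — which is why only +, -, and * are covered here and / and % are deferred.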
[jira] [Commented] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0
[ https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872686#comment-13872686 ] shanyu zhao commented on HIVE-6159: --- Removed the unit test "testDeprecatedProperties()" because the addDeprecation() method isn't available for hadoop-1. We would have to add the method to shims, which I don't think is worth the effort. The deprecation feature is unit tested thoroughly in Hadoop, and in my opinion we shouldn't add these tests in the hive project.
[jira] [Commented] (HIVE-6189) Support top level union all statements
[ https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872742#comment-13872742 ] Harish Butani commented on HIVE-6189: - +1 Support top level union all statements -- Key: HIVE-6189 URL: https://issues.apache.org/jira/browse/HIVE-6189 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch, HIVE-6189.3.patch I've always wondered why union all has to be in subqueries in hive. After looking at it, the problems are: - Hive Parser: the union happens at the wrong place; (insert ... select ... union all select ...) is parsed as (insert select) union select. - There are many rewrite rules in the parser to force any query into the from-insert-select form, no doubt for historical reasons. - Plan generation/semantic analysis assumes a top level TOK_QUERY and not a top level TOK_UNION. The rewrite rules don't work when we move the UNION ALL recursion into the select statements. However, it's not hard to do that in code. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Re: Review Request 16847: Add a hive authorization plugin api that does not assume privileges needed
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16847/#review31929 --- ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/DefaultHiveAuthorizerFactory.java https://reviews.apache.org/r/16847/#comment60687 Removed the comment. Will create DefaultHiveAccessController and DefaultHiveAuthValidator classes as I implement the sql standard auth in other jiras. ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveMetastoreClientFactoryImpl.java https://reviews.apache.org/r/16847/#comment60707 I will change it to catch the specific checked exceptions that getMsc() and Hive.get() throw. I don't want to expose MetaException and HiveException as part of the public interface at this point. ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveMetastoreClientFactoryImpl.java https://reviews.apache.org/r/16847/#comment60708 I will change it to catch the specific checked exceptions that getMsc() and Hive.get() throw. I don't want to expose MetaException and HiveException as part of the public interface; that is why I don't throw them here. ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java https://reviews.apache.org/r/16847/#comment60710 That check is being done and an exception is being thrown - a RuntimeException (now an AssertionError). - Thejas Nair On Jan. 14, 2014, 3:51 a.m., Thejas Nair wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16847/ --- (Updated Jan. 14, 2014, 3:51 a.m.) Review request for hive, Ashutosh Chauhan and Brock Noland. Bugs: HIVE-5928 https://issues.apache.org/jira/browse/HIVE-5928 Repository: hive-git Description --- The existing HiveAuthorizationProvider interface implementations can be used to support custom authorization models. But this interface limits the customization for these reasons - 1. It has assumptions about the privileges required for an action. 2.
It does not have functions that you can implement to support custom handling of access control statements. This jira proposes a new interface HiveAuthorizer that does not make assumptions about the privileges required for the actions. The authorize() functions will be the equivalent of authorize(operation type, input objects, output objects). It will also have functions that will be called from the access control statements. The current HiveAuthorizationProvider will continue to be supported for backward compatibility. Diffs - ql/src/java/org/apache/hadoop/hive/ql/Driver.java 72c04d3 ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java b36a4ca ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java dc45ea2 ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java 143c0a6 ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 52d7c75 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/DefaultHiveAuthorizerFactory.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAccessController.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizationValidator.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizer.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerFactory.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerImpl.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveMetastoreClientFactory.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveMetastoreClientFactoryImpl.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrincipal.java PRE-CREATION
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilege.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java ef35f1a ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveOperationType.java PRE-CREATION Diff: https://reviews.apache.org/r/16847/diff/ Testing --- Thanks, Thejas Nair
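A hypothetical sketch of the API shape described above (the real HiveAuthorizer in the patch has more methods and richer types than shown here): an authorize call that receives the operation type plus input/output objects and makes no assumption about which privileges the operation needs, with a toy custom policy plugged in to show where an implementation would hook in.

```java
import java.util.List;

// Invented, simplified paraphrase of the proposed plugin shape; not the
// actual interface from the patch.
class AuthSketch {
    interface Authorizer {
        // authorize(operation type, input objects, output objects):
        // the plugin decides for itself what privileges, if any, apply.
        void checkPrivileges(String operationType, List<String> inputs,
                             List<String> outputs);
    }

    // A toy custom model that denies writes to a reserved database,
    // purely to show that the policy lives entirely in the plugin.
    static class DenyReservedDb implements Authorizer {
        @Override
        public void checkPrivileges(String op, List<String> in, List<String> out) {
            for (String obj : out) {
                if (obj.startsWith("reserved.")) {
                    throw new SecurityException(op + " not allowed on " + obj);
                }
            }
        }
    }
}
```

The contrast with the older HiveAuthorizationProvider is that nothing in the interface encodes "operation X requires privilege Y"; the framework only reports what is being done and on what.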
[jira] [Updated] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0
[ https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6159: Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0
[ https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6159: Status: Open (was: Patch Available)
[jira] [Commented] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0
[ https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872775#comment-13872775 ] Thejas M Nair commented on HIVE-6159: - Sounds good to me. +1
[jira] [Commented] (HIVE-6182) LDAP Authentication errors need to be more informative
[ https://issues.apache.org/jira/browse/HIVE-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872791#comment-13872791 ] Szehon Ho commented on HIVE-6182: - [~xuefuz] can this be committed when you have the cycles? Thanks! LDAP Authentication errors need to be more informative -- Key: HIVE-6182 URL: https://issues.apache.org/jira/browse/HIVE-6182 Project: Hive Issue Type: Improvement Components: Authentication Affects Versions: 0.13.0 Reporter: Szehon Ho Assignee: Szehon Ho Attachments: HIVE-6182.patch There are a host of errors that can happen when logging into an LDAP-enabled HiveServer2 from beeline, but for any error there is only a generic log message: {code} SASL negotiation failure javax.security.sasl.SaslException: PLAIN auth failed: Error validating LDAP user at org.apache.hadoop.security.SaslPlainServer.evaluateResponse(SaslPlainServer.java:108) at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrRespons {code} And on the Beeline side there is an even more unhelpful message: {code} Error: Invalid URL: jdbc:hive2://localhost:1/default (state=08S01,code=0) {code} It would be good to print out the underlying error message, at least in the log if not in beeline, but today these messages are swallowed. This is bad because the underlying message is the most important part, carrying the error codes shown here: [LDAP error codes|https://wiki.servicenow.com/index.php?title=LDAP_Error_Codes] Beeline seems to throw that exception for any error during connection, authentication or otherwise. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
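One minimal way to realize the suggestion, sketched with invented names (this is not the actual Hive patch): wrap the underlying LDAP failure as the new exception's cause, so its message, including any LDAP result code, survives into the server log instead of being swallowed.

```java
// Invented helper illustrating cause propagation: the original
// exception travels along as the cause, so logging frameworks print
// its message and stack trace under "Caused by:".
class AuthError {
    static RuntimeException wrap(Exception underlying) {
        // Include the root message up front and keep the full exception
        // chained for the log.
        return new RuntimeException("Error validating LDAP user: "
                + underlying.getMessage(), underlying);
    }
}
```

The same idea applies on the Beeline side: surfacing the cause's message would replace the generic "Invalid URL" error with the actual authentication failure.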
Re: Review Request 16847: Add a hive authorization plugin api that does not assume privileges needed
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16847/ --- (Updated Jan. 15, 2014, 11:28 p.m.) Review request for hive, Ashutosh Chauhan and Brock Noland. Changes --- Addressing review comments - HIVE-5928.2.patch Bugs: HIVE-5928 https://issues.apache.org/jira/browse/HIVE-5928 Repository: hive-git Description --- The existing HiveAuthorizationProvider interface implementations can be used to support custom authorization models. But this interface limits the customization for these reasons - 1. It has assumptions about the privileges required for an action. 2. It does not have functions that you can implement to support custom handling of access control statements. This jira proposes a new interface HiveAuthorizer that does not make assumptions about the privileges required for the actions. The authorize() functions will be the equivalent of authorize(operation type, input objects, output objects). It will also have functions that will be called from the access control statements. The current HiveAuthorizationProvider will continue to be supported for backward compatibility.
Diffs (updated) - ql/src/java/org/apache/hadoop/hive/ql/Driver.java 72c04d3 ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java b36a4ca ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 617bba8 ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java fccea89 ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 441f329 ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java 143c0a6 ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 52d7c75 ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/DefaultHiveAuthorizerFactory.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAccessController.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizationValidator.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizer.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerFactory.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerImpl.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveMetastoreClientFactory.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveMetastoreClientFactoryImpl.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveOperationType.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrincipal.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilege.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrivilegeObject.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java ef35f1a ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java 4f31f75 
ql/src/test/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveOperationType.java PRE-CREATION Diff: https://reviews.apache.org/r/16847/diff/ Testing --- Thanks, Thejas Nair
[jira] [Updated] (HIVE-5928) Add a hive authorization plugin api that does not assume privileges needed
[ https://issues.apache.org/jira/browse/HIVE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5928: Attachment: HIVE-5928.2.patch Addressing review comments - HIVE-5928.2.patch. Add a hive authorization plugin api that does not assume privileges needed -- Key: HIVE-5928 URL: https://issues.apache.org/jira/browse/HIVE-5928 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5928.1.patch, HIVE-5928.2.patch, hive_auth_class_preview.txt Original Estimate: 120h Time Spent: 2h Remaining Estimate: 12h
Re: [DISCUSS] Proposed Changes to the Apache Hive Project Bylaws
Adding another sentence to clarify that with a -1, the patch can be reverted: If the code has been committed before the -1, the code can be reverted until the vote is over. Approval : Code Change : The code can be committed after the first +1. Committers should wait for reasonable time after patch is available so that other committers have had a chance to look at it. If a -1 is received and an agreement is not reached among the committers on how to resolve the issue, lazy majority with a voting period of 7 days will be used. If the code has been committed before the -1, the code can be reverted until the vote is over. Carl, people seem to agree (and other people seem to be OK, considering the silence). Can you please include this in the by-law changes being proposed and put it to a vote? Thanks, Thejas On Tue, Jan 14, 2014 at 11:05 PM, Lefty Leverenz leftylever...@gmail.com wrote: This wording seems fine. You could add a here: Committers should wait for [a] reasonable time The guidance is good. +1 -- Lefty On Tue, Jan 14, 2014 at 7:53 PM, Thejas Nair the...@hortonworks.com wrote: I guess the silence from others on changing the '24 hours from +1' to a guidance of '24 hours from patch available' implies they are OK with this change. Proposed general guidance for commits for committers: Wait for 24 hours from the time a patch is made 'patch available' before doing a +1 and committing, so that other committers have had sufficient time to look at the patch. If the patch is a trivial and safe change, such as a small bug fix, an improved error message, or an incremental documentation change, it is OK not to wait for 24 hours. For significant changes the wait should be a couple of days. If the patch is updated and the new patch is significantly different from the old one, the wait should start from the time the new patch is uploaded. Use your discretion to decide if it would be useful to wait longer than 24 hours on a weekend or holiday for that patch.
Proposed change in by-law (if someone can word it better, that would be great!): Action : Code Change : A change made to a codebase of the project and committed by a committer. This includes source code, documentation, website content, etc. Approval : Code Change : The code can be committed after the first +1. Committers should wait for reasonable time after patch is available so that other committers have had a chance to look at it. If a -1 is received and an agreement is not reached among the committers on how to resolve the issue, lazy majority with a voting period of 7 days will be used. Minimum Length : Code Change : 7 days on a -1. On Tue, Jan 14, 2014 at 6:25 PM, Vikram Dixit vik...@hortonworks.com wrote: I think there is value in having some changes committed in less than 24 hours, particularly minor changes. Also, reverting patches makes sense; although it could be cumbersome, it is not much worse than what would happen now in case of a bad commit. Anyway, we wait for the unit tests to complete at the very least. I am +1 on Thejas' proposal. On Tue, Jan 7, 2014 at 7:01 PM, Thejas Nair the...@hortonworks.com wrote: After thinking some more about it, I am not sure we need a hard and fast rule of 24 hours before commit. I think we should let committers make a call on whether a change is trivial, safe, and non-controversial, and commit it in less than 24 hours in such cases. In the case of larger changes, waiting a couple of days for feedback makes sense. If a committer feels that a patch shouldn't have gone in (because of technical issues or because it went in too soon), they should be able to -1 it and revert the patch until further review is done. In other words, I think this can be guidance instead of a law in the by-laws. What do others in the hive community think about this? This has been working well for other apache hadoop related projects.
On Fri, Dec 27, 2013 at 2:28 PM, Sergey Shelukhin ser...@hortonworks.com wrote: I actually have a patch out on a jira that says it will be committed in 24 hours from long ago ;) Is the 24h rule needed at all? In other projects, I've seen patches simply reverted by the author (or someone else). It's a rare occurrence, and it should be possible to revert a patch if someone -1s it after commit, esp. within the same 24 hours when not many other changes are in. On Fri, Dec 27, 2013 at 1:03 PM, Thejas Nair the...@hortonworks.com wrote: I agree with Ashutosh that the 24 hour waiting period after +1 is cumbersome; I have also forgotten to commit patches after a +1, resulting in patches going stale. But I think a 24 hour wait between creation of the jira and patch commit is not very useful, as the thing to be examined is the patch and not the jira summary/description. I think having a waiting period of 24 hours between a jira being made
[jira] [Updated] (HIVE-6182) LDAP Authentication errors need to be more informative
[ https://issues.apache.org/jira/browse/HIVE-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-6182: -- Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks to Szehon for the contribution.

LDAP Authentication errors need to be more informative -- Key: HIVE-6182 URL: https://issues.apache.org/jira/browse/HIVE-6182 Project: Hive Issue Type: Improvement Components: Authentication Affects Versions: 0.13.0 Reporter: Szehon Ho Assignee: Szehon Ho Fix For: 0.13.0 Attachments: HIVE-6182.patch

There are a host of errors that can happen when logging into an LDAP-enabled HiveServer2 from Beeline, but every one of them produces the same generic log message:

{code} SASL negotiation failure javax.security.sasl.SaslException: PLAIN auth failed: Error validating LDAP user at org.apache.hadoop.security.SaslPlainServer.evaluateResponse(SaslPlainServer.java:108) at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrRespons {code}

On the Beeline side there is an even less helpful message:

{code} Error: Invalid URL: jdbc:hive2://localhost:1/default (state=08S01,code=0) {code}

It would be good to print the underlying error message at least in the log, if not in Beeline; today it is swallowed. This is bad because the underlying message is the most important part, as it carries the error codes listed here: [LDAP error code|https://wiki.servicenow.com/index.php?title=LDAP_Error_Codes] Beeline seems to throw that exception for any error during connection, authentication or otherwise. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
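A minimal sketch of the fix direction described above: chain the underlying failure as the cause and copy its message, rather than emitting the generic "Error validating LDAP user". The `bind()` helper here is hypothetical, standing in for the real LDAP call; only `SaslException` is the actual JDK type involved.

```java
import javax.security.sasl.SaslException;

public class LdapErrorDemo {
    // hypothetical stand-in for the real LDAP bind, simulating a typical
    // JNDI failure that carries the LDAP error code in its message
    static void bind(String user) throws Exception {
        throw new Exception("[LDAP: error code 49 - Invalid Credentials]");
    }

    static void authenticate(String user) throws SaslException {
        try {
            bind(user);
        } catch (Exception e) {
            // keep e as the cause and include its message, so the server
            // log shows the real LDAP error code instead of swallowing it
            throw new SaslException("PLAIN auth failed: " + e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        try {
            authenticate("alice");
        } catch (SaslException e) {
            // prints: PLAIN auth failed: [LDAP: error code 49 - Invalid Credentials]
            System.out.println(e.getMessage());
        }
    }
}
```

With the cause chained, the full LDAP error code table linked in the report becomes directly actionable from the HiveServer2 log.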
[jira] [Updated] (HIVE-6174) Beeline set variable doesn't show the value of the variable as Hive CLI does
[ https://issues.apache.org/jira/browse/HIVE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-6174: -- Fix Version/s: 0.13.0

Beeline set variable doesn't show the value of the variable as Hive CLI does Key: HIVE-6174 URL: https://issues.apache.org/jira/browse/HIVE-6174 Project: Hive Issue Type: Bug Components: CLI Affects Versions: 0.10.0, 0.11.0, 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Fix For: 0.13.0 Attachments: HIVE-5174.3.patch, HIVE-6174.2.patch, HIVE-6174.patch

Currently Beeline displays nothing:

{code} 0: jdbc:hive2:// set env:TERM; 0: jdbc:hive2:// {code}

In contrast, the Hive CLI displays the value of the variable:

{code} hive set env:TERM; env:TERM=xterm {code}

-- This message was sent by Atlassian JIRA (v6.1.5#6160)
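The expected behavior can be sketched in a few lines (this is not Beeline's actual code; the `show` helper and the variable map are illustrative): a `set name` command should echo `name=value` like the Hive CLI, or say the variable is undefined, rather than printing nothing.

```java
import java.util.HashMap;
import java.util.Map;

public class SetVariableDemo {
    // echoes a single variable the way the Hive CLI does
    static String show(Map<String, String> vars, String name) {
        String val = vars.get(name);
        return val == null ? name + " is undefined" : name + "=" + val;
    }

    public static void main(String[] args) {
        Map<String, String> vars = new HashMap<>();
        vars.put("env:TERM", "xterm");
        System.out.println(show(vars, "env:TERM")); // env:TERM=xterm
        System.out.println(show(vars, "env:FOO"));  // env:FOO is undefined
    }
}
```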
[jira] [Updated] (HIVE-6184) Bug in SessionManager.stop() in HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-6184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6184: Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks for the contribution Navis! Bug in SessionManager.stop() in HiveServer2 --- Key: HIVE-6184 URL: https://issues.apache.org/jira/browse/HIVE-6184 Project: Hive Issue Type: Bug Components: HiveServer2 Reporter: Jaideep Dhok Assignee: Navis Fix For: 0.13.0 Attachments: HIVE-6184.1.patch.txt The conf setting hive.server2.async.exec.shutdown.timeout is set to a long value (10L) in HiveConf.java, but it is read using getIntVar in SessionManager.stop. Instead it should be read as - {code} long timeout = hiveConf.getLongVar(ConfVars.HIVE_SERVER2_ASYNC_EXEC_SHUTDOWN_TIMEOUT); {code} Current code will either cause an assertion error if assertions are enabled, or it would return the timeout as -1 if the property is not set in hive-site.xml Workaround is to explicitly set the property in hive-site.xml -- This message was sent by Atlassian JIRA (v6.1.5#6160)
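The mismatch can be modeled in a toy class (this is not HiveConf itself; the names are illustrative): the variable's default is declared as a long (10L), but `SessionManager.stop` reads it with the int getter, which either trips an assertion or falls back to -1 when the property is not set in hive-site.xml, exactly as the report describes.

```java
public class ConfVarDemo {
    // stands in for ConfVars.HIVE_SERVER2_ASYNC_EXEC_SHUTDOWN_TIMEOUT,
    // whose default is declared as a long (10L)
    static final Object DEFAULT = Long.valueOf(10L);

    static int getIntVar() {
        if (!(DEFAULT instanceof Integer)) {
            // mirrors the described failure mode: a wrong-typed read
            // yields -1 (or an assertion error with assertions enabled)
            return -1;
        }
        return (Integer) DEFAULT;
    }

    static long getLongVar() {
        // the correct accessor: matches the declared type of the default
        return ((Number) DEFAULT).longValue();
    }

    public static void main(String[] args) {
        System.out.println("getIntVar  -> " + getIntVar());  // -1
        System.out.println("getLongVar -> " + getLongVar()); // 10
    }
}
```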
Hive-trunk-hadoop2 - Build # 680 - Still Failing
Changes for Build #640 Changes for Build #641 [navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis reviewed by Thejas M Nair) [navis] HIVE-4257 : java.sql.SQLNonTransientConnectionException on JDBCStatsAggregator (Teddy Choi via Navis, reviewed by Ashutosh) Changes for Build #642 Changes for Build #643 [ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package from Microsoft to Hive (Hideaki Kumura via Eric Hanson) Changes for Build #644 [cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming conventions (Sergey Shelukhin via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II (Navis via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis via cws) [jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure vectorization produces same results as non-vectorized execution (Sergey Shelukhin via Jitendra Pandey) Changes for Build #645 Changes for Build #646 [ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson) Changes for Build #647 [thejas] HIVE-5795 : Hive should be able to skip header and footer rows when reading data file for a table (Shuaishuai Nie via Thejas Nair) Changes for Build #648 [thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by Brock Noland) Changes for Build #649 Changes for Build #650 Changes for Build #651 [brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim Kulkarni via Brock) Changes for Build #652 Changes for Build #653 [gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by Thejas M Nair) Changes for Build #654 [cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam via cws) Changes for Build #655 [brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler (Swarnim Kulkarni via Brock Noland) [brock] HIVE-5946 - DDL authorization task factory should be better tested (Brock reviewed by 
Thejas) Changes for Build #656 Changes for Build #657 [gunther] HIVE-6105: LongWritable.compareTo needs shimming (Navis vis Gunther Hagleitner) Changes for Build #658 Changes for Build #659 [ehans] HIVE-6051: Create DecimalColumnVector and a representative VectorExpression for decimal (Eric Hanson) Changes for Build #660 [thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url should be about to load serde schema from file system beside HDFS (Shuaishuai Nie via Thejas Nair) [thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via Thejas Nair) Changes for Build #661 Changes for Build #662 [gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan) Changes for Build #663 [hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas Nair) Changes for Build #664 [xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it when file path have spaces Changes for Build #665 Changes for Build #666 Changes for Build #667 [brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock reviewed by Xuefu and Sushanth) Changes for Build #668 [hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth Sowmyan via Ashutosh Chauhan) [hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin via Ashutosh Chauhan) Changes for Build #669 Changes for Build #670 [ehans] HIVE-6067: Implement vectorized decimal comparison filters (Eric Hanson) Changes for Build #671 [hashutosh] HIVE-6185 : DDLTask is inconsistent in creating a table and adding a partition when dealing with location (Xuefu Zhang via Ashutosh Chauhan) [hashutosh] HIVE-5032 : Enable hive creating external table at the root directory of DFS (Shuaishuai Nie via Ashutosh Chauhan) Changes for Build #672 [navis] HIVE-6177 : Fix keyword KW_REANME which 
was intended to be KW_RENAME (Navis reviewed by Brock Noland) [jitendra] HIVE-6156. Implement vectorized reader for Date datatype for ORC format. (jitendra) Changes for Build #673 [hashutosh] HIVE-4216 : TestHBaseMinimrCliDriver throws weird error with HBase 0.94.5 and Hadoop 23 and test is stuck infinitely (Jason Dere via Brock Noland) Changes for Build #674 [hashutosh] HIVE-5515 : Writing to an HBase table throws IllegalArgumentException, failing job submission (Viraj Bhat via Ashutosh Chauhan Sushanth Sowmyan) Changes for Build #675 Changes for Build #676 [thejas] HIVE-5941 : SQL std auth - support 'show roles' (Navis via Thejas Nair) Changes for Build #677 [navis] HIVE-6161 : Fix TCLIService duplicate thrift definition for TColumn (Jay Bennett via Navis) Changes for Build #678 Changes for Build #679 [xuefu] HIVE-6174: Beeline 'set varible' doesn't show the value of the
[jira] [Updated] (HIVE-6189) Support top level union all statements
[ https://issues.apache.org/jira/browse/HIVE-6189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-6189: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. Thanks for reviewing [~rhbutani] and [~navis]!

Support top level union all statements -- Key: HIVE-6189 URL: https://issues.apache.org/jira/browse/HIVE-6189 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-6189.1.patch, HIVE-6189.2.patch, HIVE-6189.3.patch

I've always wondered why union all has to be in subqueries in hive. After looking at it, the problems are:
- Hive Parser:
  - Union happens at the wrong place: (insert ... select ... union all select ...) is parsed as (insert select) union select.
  - There are many rewrite rules in the parser to force any query into the from-insert-select form, no doubt for historical reasons.
- Plan generation/semantic analysis assumes a top level TOK_QUERY and not a top level TOK_UNION.

The rewrite rules don't work when we move the UNION ALL recursion into the select statements. However, it's not hard to do that in code. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
Hive-trunk-h0.21 - Build # 2580 - Still Failing
Changes for Build #2539 Changes for Build #2540 [navis] HIVE-5414 : The result of show grant is not visible via JDBC (Navis reviewed by Thejas M Nair) Changes for Build #2541 Changes for Build #2542 [ehans] HIVE-6017: Contribute Decimal128 high-performance decimal(p, s) package from Microsoft to Hive (Hideaki Kumura via Eric Hanson) Changes for Build #2543 [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression II (Navis via cws) [cws] HIVE-3746: Fix HS2 ResultSet Serialization Performance Regression (Navis via cws) [jitendra] HIVE-6010: TestCompareCliDriver enables tests that would ensure vectorization produces same results as non-vectorized execution (Sergey Shelukhin via Jitendra Pandey) Changes for Build #2544 [cws] HIVE-5911: Recent change to schema upgrade scripts breaks file naming conventions (Sergey Shelukhin via cws) Changes for Build #2545 Changes for Build #2546 [ehans] HIVE-5757: Implement vectorized support for CASE (Eric Hanson) Changes for Build #2547 [thejas] HIVE-5795 : Hive should be able to skip header and footer rows when reading data file for a table (Shuaishuai Nie via Thejas Nair) Changes for Build #2548 [thejas] HIVE-5923 : SQL std auth - parser changes (Thejas Nair, reviewed by Brock Noland) Changes for Build #2549 Changes for Build #2550 Changes for Build #2551 [brock] HIVE-3936 - Remote debug failed with hadoop 0.23X, hadoop 2.X (Swarnim Kulkarni via Brock) Changes for Build #2552 Changes for Build #2553 [gunther] HIVE-6125: Tez: Refactoring changes (Gunther Hagleitner, reviewed by Thejas M Nair) Changes for Build #2554 [cws] HIVE-5829: Rewrite Trim and Pad UDFs based on GenericUDF (Mohammad Islam via cws) Changes for Build #2555 [brock] HIVE-2599 - Support Composit/Compound Keys with HBaseStorageHandler (Swarnim Kulkarni via Brock Noland) [brock] HIVE-5946 - DDL authorization task factory should be better tested (Brock reviewed by Thejas) Changes for Build #2556 [gunther] HIVE-6105: LongWritable.compareTo needs shimming 
(Navis vis Gunther Hagleitner) Changes for Build #2557 Changes for Build #2558 [ehans] HIVE-6051: Create DecimalColumnVector and a representative VectorExpression for decimal (Eric Hanson) Changes for Build #2559 [thejas] HIVE-5224 : When creating table with AVRO serde, the avro.schema.url should be about to load serde schema from file system beside HDFS (Shuaishuai Nie via Thejas Nair) [thejas] HIVE-6154 : HiveServer2 returns a detailed error message to the client only when the underlying exception is a HiveSQLException (Vaibhav Gumashta via Thejas Nair) Changes for Build #2560 Changes for Build #2561 [gunther] HIVE-6098: Merge Tez branch into trunk (Gunther Hagleitner et al, reviewed by Thejas Nair, Vikram Dixit K, Ashutosh Chauhan) Changes for Build #2562 [hashutosh] HIVE-6171 : Use Paths consistently - V (Ashutosh Chauhan via Thejas Nair) Changes for Build #2563 Changes for Build #2564 [xuefu] HIVE-5446: Hive can CREATE an external table but not SELECT from it when file path have spaces Changes for Build #2565 Changes for Build #2566 Changes for Build #2567 [brock] HIVE-6115 - Remove redundant code in HiveHBaseStorageHandler (Brock reviewed by Xuefu and Sushanth) Changes for Build #2568 [hashutosh] HIVE-6166 : JsonSerDe is too strict about table schema (Sushanth Sowmyan via Ashutosh Chauhan) [hashutosh] HIVE-5679 : add date support to metastore JDO/SQL (Sergey Shelukhin via Ashutosh Chauhan) Changes for Build #2569 Changes for Build #2570 [ehans] HIVE-6067: Implement vectorized decimal comparison filters (Eric Hanson) Changes for Build #2571 [hashutosh] HIVE-6185 : DDLTask is inconsistent in creating a table and adding a partition when dealing with location (Xuefu Zhang via Ashutosh Chauhan) [hashutosh] HIVE-5032 : Enable hive creating external table at the root directory of DFS (Shuaishuai Nie via Ashutosh Chauhan) Changes for Build #2572 [jitendra] HIVE-6156. Implement vectorized reader for Date datatype for ORC format. 
(jitendra) Changes for Build #2573 [navis] HIVE-6177 : Fix keyword KW_REANME which was intended to be KW_RENAME (Navis reviewed by Brock Noland) Changes for Build #2574 [hashutosh] HIVE-5515 : Writing to an HBase table throws IllegalArgumentException, failing job submission (Viraj Bhat via Ashutosh Chauhan Sushanth Sowmyan) [hashutosh] HIVE-4216 : TestHBaseMinimrCliDriver throws weird error with HBase 0.94.5 and Hadoop 23 and test is stuck infinitely (Jason Dere via Brock Noland) Changes for Build #2575 Changes for Build #2576 [thejas] HIVE-5941 : SQL std auth - support 'show roles' (Navis via Thejas Nair) Changes for Build #2577 [navis] HIVE-6161 : Fix TCLIService duplicate thrift definition for TColumn (Jay Bennett via Navis) Changes for Build #2578 Changes for Build #2579 [xuefu] HIVE-6174: Beeline 'set varible' doesn't show the value of the variable as Hive CLI [hashutosh] HIVE-6196 : Incorrect package name for
[jira] [Updated] (HIVE-5928) Add a hive authorization plugin api that does not assume privileges needed
[ https://issues.apache.org/jira/browse/HIVE-5928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5928: Status: Patch Available (was: Open)

Add a hive authorization plugin api that does not assume privileges needed -- Key: HIVE-5928 URL: https://issues.apache.org/jira/browse/HIVE-5928 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Assignee: Thejas M Nair Attachments: HIVE-5928.1.patch, HIVE-5928.2.patch, hive_auth_class_preview.txt Original Estimate: 120h Time Spent: 2h Remaining Estimate: 12h

The existing HiveAuthorizationProvider interface implementations can be used to support custom authorization models, but the interface limits customization for these reasons:
1. It makes assumptions about the privileges required for an action.
2. It does not have functions that you can implement to provide custom ways of carrying out the actions of access control statements.

This jira proposes a new interface, HiveAuthorizer, that makes no assumptions about the privileges required for an action. The authorize() functions will be the equivalent of authorize(operation type, input objects, output objects). It will also have functions that are called from the access control statements. The current HiveAuthorizationProvider will continue to be supported for backward compatibility. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
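A hedged sketch of the interface shape described above; the method signature and class names here are illustrative, not the committed API. The point is that the plugin, not Hive, decides which privileges an operation needs, so a trivial implementation can enforce any policy end to end:

```java
import java.util.List;

// illustrative shape: authorize(operation type, input objects, output objects)
interface HiveAuthorizer {
    void authorize(String operationType, List<String> inputs, List<String> outputs);
}

// a toy implementation that forbids writing to one object, showing the
// privilege decision living entirely inside the plugin
public class DenyListAuthorizer implements HiveAuthorizer {
    @Override
    public void authorize(String op, List<String> inputs, List<String> outputs) {
        if (outputs.contains("secret_table")) {
            throw new SecurityException(op + " is not allowed on secret_table");
        }
        // everything else is permitted; no privilege model is assumed
    }
}
```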
[jira] [Commented] (HIVE-6159) Hive uses deprecated hadoop configuration in Hadoop 2.0
[ https://issues.apache.org/jira/browse/HIVE-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872875#comment-13872875 ] Hive QA commented on HIVE-6159: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12623246/HIVE-6159.5.patch {color:green}SUCCESS:{color} +1 4927 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/924/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/924/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12623246

Hive uses deprecated hadoop configuration in Hadoop 2.0 --- Key: HIVE-6159 URL: https://issues.apache.org/jira/browse/HIVE-6159 Project: Hive Issue Type: Bug Components: Configuration Affects Versions: 0.12.0 Reporter: shanyu zhao Assignee: shanyu zhao Fix For: 0.13.0 Attachments: HIVE-6159-v2.patch, HIVE-6159-v3.patch, HIVE-6159-v4.patch, HIVE-6159.4.patch, HIVE-6159.5.patch, HIVE-6159.patch

Build hive against hadoop 2.0, then run the hive CLI; you'll see deprecated-configuration warnings like this:

13/12/14 01:00:51 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
13/12/14 01:00:52 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative

-- This message was sent by Atlassian JIRA (v6.1.5#6160)
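One straightforward way to silence such warnings, sketched below under the assumption that Hive translates each deprecated key to its replacement before setting it (the class and method names are illustrative, not Hive's actual fix). The mapping is copied directly from the warnings in the report:

```java
import java.util.Map;

public class DeprecatedKeyDemo {
    // deprecated mapred.* key -> Hadoop 2 replacement, per the log above
    static final Map<String, String> RENAMES = Map.of(
        "mapred.input.dir.recursive",
            "mapreduce.input.fileinputformat.input.dir.recursive",
        "mapred.max.split.size",
            "mapreduce.input.fileinputformat.split.maxsize",
        "mapred.min.split.size",
            "mapreduce.input.fileinputformat.split.minsize",
        "mapred.reduce.tasks",
            "mapreduce.job.reduces");

    // return the modern key, passing unknown keys through unchanged
    static String canonical(String key) {
        return RENAMES.getOrDefault(key, key);
    }
}
```

Setting only canonical names means Hadoop 2's `Configuration.deprecation` logger never fires for these properties.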
Hive-trunk-hadoop2 - Build # 681 - Still Failing
[jira] [Updated] (HIVE-5997) Replace SemanticException with HiveException for method signatures
[ https://issues.apache.org/jira/browse/HIVE-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-5997: Status: Patch Available (was: Open) Rebased to trunk. Replace SemanticException with HiveException for method signatures - Key: HIVE-5997 URL: https://issues.apache.org/jira/browse/HIVE-5997 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-5597.1.patch.txt, HIVE-5997.2.patch.txt, HIVE-5997.3.patch.txt There is so much code in the planning stage that just wraps HiveException in SemanticException, which seems totally meaningless. How about replacing it all? -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-5997) Replace SemanticException with HiveException for method signatures
[ https://issues.apache.org/jira/browse/HIVE-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-5997: Attachment: HIVE-5997.3.patch.txt Replace SemanticException with HiveException for method signatures - Key: HIVE-5997 URL: https://issues.apache.org/jira/browse/HIVE-5997 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-5597.1.patch.txt, HIVE-5997.2.patch.txt, HIVE-5997.3.patch.txt There is so much code in the planning stage that just wraps HiveException in SemanticException, which seems totally meaningless. How about replacing it all? -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-5997) Replace SemanticException with HiveException for method signatures
[ https://issues.apache.org/jira/browse/HIVE-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-5997: Status: Open (was: Patch Available) Replace SemanticException with HiveException for method signatures - Key: HIVE-5997 URL: https://issues.apache.org/jira/browse/HIVE-5997 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-5597.1.patch.txt, HIVE-5997.2.patch.txt, HIVE-5997.3.patch.txt There is so much code in the planning stage that just wraps HiveException in SemanticException, which seems totally meaningless. How about replacing it all? -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (HIVE-6209) 'LOAD DATA INPATH ... OVERWRITE ..' doesn't overwrite current data
Szehon Ho created HIVE-6209: --- Summary: 'LOAD DATA INPATH ... OVERWRITE ..' doesn't overwrite current data Key: HIVE-6209 URL: https://issues.apache.org/jira/browse/HIVE-6209 Project: Hive Issue Type: Bug Reporter: Szehon Ho Assignee: Szehon Ho Fix For: 0.13.0

When a user loads data into a table with OVERWRITE, using a different file, the existing data is not overwritten:

{code} $ hdfs dfs -cat /tmp/data aaa bbb ccc $ hdfs dfs -cat /tmp/data2 ddd eee fff $ hive hive create table test (id string); hive load data inpath '/tmp/data' overwrite into table test; hive select * from test; aaa bbb ccc hive load data inpath '/tmp/data2' overwrite into table test; hive select * from test; aaa bbb ccc ddd eee fff {code}

It seems this was broken by HIVE-3756, which added another condition on whether rmr should be run on the old directory, and skips it in this case. There is a workaround of set fs.hdfs.impl.disable.cache=true; which sabotages this condition, but the condition should be removed in the long term. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6209) 'LOAD DATA INPATH ... OVERWRITE ..' doesn't overwrite current data
[ https://issues.apache.org/jira/browse/HIVE-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6209: Affects Version/s: 0.12.0 Fix Version/s: (was: 0.13.0) 'LOAD DATA INPATH ... OVERWRITE ..' doesn't overwrite current data -- Key: HIVE-6209 URL: https://issues.apache.org/jira/browse/HIVE-6209 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Szehon Ho Assignee: Szehon Ho

When a user loads data into a table with OVERWRITE, using a different file, the existing data is not overwritten:

{code} $ hdfs dfs -cat /tmp/data aaa bbb ccc $ hdfs dfs -cat /tmp/data2 ddd eee fff $ hive hive create table test (id string); hive load data inpath '/tmp/data' overwrite into table test; hive select * from test; aaa bbb ccc hive load data inpath '/tmp/data2' overwrite into table test; hive select * from test; aaa bbb ccc ddd eee fff {code}

It seems this was broken by HIVE-3756, which added another condition on whether rmr should be run on the old directory, and skips it in this case. There is a workaround of set fs.hdfs.impl.disable.cache=true; which sabotages this condition, but the condition should be removed in the long term. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
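The fs.hdfs.impl.disable.cache workaround hints at the mechanism. The following is a toy model of the suspected interaction, not Hive's actual code: the old-directory delete is guarded by a FileSystem-handle comparison, and with the cache on, both lookups return the same handle, so the branch is skipped and old rows survive the OVERWRITE; disabling the cache yields distinct handles, which is why the workaround restores the delete.

```java
import java.util.HashMap;
import java.util.Map;

public class FsCacheDemo {
    // stand-in for Hadoop's FileSystem cache, keyed by URI
    static final Map<String, Object> CACHE = new HashMap<>();
    static boolean cacheDisabled = false; // fs.hdfs.impl.disable.cache

    static Object getFileSystem(String uri) {
        if (cacheDisabled) {
            return new Object(); // fresh handle on every call
        }
        return CACHE.computeIfAbsent(uri, k -> new Object());
    }

    // stand-in for the condition added by HIVE-3756: delete the old
    // directory only when the two handles differ
    static boolean wouldDeleteOldDir(String oldUri, String destUri) {
        return getFileSystem(oldUri) != getFileSystem(destUri);
    }

    public static void main(String[] args) {
        System.out.println("cache on:  delete? "
            + wouldDeleteOldDir("hdfs://nn/t", "hdfs://nn/t")); // false
        cacheDisabled = true;
        System.out.println("cache off: delete? "
            + wouldDeleteOldDir("hdfs://nn/t", "hdfs://nn/t")); // true
    }
}
```

This is consistent with the report's conclusion that the handle-comparison condition, not the cache, is what should ultimately be removed.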
[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE
[ https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872901#comment-13872901 ] Thejas M Nair commented on HIVE-5931: - Navis, can you please contribute the HIVE-5931 patch? Do let me know if that patch is blocked on some other patch already submitted. SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE - Key: HIVE-5931 URL: https://issues.apache.org/jira/browse/HIVE-5931 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Original Estimate: 24h Remaining Estimate: 24h This is necessary for the DESCRIBE ROLE role statement. It will list all users and roles that participate in a role. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-4144) Add select database() command to show the current database
[ https://issues.apache.org/jira/browse/HIVE-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-4144: Status: Open (was: Patch Available) Add select database() command to show the current database Key: HIVE-4144 URL: https://issues.apache.org/jira/browse/HIVE-4144 Project: Hive Issue Type: Bug Components: SQL Reporter: Mark Grover Assignee: Navis Attachments: D9597.5.patch, HIVE-4144.10.patch.txt, HIVE-4144.6.patch.txt, HIVE-4144.7.patch.txt, HIVE-4144.8.patch.txt, HIVE-4144.9.patch.txt, HIVE-4144.D9597.1.patch, HIVE-4144.D9597.2.patch, HIVE-4144.D9597.3.patch, HIVE-4144.D9597.4.patch A recent hive-user mailing list conversation asked about having a command to show the current database. http://mail-archives.apache.org/mod_mbox/hive-user/201303.mbox/%3CCAMGr+0i+CRY69m3id=DxthmUCWLf0NxpKMCtROb=uauh2va...@mail.gmail.com%3E MySQL seems to have a command to do so: {code} select database(); {code} http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_database We should look into having something similar in Hive. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-4144) Add select database() command to show the current database
[ https://issues.apache.org/jira/browse/HIVE-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-4144: Attachment: HIVE-4144.11.patch.txt Add select database() command to show the current database Key: HIVE-4144 URL: https://issues.apache.org/jira/browse/HIVE-4144 Project: Hive Issue Type: Bug Components: SQL Reporter: Mark Grover Assignee: Navis Attachments: D9597.5.patch, HIVE-4144.10.patch.txt, HIVE-4144.11.patch.txt, HIVE-4144.6.patch.txt, HIVE-4144.7.patch.txt, HIVE-4144.8.patch.txt, HIVE-4144.9.patch.txt, HIVE-4144.D9597.1.patch, HIVE-4144.D9597.2.patch, HIVE-4144.D9597.3.patch, HIVE-4144.D9597.4.patch A recent hive-user mailing list conversation asked about having a command to show the current database. http://mail-archives.apache.org/mod_mbox/hive-user/201303.mbox/%3CCAMGr+0i+CRY69m3id=DxthmUCWLf0NxpKMCtROb=uauh2va...@mail.gmail.com%3E MySQL seems to have a command to do so: {code} select database(); {code} http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_database We should look into having something similar in Hive. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-4144) Add select database() command to show the current database
[ https://issues.apache.org/jira/browse/HIVE-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-4144: Status: Patch Available (was: Open) Rebased to trunk; fixed the test failure (cannot reproduce TestMinimrCliDriver.testCliDriver_bucket5) Add select database() command to show the current database Key: HIVE-4144 URL: https://issues.apache.org/jira/browse/HIVE-4144 Project: Hive Issue Type: Bug Components: SQL Reporter: Mark Grover Assignee: Navis Attachments: D9597.5.patch, HIVE-4144.10.patch.txt, HIVE-4144.11.patch.txt, HIVE-4144.6.patch.txt, HIVE-4144.7.patch.txt, HIVE-4144.8.patch.txt, HIVE-4144.9.patch.txt, HIVE-4144.D9597.1.patch, HIVE-4144.D9597.2.patch, HIVE-4144.D9597.3.patch, HIVE-4144.D9597.4.patch A recent hive-user mailing list conversation asked about having a command to show the current database. http://mail-archives.apache.org/mod_mbox/hive-user/201303.mbox/%3CCAMGr+0i+CRY69m3id=DxthmUCWLf0NxpKMCtROb=uauh2va...@mail.gmail.com%3E MySQL seems to have a command to do so: {code} select database(); {code} http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_database We should look into having something similar in Hive. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
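The behavior the issue asks for can be sketched as a tiny session model. This is a toy Python illustration, not Hive code; the class and method names are hypothetical:

```python
# Toy model of per-session "current database" state that a database()-style
# function would expose (names are illustrative, not Hive's).
class Session:
    def __init__(self):
        self.current_db = "default"  # Hive sessions start in 'default'

    def use(self, db):
        # analogous to: USE db;
        self.current_db = db

    def database(self):
        # analogous to: select database();
        return self.current_db

s = Session()
assert s.database() == "default"
s.use("userdb")
assert s.database() == "userdb"
```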
[jira] [Updated] (HIVE-5771) Constant propagation optimizer for Hive
[ https://issues.apache.org/jira/browse/HIVE-5771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Xu updated HIVE-5771: - Attachment: (was: HIVE-5771.6.patch) Constant propagation optimizer for Hive --- Key: HIVE-5771 URL: https://issues.apache.org/jira/browse/HIVE-5771 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Ted Xu Assignee: Ted Xu Attachments: HIVE-5771.1.patch, HIVE-5771.2.patch, HIVE-5771.3.patch, HIVE-5771.4.patch, HIVE-5771.5.patch, HIVE-5771.patch Currently there is no constant folding/propagation optimizer; all expressions are evaluated at runtime. HIVE-2470 did a great job of evaluating constants in the UDF initialization phase; however, that is still a runtime evaluation, and it doesn't propagate constants from a subquery outward. Introducing such an optimizer may reduce I/O and accelerate processing. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-5771) Constant propagation optimizer for Hive
[ https://issues.apache.org/jira/browse/HIVE-5771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Xu updated HIVE-5771: - Attachment: HIVE-5771.6.patch Constant propagation optimizer for Hive --- Key: HIVE-5771 URL: https://issues.apache.org/jira/browse/HIVE-5771 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Ted Xu Assignee: Ted Xu Attachments: HIVE-5771.1.patch, HIVE-5771.2.patch, HIVE-5771.3.patch, HIVE-5771.4.patch, HIVE-5771.5.patch, HIVE-5771.patch Currently there is no constant folding/propagation optimizer; all expressions are evaluated at runtime. HIVE-2470 did a great job of evaluating constants in the UDF initialization phase; however, that is still a runtime evaluation, and it doesn't propagate constants from a subquery outward. Introducing such an optimizer may reduce I/O and accelerate processing. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-5771) Constant propagation optimizer for Hive
[ https://issues.apache.org/jira/browse/HIVE-5771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Xu updated HIVE-5771: - Attachment: HIVE-5771.6.patch Constant propagation optimizer for Hive --- Key: HIVE-5771 URL: https://issues.apache.org/jira/browse/HIVE-5771 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Ted Xu Assignee: Ted Xu Attachments: HIVE-5771.1.patch, HIVE-5771.2.patch, HIVE-5771.3.patch, HIVE-5771.4.patch, HIVE-5771.5.patch, HIVE-5771.6.patch, HIVE-5771.patch Currently there is no constant folding/propagation optimizer; all expressions are evaluated at runtime. HIVE-2470 did a great job of evaluating constants in the UDF initialization phase; however, that is still a runtime evaluation, and it doesn't propagate constants from a subquery outward. Introducing such an optimizer may reduce I/O and accelerate processing. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
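The core idea of constant folding can be shown on a toy expression tree. This is a minimal Python sketch of the technique, not Hive's actual optimizer:

```python
# Toy constant-folding pass: collapse any subtree whose children are all
# integer constants into a single constant node, so it is evaluated once
# at compile time instead of per row.
def fold(node):
    if isinstance(node, tuple):              # ('op', left, right)
        op, left, right = node
        left, right = fold(left), fold(right)
        if isinstance(left, int) and isinstance(right, int):
            return {'+': left + right, '*': left * right}[op]
        return (op, left, right)
    return node                              # leaf: constant or column name

# (1 + 2) * key  folds to  3 * key
expr = ('*', ('+', 1, 2), 'key')
assert fold(expr) == ('*', 3, 'key')
```

A real optimizer would also propagate folded constants across operator boundaries (e.g. from a subquery into the outer query), which is the part the issue notes HIVE-2470 did not cover.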
[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE
[ https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872931#comment-13872931 ] Navis commented on HIVE-5931: - [~thejas] I've tried to rebase HIVE-5931 on trunk, but it's heavily dependent on HIVE-6204, which in turn depends on HIVE-6122. I don't like the way HIVE-6122 is implemented, but we should hurry on it. SQL std auth - add metastore get_role_participants api - to support DESCRIBE ROLE - Key: HIVE-5931 URL: https://issues.apache.org/jira/browse/HIVE-5931 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Original Estimate: 24h Remaining Estimate: 24h This is necessary for the DESCRIBE ROLE statement. This will list all users and roles that participate in a role. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
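The proposed metastore call can be sketched in a few lines. This is a hypothetical Python model of what a get_role_participants-style API would return, not the actual Thrift interface:

```python
# Toy model: role grants as (principal_kind, principal_name, role) triples;
# the new API lists every user and role directly granted a given role.
role_grants = [
    ("user", "alice", "admin_role"),
    ("role", "etl_role", "admin_role"),
    ("user", "bob", "etl_role"),
]

def get_role_participants(role):
    # all principals that participate in `role` (direct grants only)
    return sorted((kind, name) for kind, name, r in role_grants if r == role)

assert get_role_participants("admin_role") == [
    ("role", "etl_role"), ("user", "alice")]
```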
Review Request 16938: HIVE-6209 'LOAD DATA INPATH ... OVERWRITE ..' doesn't overwrite current data
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/16938/ --- Review request for hive. Bugs: HIVE-6209 https://issues.apache.org/jira/browse/HIVE-6209 Repository: hive-git Description --- A wrong condition introduced in HIVE-3756 prevented load data overwrite from working properly. In these situations, destf == oldPath == /user/warehouse/hive/tableName, so -rmr was skipped on the old data. Note that if the file name was the same, i.e. running load data inpath 'path' with the same path repeatedly, it would still work, as the rename would overwrite the old data file. But in this case, the filename is different. The other minor changes improve logging in this area to better diagnose such issues (for example, file permissions). Diffs - ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 2fe86e1 Diff: https://reviews.apache.org/r/16938/diff/ Testing --- The primary concern was whether removing the directory in these scenarios would make the rename fail. It should not, thanks to the preceding fs.mkdirs call, but I still verified the following scenarios: load/insert overwrite into a table with partitions, and load/insert overwrite into a table with buckets. Thanks, Szehon Ho
[jira] [Updated] (HIVE-6209) 'LOAD DATA INPATH ... OVERWRITE ..' doesn't overwrite current data
[ https://issues.apache.org/jira/browse/HIVE-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6209: Attachment: HIVE-6209.patch Attaching a fix. 'LOAD DATA INPATH ... OVERWRITE ..' doesn't overwrite current data -- Key: HIVE-6209 URL: https://issues.apache.org/jira/browse/HIVE-6209 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Szehon Ho Assignee: Szehon Ho Attachments: HIVE-6209.patch In the case where the user loads data into a table using overwrite with a different file, the old data is not being overwritten.
{code}
$ hdfs dfs -cat /tmp/data
aaa
bbb
ccc
$ hdfs dfs -cat /tmp/data2
ddd
eee
fff
$ hive
hive> create table test (id string);
hive> load data inpath '/tmp/data' overwrite into table test;
hive> select * from test;
aaa
bbb
ccc
hive> load data inpath '/tmp/data2' overwrite into table test;
hive> select * from test;
aaa
bbb
ccc
ddd
eee
fff
{code}
It seems it was broken by HIVE-3756, which added another condition to whether rmr should be run on the old directory, and skips it in this case. There is a workaround of set fs.hdfs.impl.disable.cache=true; which sabotages this condition, but the condition should be removed in the long term. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
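The mechanism of the bug can be modeled abstractly: when the condition skips the -rmr of the destination, a load from a differently named source file leaves the old file in place. This is a toy Python model of that behavior, not Hive's Hive.java code:

```python
# Toy model of LOAD DATA ... OVERWRITE: a "table directory" as a dict of
# filename -> rows. `skip_rmr=True` mirrors the broken HIVE-3756 condition
# that skipped deleting the old contents before the rename.
def load_overwrite(table_dir, src_name, src_rows, skip_rmr):
    if not skip_rmr:
        table_dir.clear()           # the -rmr of the old data
    table_dir[src_name] = src_rows  # the rename into the table dir

# Buggy path: both files survive, so select * shows six rows, as in the repro.
table = {}
load_overwrite(table, "data", ["aaa", "bbb", "ccc"], skip_rmr=True)
load_overwrite(table, "data2", ["ddd", "eee", "fff"], skip_rmr=True)
assert sorted(table) == ["data", "data2"]

# Fixed path: only the newly loaded file remains.
table = {}
load_overwrite(table, "data", ["aaa", "bbb", "ccc"], skip_rmr=False)
load_overwrite(table, "data2", ["ddd", "eee", "fff"], skip_rmr=False)
assert sorted(table) == ["data2"]
```

This also shows why loading the *same* path repeatedly masked the bug: the rename overwrites a file of the same name even when the delete is skipped.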
[jira] [Updated] (HIVE-6122) Implement show grant on resource
[ https://issues.apache.org/jira/browse/HIVE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-6122: Status: Open (was: Patch Available) Implement show grant on resource -- Key: HIVE-6122 URL: https://issues.apache.org/jira/browse/HIVE-6122 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6122.1.patch.txt, HIVE-6122.2.patch.txt, HIVE-6122.3.patch.txt Currently, Hive shows the privileges owned by a principal. A reverse API is also needed, which shows all principals for a resource. {noformat} show grant user hive_test_user on database default; show grant user hive_test_user on table dummy; show grant user hive_test_user on all; {noformat} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6122) Implement show grant on resource
[ https://issues.apache.org/jira/browse/HIVE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-6122: Attachment: HIVE-6122.3.patch.txt Implement show grant on resource -- Key: HIVE-6122 URL: https://issues.apache.org/jira/browse/HIVE-6122 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6122.1.patch.txt, HIVE-6122.2.patch.txt, HIVE-6122.3.patch.txt Currently, Hive shows the privileges owned by a principal. A reverse API is also needed, which shows all principals for a resource. {noformat} show grant user hive_test_user on database default; show grant user hive_test_user on table dummy; show grant user hive_test_user on all; {noformat} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6167) Allow user-defined functions to be qualified with database name
[ https://issues.apache.org/jira/browse/HIVE-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-6167: - Attachment: HIVE-6167.1.patch Initial patch. Since built-in/temp functions do not have qualifiers, there won't actually be any functions that are resolvable when qualified with a db name, but this does allow Hive to parse qualified function names. Allow user-defined functions to be qualified with database name --- Key: HIVE-6167 URL: https://issues.apache.org/jira/browse/HIVE-6167 Project: Hive Issue Type: Sub-task Components: UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-6167.1.patch Function names in Hive are currently unqualified and there is a single namespace for all function names. This task would allow users to define temporary UDFs (and eventually permanent UDFs) with a database name, such as: CREATE TEMPORARY FUNCTION userdb.myfunc 'myudfclass'; -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6050) JDBC backward compatibility is broken
[ https://issues.apache.org/jira/browse/HIVE-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6050: Affects Version/s: (was: 0.12.0) 0.13.0 JDBC backward compatibility is broken - Key: HIVE-6050 URL: https://issues.apache.org/jira/browse/HIVE-6050 Project: Hive Issue Type: Bug Components: HiveServer2, JDBC Affects Versions: 0.13.0 Reporter: Szehon Ho Assignee: Carl Steinbach Priority: Blocker Connecting from the JDBC driver of Hive 0.13 (TProtocolVersion=v4) to HiveServer2 of Hive 0.10 (TProtocolVersion=v1) returns the following exception: {noformat} java.sql.SQLException: Could not establish connection to jdbc:hive2://localhost:1/default: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null) at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:336) at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:158) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:571) at java.sql.DriverManager.getConnection(DriverManager.java:187) at org.apache.hive.jdbc.MyTestJdbcDriver2.getConnection(MyTestJdbcDriver2.java:73) at org.apache.hive.jdbc.MyTestJdbcDriver2.<init>(MyTestJdbcDriver2.java:49) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:187) at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:236) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:233) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) at org.junit.runners.ParentRunner.run(ParentRunner.java:300) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914) Caused by: org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:108) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71) at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:160) at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:147) at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:327) ... 37 more {noformat} On code analysis, it looks like the 'client_protocol' scheme is a ThriftEnum, which doesn't seem to be backward-compatible. Look at the code path in the generated file 'TOpenSessionReq.java', method TOpenSessionReqStandardScheme.read(): 1. The method will call 'TProtocolVersion.findValue()' on the thrift protocol's byte stream, which returns null if the client is sending an enum value unknown to the server. (v4 is unknown to server) 2. 
The method will then call struct.validate(), which will throw the above exception because of the null version. So it doesn't look like the current backward-compatibility scheme will work. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (HIVE-6050) JDBC backward compatibility is broken
[ https://issues.apache.org/jira/browse/HIVE-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho updated HIVE-6050: Affects Version/s: 0.12.0 JDBC backward compatibility is broken - Key: HIVE-6050 URL: https://issues.apache.org/jira/browse/HIVE-6050 Project: Hive Issue Type: Bug Components: HiveServer2, JDBC Affects Versions: 0.13.0 Reporter: Szehon Ho Assignee: Carl Steinbach Priority: Blocker Connecting from the JDBC driver of Hive 0.13 (TProtocolVersion=v4) to HiveServer2 of Hive 0.10 (TProtocolVersion=v1) returns the following exception: {noformat} java.sql.SQLException: Could not establish connection to jdbc:hive2://localhost:1/default: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null) at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:336) at org.apache.hive.jdbc.HiveConnection.init(HiveConnection.java:158) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) at java.sql.DriverManager.getConnection(DriverManager.java:571) at java.sql.DriverManager.getConnection(DriverManager.java:187) at org.apache.hive.jdbc.MyTestJdbcDriver2.getConnection(MyTestJdbcDriver2.java:73) at org.apache.hive.jdbc.MyTestJdbcDriver2.<init>(MyTestJdbcDriver2.java:49) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:187) at org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:236) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:233) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) at org.junit.runners.ParentRunner.run(ParentRunner.java:300) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:523) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1063) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:914) Caused by: org.apache.thrift.TApplicationException: Required field 'client_protocol' is unset! Struct:TOpenSessionReq(client_protocol:null) at org.apache.thrift.TApplicationException.read(TApplicationException.java:108) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71) at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:160) at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:147) at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:327) ... 37 more {noformat} On code analysis, it looks like the 'client_protocol' scheme is a ThriftEnum, which doesn't seem to be backward-compatible. Look at the code path in the generated file 'TOpenSessionReq.java', method TOpenSessionReqStandardScheme.read(): 1. The method will call 'TProtocolVersion.findValue()' on the thrift protocol's byte stream, which returns null if the client is sending an enum value unknown to the server. (v4 is unknown to server) 2. 
The method will then call struct.validate(), which will throw the above exception because of the null version. So it doesn't look like the current backward-compatibility scheme will work. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
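The two-step failure described above (findValue() returning null for an unknown enum value, then validate() throwing on the unset required field) can be modeled in a few lines. This is a toy Python sketch, not the generated Thrift code:

```python
# Toy model of the backward-compatibility break: an old server only knows
# protocol v1, so a v4 client's enum value maps to None, and the required
# 'client_protocol' field then fails validation.
KNOWN_VERSIONS = {1: "V1"}  # what an old (Hive 0.10) server understands

def find_value(raw):
    # like TProtocolVersion.findValue(): None for unknown enum values
    return KNOWN_VERSIONS.get(raw)

def validate(req):
    # like TOpenSessionReq.validate(): required field must be set
    if req["client_protocol"] is None:
        raise ValueError(
            "Required field 'client_protocol' is unset! "
            "Struct:TOpenSessionReq(client_protocol:null)")

req = {"client_protocol": find_value(4)}  # a v4 client connects
try:
    validate(req)
    rejected = False
except ValueError:
    rejected = True
assert rejected  # the old server rejects the newer client
```

The sketch makes the design problem visible: a *required* enum field can never be forward-compatible, because any value the server does not know collapses to null before validation runs.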
[jira] [Updated] (HIVE-6122) Implement show grant on resource
[ https://issues.apache.org/jira/browse/HIVE-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-6122: Status: Patch Available (was: Open) Fixed test failures (cannot reproduce TestMinimrCliDriver.testCliDriver_schemeAuthority) Implement show grant on resource -- Key: HIVE-6122 URL: https://issues.apache.org/jira/browse/HIVE-6122 Project: Hive Issue Type: Improvement Components: Authorization Reporter: Navis Assignee: Navis Priority: Minor Attachments: HIVE-6122.1.patch.txt, HIVE-6122.2.patch.txt, HIVE-6122.3.patch.txt Currently, Hive shows the privileges owned by a principal. A reverse API is also needed, which shows all principals for a resource. {noformat} show grant user hive_test_user on database default; show grant user hive_test_user on table dummy; show grant user hive_test_user on all; {noformat} -- This message was sent by Atlassian JIRA (v6.1.5#6160)
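The "reverse API" the issue asks for is essentially an inversion of the principal-to-privileges lookup. A toy Python sketch (hypothetical data model, not Hive's authorization store):

```python
# Toy sketch: grants keyed by principal; the reverse query lists every
# principal that holds a privilege on a given resource.
grants = {
    "hive_test_user": ["database:default", "table:dummy"],
    "other_user": ["table:dummy"],
}

def show_grant_on(resource):
    # analogous to: show grant ... on <resource>
    return sorted(p for p, resources in grants.items() if resource in resources)

assert show_grant_on("table:dummy") == ["hive_test_user", "other_user"]
assert show_grant_on("database:default") == ["hive_test_user"]
```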
[jira] [Updated] (HIVE-6167) Allow user-defined functions to be qualified with database name
[ https://issues.apache.org/jira/browse/HIVE-6167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-6167: - Status: Patch Available (was: Open) submitting patch to see how tests look Allow user-defined functions to be qualified with database name --- Key: HIVE-6167 URL: https://issues.apache.org/jira/browse/HIVE-6167 Project: Hive Issue Type: Sub-task Components: UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-6167.1.patch Function names in Hive are currently unqualified and there is a single namespace for all function names. This task would allow users to define temporary UDFs (and eventually permanent UDFs) with a database name, such as: CREATE TEMPORARY FUNCTION userdb.myfunc 'myudfclass'; -- This message was sent by Atlassian JIRA (v6.1.5#6160)
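The lookup behavior the patch enables, parsing an optional db qualifier while built-ins stay in a single shared namespace, can be sketched as follows. This is a hypothetical Python model, not Hive's FunctionRegistry:

```python
# Toy function resolver: an unqualified name hits the shared built-in/temp
# namespace; a "db.func" name is parsed and looked up per database (which,
# per the comment above, resolves nothing yet for built-ins/temp functions).
builtins = {"upper": "<builtin impl>"}
db_functions = {("userdb", "myfunc"): "<myudfclass>"}

def resolve(name):
    if "." in name:
        db, func = name.split(".", 1)   # parse the qualified form
        return db_functions.get((db, func))
    return builtins.get(name)           # single namespace for built-ins

assert resolve("userdb.myfunc") == "<myudfclass>"
assert resolve("upper") == "<builtin impl>"
assert resolve("userdb.nosuch") is None  # parses fine, resolves to nothing
```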
[jira] [Commented] (HIVE-6173) Beeline doesn't accept --hiveconf option as Hive CLI does
[ https://issues.apache.org/jira/browse/HIVE-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872955#comment-13872955 ] Alan Gates commented on HIVE-6173: -- One question here is what options users should be able to set. Certain options set in the server shouldn't be changeable by users, such as the authorization provider. Is any work being done here to consider which values the user should and shouldn't be able to set? Beeline doesn't accept --hiveconf option as Hive CLI does - Key: HIVE-6173 URL: https://issues.apache.org/jira/browse/HIVE-6173 Project: Hive Issue Type: Improvement Components: CLI Affects Versions: 0.10.0, 0.11.0, 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-6173.patch {code} beeline -u jdbc:hive2:// --hiveconf a=b Usage: java org.apache.hive.cli.beeline.BeeLine {code} Since Beeline is replacing Hive CLI, it should support this command line option as well. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6173) Beeline doesn't accept --hiveconf option as Hive CLI does
[ https://issues.apache.org/jira/browse/HIVE-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872961#comment-13872961 ] Navis commented on HIVE-6173: - HiveServer can set hive.conf.restricted.list to prohibit overriding critical configurations. It's currently empty by default, but it should include some security/metastore-related configs, imho. Beeline doesn't accept --hiveconf option as Hive CLI does - Key: HIVE-6173 URL: https://issues.apache.org/jira/browse/HIVE-6173 Project: Hive Issue Type: Improvement Components: CLI Affects Versions: 0.10.0, 0.11.0, 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-6173.patch {code} beeline -u jdbc:hive2:// --hiveconf a=b Usage: java org.apache.hive.cli.beeline.BeeLine {code} Since Beeline is replacing Hive CLI, it should support this command line option as well. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (HIVE-6173) Beeline doesn't accept --hiveconf option as Hive CLI does
[ https://issues.apache.org/jira/browse/HIVE-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872962#comment-13872962 ] Prasad Mujumdar commented on HIVE-6173: --- Hive supports a config parameter hive.conf.restricted.list for that purpose. It's a comma-separated list of configs that can't be changed by users. Currently it's empty by default; only hive.conf.restricted.list itself is implicitly added to it. Beeline doesn't accept --hiveconf option as Hive CLI does - Key: HIVE-6173 URL: https://issues.apache.org/jira/browse/HIVE-6173 Project: Hive Issue Type: Improvement Components: CLI Affects Versions: 0.10.0, 0.11.0, 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang Attachments: HIVE-6173.patch {code} beeline -u jdbc:hive2:// --hiveconf a=b Usage: java org.apache.hive.cli.beeline.BeeLine {code} Since Beeline is replacing Hive CLI, it should support this command line option as well. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
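The restricted-list mechanism discussed in the comments can be sketched abstractly. The config key names below are real Hive keys, but the enforcement code is a toy Python model, not HiveConf:

```python
# Toy enforcement of hive.conf.restricted.list: user overrides (e.g. via
# beeline --hiveconf key=value) are rejected for restricted keys.
restricted = {
    "hive.conf.restricted.list",            # implicitly restricted
    "hive.security.authorization.manager",  # the kind of key Alan mentions
}

conf = {}

def set_conf(key, value):
    if key in restricted:
        raise PermissionError("Cannot modify %s at runtime" % key)
    conf[key] = value

set_conf("a", "b")  # like: beeline -u jdbc:hive2:// --hiveconf a=b
assert conf["a"] == "b"

try:
    set_conf("hive.security.authorization.manager", "evil.Impl")
    blocked = False
except PermissionError:
    blocked = True
assert blocked
```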
[jira] [Updated] (HIVE-6205) alter table partition column throws NPE in authorization
[ https://issues.apache.org/jira/browse/HIVE-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-6205: Status: Patch Available (was: Open) Fixed the build failure alter table partition column throws NPE in authorization -- Key: HIVE-6205 URL: https://issues.apache.org/jira/browse/HIVE-6205 Project: Hive Issue Type: Bug Components: Authorization Reporter: Navis Assignee: Navis Attachments: HIVE-6205.1.patch.txt, HIVE-6205.2.patch.txt alter table alter_coltype partition column (dt int); {noformat} 2014-01-15 15:53:40,364 ERROR ql.Driver (SessionState.java:printError(457)) - FAILED: NullPointerException null java.lang.NullPointerException at org.apache.hadoop.hive.ql.Driver.doAuthorization(Driver.java:599) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:479) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:996) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1039) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:932) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:922) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:792) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:686) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.util.RunJar.main(RunJar.java:197) {noformat} Operation for TOK_ALTERTABLE_ALTERPARTS is not defined. 
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
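The root cause noted above, an AST token with no operation defined for it, can be modeled abstractly. This is a toy Python sketch of the failure and a null-guard fix, not the Driver.doAuthorization() code:

```python
# Toy model of the NPE: a token -> operation table with a missing entry.
# Looking up an unmapped token yields None; dereferencing it crashes,
# mirroring the doAuthorization() NPE. A guard turns it into a clear error.
op_for_token = {"TOK_ALTERTABLE_RENAME": "ALTERTABLE_RENAME"}
# "TOK_ALTERTABLE_ALTERPARTS" is deliberately not mapped

def authorize(token):
    op = op_for_token.get(token)
    if op is None:  # without this guard, using `op` would be the NPE
        raise ValueError("Operation for %s is not defined" % token)
    return "authorized:" + op

assert authorize("TOK_ALTERTABLE_RENAME") == "authorized:ALTERTABLE_RENAME"

try:
    authorize("TOK_ALTERTABLE_ALTERPARTS")
    failed_cleanly = False
except ValueError:
    failed_cleanly = True
assert failed_cleanly
```

The actual fix in the patch is to define the missing operation mapping; the guard here just shows why the unmapped token surfaced as a NullPointerException rather than a useful message.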