[jira] [Commented] (HIVE-7444) Update supported operating systems requirements in wikidoc
[ https://issues.apache.org/jira/browse/HIVE-7444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259623#comment-14259623 ] Lefty Leverenz commented on HIVE-7444: -- Requirements updated in Getting Started & Installing Hive to include Java 8: * [Getting Started -- Requirements | https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-Requirements] * [Installing Hive | https://cwiki.apache.org/confluence/display/Hive/AdminManual+Installation#AdminManualInstallation-InstallingHive] (We just need another +1 to close this.) Update supported operating systems requirements in wikidoc Key: HIVE-7444 URL: https://issues.apache.org/jira/browse/HIVE-7444 Project: Hive Issue Type: Bug Components: Documentation Reporter: Lefty Leverenz The first sentence of Getting Started is outdated: {quote} DISCLAIMER: Hive has only been tested on Unix (Linux) and Mac systems using Java 1.6 for now – although it may very well work on other similar platforms. It does not work on Cygwin. {quote} The Requirements section also needs updating: {quote} Requirements * Java 1.6 * Hadoop 0.20.x, 0.23.x, or 2.0.x-alpha {quote} Quick reference: * [Getting Started | https://cwiki.apache.org/confluence/display/Hive/GettingStarted] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7444) Update supported operating systems requirements in wikidoc
[ https://issues.apache.org/jira/browse/HIVE-7444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lefty Leverenz updated HIVE-7444: - Assignee: Thejas M Nair Update supported operating systems requirements in wikidoc Key: HIVE-7444 URL: https://issues.apache.org/jira/browse/HIVE-7444 Project: Hive Issue Type: Bug Components: Documentation Reporter: Lefty Leverenz Assignee: Thejas M Nair The first sentence of Getting Started is outdated: {quote} DISCLAIMER: Hive has only been tested on Unix (Linux) and Mac systems using Java 1.6 for now – although it may very well work on other similar platforms. It does not work on Cygwin. {quote} The Requirements section also needs updating: {quote} Requirements * Java 1.6 * Hadoop 0.20.x, 0.23.x, or 2.0.x-alpha {quote} Quick reference: * [Getting Started | https://cwiki.apache.org/confluence/display/Hive/GettingStarted]
[jira] [Created] (HIVE-9218) Remove authorization_admin_almighty1 from spark tests [Spark Branch]
Brock Noland created HIVE-9218: -- Summary: Remove authorization_admin_almighty1 from spark tests [Spark Branch] Key: HIVE-9218 URL: https://issues.apache.org/jira/browse/HIVE-9218 Project: Hive Issue Type: Sub-task Reporter: Brock Noland The {{authorization_admin_almighty1}} test is authorization only so I don't think we need to run it on spark.
[jira] [Updated] (HIVE-9157) Merge from trunk to spark 12/26/2014 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9157: --- Summary: Merge from trunk to spark 12/26/2014 [Spark Branch] (was: Merge from trunk to spark 12/17/2014 [Spark Branch]) Merge from trunk to spark 12/26/2014 [Spark Branch] --- Key: HIVE-9157 URL: https://issues.apache.org/jira/browse/HIVE-9157 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Fix For: spark-branch
[jira] [Created] (HIVE-9219) Investigate differences for auto join tests in explain after merge from trunk
Brock Noland created HIVE-9219: -- Summary: Investigate differences for auto join tests in explain after merge from trunk Key: HIVE-9219 URL: https://issues.apache.org/jira/browse/HIVE-9219 Project: Hive Issue Type: Sub-task Reporter: Brock Noland {noformat} diff --git a/ql/src/test/results/clientpositive/spark/auto_join14.q.out b/ql/src/test/results/clientpositive/spark/auto_join14.q.out index cbca649..830314e 100644 --- a/ql/src/test/results/clientpositive/spark/auto_join14.q.out +++ b/ql/src/test/results/clientpositive/spark/auto_join14.q.out @@ -38,9 +38,6 @@ STAGE PLANS: predicate: (key 100) (type: boolean) Statistics: Num rows: 166 Data size: 1763 Basic stats: COMPLETE Column stats: NONE Spark HashTable Sink Operator - condition expressions: -0 -1 {value} keys: 0 key (type: string) 1 key (type: string) @@ -62,9 +59,6 @@ STAGE PLANS: Map Join Operator condition map: Inner Join 0 to 1 - condition expressions: -0 {key} -1 {value} keys: 0 key (type: string) 1 key (type: string) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9219) Investigate differences for auto join tests in explain after merge from trunk [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9219: --- Summary: Investigate differences for auto join tests in explain after merge from trunk [Spark Branch] (was: Investigate differences for auto join tests in explain after merge from trunk) Investigate differences for auto join tests in explain after merge from trunk [Spark Branch] Key: HIVE-9219 URL: https://issues.apache.org/jira/browse/HIVE-9219 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland {noformat} diff --git a/ql/src/test/results/clientpositive/spark/auto_join14.q.out b/ql/src/test/results/clientpositive/spark/auto_join14.q.out index cbca649..830314e 100644 --- a/ql/src/test/results/clientpositive/spark/auto_join14.q.out +++ b/ql/src/test/results/clientpositive/spark/auto_join14.q.out @@ -38,9 +38,6 @@ STAGE PLANS: predicate: (key 100) (type: boolean) Statistics: Num rows: 166 Data size: 1763 Basic stats: COMPLETE Column stats: NONE Spark HashTable Sink Operator - condition expressions: -0 -1 {value} keys: 0 key (type: string) 1 key (type: string) @@ -62,9 +59,6 @@ STAGE PLANS: Map Join Operator condition map: Inner Join 0 to 1 - condition expressions: -0 {key} -1 {value} keys: 0 key (type: string) 1 key (type: string) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-7128) Add direct support for creating and managing salted hbase tables
[ https://issues.apache.org/jira/browse/HIVE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-7128 started by Swarnim Kulkarni. -- Add direct support for creating and managing salted hbase tables Key: HIVE-7128 URL: https://issues.apache.org/jira/browse/HIVE-7128 Project: Hive Issue Type: New Feature Components: HBase Handler Affects Versions: 0.13.0 Reporter: Swarnim Kulkarni Assignee: Swarnim Kulkarni Salting is a very important technique for avoiding hot-spotting in HBase. It would be very beneficial if the current HBase integration provided direct support for salting. More information on salting can be found here[1] [1] http://blog.sematext.com/2012/04/09/hbasewd-avoid-regionserver-hotspotting-despite-writing-records-with-sequential-keys/
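The linked article describes prefixing row keys with a salt so that sequentially written keys spread across region servers. As a rough illustration of the idea only (this is not Hive's or HBaseWD's actual implementation; the bucket count and key format below are made up):

```python
import hashlib

N_BUCKETS = 16  # hypothetical number of salt buckets, e.g. one per region server

def salted_key(row_key: str, n_buckets: int = N_BUCKETS) -> str:
    """Prefix a deterministic salt bucket so monotonically increasing
    keys spread across regions instead of hot-spotting a single one."""
    bucket = hashlib.md5(row_key.encode("utf-8")).digest()[0] % n_buckets
    return f"{bucket:02d}_{row_key}"

# Sequential keys no longer sort next to each other:
print([salted_key(f"event{i:06d}") for i in range(3)])
```

The trade-off is on the read side: a scan must now fan out over all n_buckets prefixes, which is exactly the bookkeeping a built-in salted-table feature could hide from the user.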
[jira] [Updated] (HIVE-9195) CBO changes constant to column type
[ https://issues.apache.org/jira/browse/HIVE-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-9195: Attachment: HIVE-9195.3.patch.txt Updated gold files in contrib package CBO changes constant to column type --- Key: HIVE-9195 URL: https://issues.apache.org/jira/browse/HIVE-9195 Project: Hive Issue Type: Bug Components: CBO Reporter: Navis Attachments: HIVE-9195.1.patch.txt, HIVE-9195.2.patch.txt, HIVE-9195.3.patch.txt While making a test case for HIVE-8613, I found that CBO changes a constant expr to a column expr (only in test mode). For example: {code} CREATE TABLE bucket (key double, value string) CLUSTERED BY (key) SORTED BY (key DESC) INTO 4 BUCKETS STORED AS TEXTFILE; load data local inpath '../../data/files/srcsortbucket1outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket2outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket3outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket4outof4.txt' INTO TABLE bucket; select percentile_approx(case when key 100 then cast('NaN' as double) else key end, 0.5) from bucket; {code} It works in the shell, but in TestCliDriver it induces an argument-type exception while creating the UDAF evaluator, which expects a constant OI for the second argument. {noformat} 2014-12-22 17:03:31,433 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:analyzeInternal(10102)) - CBO failed, skipping CBO. org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException: The second argument must be a constant, but double was passed instead. 
at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFPercentileApprox.getEvaluator(GenericUDAFPercentileApprox.java:146) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getGenericUDAFEvaluator(FunctionRegistry.java:1160) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getGenericUDAFEvaluator(SemanticAnalyzer.java:3794) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapGroupByOperator(SemanticAnalyzer.java:4467) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggrNoSkew(SemanticAnalyzer.java:5536) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8884) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9745) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9638) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10086) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:419) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:305) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1107) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1155) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1034) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:206) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:158) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:369) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:304) at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:877) at 
org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:136) at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23(TestCliDriver.java:120) {noformat}
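The check that fails in the stack trace above can be mimicked in a few lines (the class and function names below are stand-ins for illustration, not Hive's API): the percentile argument must be a value known at compile time, so when CBO rewrites the folded constant 0.5 into a column reference, the type check rejects it exactly as reported.

```python
class Constant:
    """Stands in for Hive's ConstantObjectInspector (value known at compile time)."""
    def __init__(self, value):
        self.value = value

class Column:
    """Stands in for a non-constant column expression."""
    def __init__(self, name):
        self.name = name

def get_percentile_evaluator(args):
    """Mimics the validation in GenericUDAFPercentileApprox.getEvaluator:
    the second (percentile) argument must be a compile-time constant."""
    values, percentile = args
    if not isinstance(percentile, Constant):
        raise TypeError(
            "The second argument must be a constant, but %s was passed instead."
            % type(percentile).__name__)
    return ("evaluator", percentile.value)

print(get_percentile_evaluator([Column("key"), Constant(0.5)]))  # accepted
```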
[jira] [Updated] (HIVE-9213) Improve the mask pattern in QTestUtil to save partial directory info in test result
[ https://issues.apache.org/jira/browse/HIVE-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-9213: --- Description: NO PRECOMMIT TESTS The mask pattern in QTestUtil will mask directory in test result, since the directory varies in different test env. However, in Encryption test, the directory info is needed to verify the intermediate files are put in proper table. The whole directory is not necessary, and part of it is enough. was: The mask pattern in QTestUtil will mask directory in test result, since the directory varies in different test env. However, in Encryption test, the directory info is needed to verify the intermediate files are put in proper table. The whole directory is not necessary, and part of it is enough. Improve the mask pattern in QTestUtil to save partial directory info in test result --- Key: HIVE-9213 URL: https://issues.apache.org/jira/browse/HIVE-9213 Project: Hive Issue Type: Sub-task Reporter: Dong Chen Assignee: Dong Chen Fix For: encryption-branch Attachments: HIVE-9213.1.patch, HIVE-9213.patch NO PRECOMMIT TESTS The mask pattern in QTestUtil will mask directory in test result, since the directory varies in different test env. However, in Encryption test, the directory info is needed to verify the intermediate files are put in proper table. The whole directory is not necessary, and part of it is enough.
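The idea described above — mask the environment-specific prefix of a directory but keep its tail, so the golden file can still assert which table directory the files landed in — can be sketched with a regex. The pattern, marker string, and sample line below are illustrative, not QTestUtil's actual mask:

```python
import re

def mask_path(line: str, keep_parts: int = 2) -> str:
    """Replace the environment-specific prefix of an absolute path but
    keep the last `keep_parts` components, so a test result can still
    verify which table directory the intermediate files were put in."""
    def repl(m):
        parts = m.group(0).rstrip("/").split("/")
        return "### MASKED ###/" + "/".join(parts[-keep_parts:])
    return re.sub(r"/[\w./-]+", repl, line)

print(mask_path("Moved: hdfs /tmp/build-1234/warehouse/encdb/tbl1/000000_0"))
# -> Moved: hdfs ### MASKED ###/tbl1/000000_0
```

The varying build directory disappears from the output, while the table and file name survive for the encryption tests to assert on.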
[jira] [Assigned] (HIVE-9171) After use init file in beeline,the consoleReader is setted null.
[ https://issues.apache.org/jira/browse/HIVE-9171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu reassigned HIVE-9171: -- Assignee: Ferdinand Xu After use init file in beeline,the consoleReader is setted null. Key: HIVE-9171 URL: https://issues.apache.org/jira/browse/HIVE-9171 Project: Hive Issue Type: Bug Components: Beeline Affects Versions: 0.14.0 Reporter: Wang Hao Assignee: Ferdinand Xu When I use an init file in Beeline, it throws an exception: ./beeline -i init.sql -f /tmp/test.sql --verbose 0: jdbc:hive2://hadoop015.dx.momo.com:1 SELECT Error: Error while compiling statement: FAILED: ParseException line 1:6 cannot recognize input near 'EOF' 'EOF' 'EOF' in select clause (state=42000,code=4) org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:6 cannot recognize input near 'EOF' 'EOF' 'EOF' in select clause at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:231) at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:217) at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:254) at org.apache.hive.beeline.Commands.execute(Commands.java:784) at org.apache.hive.beeline.Commands.sql(Commands.java:665) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:933) at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:796) at org.apache.hive.beeline.BeeLine.executeFile(BeeLine.java:781) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:726) at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:465) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:451) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: 
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:6 cannot recognize input near 'EOF' 'EOF' 'EOF' in select clause at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:314) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:102) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:171) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:256) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:376) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:363) at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:79) at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:37) at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:64) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:536) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:60) at com.sun.proxy.$Proxy14.executeStatementAsync(Unknown Source) at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:247) at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:401) at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313) at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
[jira] [Updated] (HIVE-4776) Add option for skipping first n-rows of files
[ https://issues.apache.org/jira/browse/HIVE-4776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-4776: Resolution: Duplicate Status: Resolved (was: Patch Available) Add option for skipping first n-rows of files - Key: HIVE-4776 URL: https://issues.apache.org/jira/browse/HIVE-4776 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-4776.D11445.1.patch Some CSV files have header information which should be removed before they are used in Hive.
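This issue was resolved as a duplicate; header skipping eventually surfaced in Hive as the skip.header.line.count table property. The behaviour being asked for is simply discarding the first n rows of each input file, which can be sketched as follows (plain Python, not Hive's record reader):

```python
import csv
import io

def rows_skipping_header(f, n_header_rows: int = 1):
    """Yield CSV rows from a file object, discarding the first
    n_header_rows lines (the header) of the file."""
    reader = csv.reader(f)
    for _ in range(n_header_rows):
        next(reader, None)  # silently tolerate files shorter than the header
    yield from reader

sample = io.StringIO("key,value\n1,a\n2,b\n")
print(list(rows_skipping_header(sample)))  # [['1', 'a'], ['2', 'b']]
```

In HiveQL the equivalent is declared on the table, e.g. TBLPROPERTIES ('skip.header.line.count'='1'), so every file in the table has its first line dropped at read time.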
[jira] [Commented] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files
[ https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259786#comment-14259786 ] Brock Noland commented on HIVE-9167: Hi [~spena], I see a RB item but I cannot remember if this patch was ready? Enhance encryption testing framework to allow create keys zones inside .q files - Key: HIVE-9167 URL: https://issues.apache.org/jira/browse/HIVE-9167 Project: Hive Issue Type: Sub-task Reporter: Sergio Peña Assignee: Sergio Peña The current implementation of the encryption testing framework on HIVE-8900 initializes a couple of encrypted databases to be used on .q test files. This is useful in order to make tests small, but it does not test all details found on the encryption implementation, such as: encrypted tables with different encryption strength in the same database. We need to allow this kind of encryption as it is how it will be used in the real world where a database will have a few encrypted tables (not all the DB). Also, we need to make this encryption framework flexible so that we can create/delete keys zones on demand when running the .q files.
[jira] [Updated] (HIVE-9215) Some mapjoin queries broken with IdentityProjectRemover with PPD
[ https://issues.apache.org/jira/browse/HIVE-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-9215: Attachment: HIVE-9215.1.patch.txt As commented in the code, the select operator in RS-SEL-RS should not be removed, because the current compilers do not expect a null reducer operator. We could modify the compilers, but that seemed too invasive. Alternatively, we can remove the identity select in the physical optimizer. Some mapjoin queries broken with IdentityProjectRemover with PPD Key: HIVE-9215 URL: https://issues.apache.org/jira/browse/HIVE-9215 Project: Hive Issue Type: Bug Components: Logical Optimizer Affects Versions: 0.15.0 Reporter: Szehon Ho Attachments: HIVE-9215.1.patch.txt, auto_join_ppd.q IdentityProjectRemover (hive.optimize.remove.identity.project) with PPD will sometimes produce a mapjoin query that returns the wrong result, in the MR case as well.
[jira] [Updated] (HIVE-9215) Some mapjoin queries broken with IdentityProjectRemover with PPD
[ https://issues.apache.org/jira/browse/HIVE-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-9215: Assignee: Navis Status: Patch Available (was: Open) Some mapjoin queries broken with IdentityProjectRemover with PPD Key: HIVE-9215 URL: https://issues.apache.org/jira/browse/HIVE-9215 Project: Hive Issue Type: Bug Components: Logical Optimizer Affects Versions: 0.15.0 Reporter: Szehon Ho Assignee: Navis Attachments: HIVE-9215.1.patch.txt, auto_join_ppd.q IdentityProjectRemover (hive.optimize.remove.identity.project) with PPD will sometimes produce a mapjoin query that returns the wrong result, in the MR case as well.
[jira] [Commented] (HIVE-8155) In select statement after * any random characters are allowed in hive but in RDBMS its not allowed
[ https://issues.apache.org/jira/browse/HIVE-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259792#comment-14259792 ] Dong Chen commented on HIVE-8155: - Thanks very much for your review!! [~ashutoshc], [~sershe] In select statement after * any random characters are allowed in hive but in RDBMS its not allowed --- Key: HIVE-8155 URL: https://issues.apache.org/jira/browse/HIVE-8155 Project: Hive Issue Type: Improvement Reporter: Ferdinand Xu Assignee: Dong Chen Priority: Critical Attachments: HIVE-8155.1.patch, HIVE-8155.patch In a select statement, any random characters after * are allowed in Hive, but in an RDBMS this is not allowed. Steps: In the queries below, abcdef is a string of random characters. In RDBMS(oracle): select *abcdef from mytable; Output: ERROR prepare() failed with: ORA-00923: FROM keyword not found where expected In Hive: select *abcdef from mytable; Output: The query worked fine and displayed all the records of mytable.
[jira] [Updated] (HIVE-9157) Merge from trunk to spark 12/26/2014 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9157: --- Status: Patch Available (was: Open) Merge from trunk to spark 12/26/2014 [Spark Branch] --- Key: HIVE-9157 URL: https://issues.apache.org/jira/browse/HIVE-9157 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Fix For: spark-branch Attachments: HIVE-9157.1-spark.patch.txt
[jira] [Updated] (HIVE-9157) Merge from trunk to spark 12/26/2014 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9157: --- Attachment: HIVE-9157.1-spark.patch.txt Merge from trunk to spark 12/26/2014 [Spark Branch] --- Key: HIVE-9157 URL: https://issues.apache.org/jira/browse/HIVE-9157 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Fix For: spark-branch Attachments: HIVE-9157.1-spark.patch.txt
[jira] [Assigned] (HIVE-8817) Create unit test where we insert into an encrypted table and then read from it with pig
[ https://issues.apache.org/jira/browse/HIVE-8817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dong Chen reassigned HIVE-8817: --- Assignee: Dong Chen Create unit test where we insert into an encrypted table and then read from it with pig --- Key: HIVE-8817 URL: https://issues.apache.org/jira/browse/HIVE-8817 Project: Hive Issue Type: Sub-task Affects Versions: encryption-branch Reporter: Brock Noland Assignee: Dong Chen Fix For: encryption-branch
[jira] [Assigned] (HIVE-8818) Create unit test where we insert into an encrypted table and then read from it with hcatalog mapreduce
[ https://issues.apache.org/jira/browse/HIVE-8818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dong Chen reassigned HIVE-8818: --- Assignee: Dong Chen Create unit test where we insert into an encrypted table and then read from it with hcatalog mapreduce -- Key: HIVE-8818 URL: https://issues.apache.org/jira/browse/HIVE-8818 Project: Hive Issue Type: Sub-task Reporter: Brock Noland Assignee: Dong Chen
[jira] [Commented] (HIVE-9167) Enhance encryption testing framework to allow create keys zones inside .q files
[ https://issues.apache.org/jira/browse/HIVE-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259796#comment-14259796 ] Ferdinand Xu commented on HIVE-9167: Hi [~spena], I have a few comments left on your review board entry. By adding this crypto_helper command, how can we create two keys of different lengths, since they are specified in the configuration? Enhance encryption testing framework to allow create keys zones inside .q files - Key: HIVE-9167 URL: https://issues.apache.org/jira/browse/HIVE-9167 Project: Hive Issue Type: Sub-task Reporter: Sergio Peña Assignee: Sergio Peña The current implementation of the encryption testing framework on HIVE-8900 initializes a couple of encrypted databases to be used on .q test files. This is useful in order to make tests small, but it does not test all details found on the encryption implementation, such as: encrypted tables with different encryption strength in the same database. We need to allow this kind of encryption as it is how it will be used in the real world where a database will have a few encrypted tables (not all the DB). Also, we need to make this encryption framework flexible so that we can create/delete keys zones on demand when running the .q files.
[jira] [Resolved] (HIVE-7978) Implement SplitSampler for parallel orderby
[ https://issues.apache.org/jira/browse/HIVE-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis resolved HIVE-7978. - Resolution: Duplicate Implement SplitSampler for parallel orderby --- Key: HIVE-7978 URL: https://issues.apache.org/jira/browse/HIVE-7978 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Minor Currently for parallel order-by, random sampler is the only choice, which can be not good in some cases.
[jira] [Resolved] (HIVE-1955) Support non-constant expressions for array indexes.
[ https://issues.apache.org/jira/browse/HIVE-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis resolved HIVE-1955. - Resolution: Fixed Support non-constant expressions for array indexes. --- Key: HIVE-1955 URL: https://issues.apache.org/jira/browse/HIVE-1955 Project: Hive Issue Type: Improvement Reporter: Adam Kramer Assignee: Navis FAILED: Error in semantic analysis: line 4:8 Non Constant Expressions for Array Indexes not Supported ...just wrote my own UDF to do this, and it is trivial. We should support this natively. Let foo have these rows:
arr      i
[1,2,3]  1
[3,4,5]  2
[5,4,3]  2
[0,0,1]  0
Then, SELECT arr[i] FROM foo should return: 2 5 3 1 Similarly, for the same table, SELECT 3 IN arr FROM foo should return: true true true false ...these use cases are needless limitations of functionality. We shouldn't need UDFs to accomplish these goals.
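The requested semantics are easy to state in ordinary code: evaluate the index expression per row rather than requiring a compile-time constant. A sketch over the example rows above (note that with 0-based array indexing the last row, arr=[0,0,1] with i=0, yields 0 rather than the 1 shown in the description):

```python
rows = [([1, 2, 3], 1), ([3, 4, 5], 2), ([5, 4, 3], 2), ([0, 0, 1], 0)]

# SELECT arr[i] FROM foo  -- the index comes from another column, per row
print([arr[i] for arr, i in rows])    # [2, 5, 3, 0]

# SELECT 3 IN arr FROM foo  -- membership test against the array column
print([3 in arr for arr, _ in rows])  # [True, True, True, False]
```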
[jira] [Commented] (HIVE-9195) CBO changes constant to column type
[ https://issues.apache.org/jira/browse/HIVE-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259805#comment-14259805 ] Hive QA commented on HIVE-9195: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12689272/HIVE-9195.3.patch.txt {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6722 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2202/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2202/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2202/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12689272 - PreCommit-HIVE-TRUNK-Build CBO changes constant to column type --- Key: HIVE-9195 URL: https://issues.apache.org/jira/browse/HIVE-9195 Project: Hive Issue Type: Bug Components: CBO Reporter: Navis Attachments: HIVE-9195.1.patch.txt, HIVE-9195.2.patch.txt, HIVE-9195.3.patch.txt While making a test case for HIVE-8613, I found that CBO changes a constant expr to a column expr (only in test mode). For example: 
{code} CREATE TABLE bucket (key double, value string) CLUSTERED BY (key) SORTED BY (key DESC) INTO 4 BUCKETS STORED AS TEXTFILE; load data local inpath '../../data/files/srcsortbucket1outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket2outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket3outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket4outof4.txt' INTO TABLE bucket; select percentile_approx(case when key 100 then cast('NaN' as double) else key end, 0.5) from bucket; {code} It works in shell but in TestCliDriver, that induces argument type exception creating udaf evaluator, which expects constant OI for second argument. {noformat} 2014-12-22 17:03:31,433 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:analyzeInternal(10102)) - CBO failed, skipping CBO. org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException: The second argument must be a constant, but double was passed instead. at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFPercentileApprox.getEvaluator(GenericUDAFPercentileApprox.java:146) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getGenericUDAFEvaluator(FunctionRegistry.java:1160) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getGenericUDAFEvaluator(SemanticAnalyzer.java:3794) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapGroupByOperator(SemanticAnalyzer.java:4467) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggrNoSkew(SemanticAnalyzer.java:5536) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8884) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9745) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9638) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10086) at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:419) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:305) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1107) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1155) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1034) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:206) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:158) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:369) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:304) at
Review Request 29448: HIVE-6992 - Support for PreparedStatement.getMetadata in hive-jdbc and server
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/29448/ --- Review request for hive. Bugs: HIVE-6992 https://issues.apache.org/jira/browse/HIVE-6992 Repository: hive Description --- HIVE-6992 - Support for PreparedStatement.getMetadata in hive-jdbc and server This patch 1. Changes the HiveSession/ICliService.executeStatement prototype to include two parameters: a. a Boolean variable, prepareOnly, to indicate that this execution request is only for preparation; b. existingOpHandle, to execute a prepared operation. 2. Changes TExecuteStatementReq to support the above two parameters. 3. Changes SQLOperation.java to support a separate preparation step. 4. Adds a new OperationState called PREPARED to indicate a prepared operation. 5. Refactors HiveStatement.java/HiveQueryResult to support retrieving the schema from HiveStatement.java. 6. Changes the HiveJdbc class to support PreparedStatement.getMetadata. Also includes a new (basic) unit test for PreparedStatement.getMetadata. A test for executeStatement will be added later after the initial code review. I would like to know if my approach looks OK, or whether reviewers would prefer a separate PrepareStatement API like ExecuteStatement. Diffs - trunk/itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcDriver2.java 1647912 trunk/itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHiveServer2.java 1647912 trunk/itests/hive-unit/src/test/java/org/apache/hive/jdbc/miniHS2/TestHiveServer2SessionTimeout.java 1647912 trunk/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java 1647912 trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java 1647912 trunk/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java 1647912 trunk/service/if/TCLIService.thrift 1647912 trunk/service/src/gen/thrift/gen-cpp/TCLIService_types.h 1647912 trunk/service/src/gen/thrift/gen-cpp/TCLIService_types.cpp 1647912 trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TExecuteStatementReq.java 1647912 
trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TOperationState.java 1647912 trunk/service/src/gen/thrift/gen-py/TCLIService/ttypes.py 1647912 trunk/service/src/gen/thrift/gen-rb/t_c_l_i_service_types.rb 1647912 trunk/service/src/java/org/apache/hive/service/cli/CLIService.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/EmbeddedCLIServiceClient.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/ICLIService.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/OperationState.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/operation/ExecuteStatementOperation.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/operation/HiveCommandOperation.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/operation/MetadataOperation.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/operation/Operation.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/session/HiveSession.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 1647912 trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIServiceClient.java 1647912 trunk/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java 1647912 trunk/service/src/test/org/apache/hive/service/cli/operation/TestOperationLoggingAPI.java 1647912 trunk/service/src/test/org/apache/hive/service/cli/session/TestSessionGlobalInitFile.java 1647912 trunk/service/src/test/org/apache/hive/service/cli/thrift/ThriftCLIServiceTest.java 1647912 Diff: https://reviews.apache.org/r/29448/diff/ Testing --- Unit test added for PreparedStatement.getMetadata Also tested with simple JDBC program. Thanks, Prafulla
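The new PREPARED operation state described in item 4 of the patch summary could slot into the operation lifecycle roughly as follows. This is a hypothetical, self-contained sketch (not the actual OperationState enum in service/cli, which defines its own states and transition checks): a statement is prepared once, metadata can be read while it sits in PREPARED, and the same operation handle is later reused for execution.

```java
// Sketch of an operation lifecycle with a PREPARED stage inserted between
// initialization and execution (hypothetical states and transition rules).
public enum SketchOperationState {
    INITIALIZED, PREPARED, RUNNING, FINISHED, ERROR, CLOSED;

    // Returns true if moving from this state to 'next' is allowed in the sketch.
    public boolean canTransitionTo(SketchOperationState next) {
        switch (this) {
            case INITIALIZED: return next == PREPARED || next == RUNNING || next == CLOSED;
            case PREPARED:    return next == RUNNING || next == ERROR || next == CLOSED;
            case RUNNING:     return next == FINISHED || next == ERROR || next == CLOSED;
            default:          return next == CLOSED; // terminal states can only be closed
        }
    }
}
```

Under this model, PreparedStatement.getMetaData() would be served while the operation is in PREPARED, before any transition to RUNNING.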
[jira] [Updated] (HIVE-6992) Implement PreparedStatement.getMetaData(), getParmeterMetaData()
[ https://issues.apache.org/jira/browse/HIVE-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prafulla T updated HIVE-6992: - Status: Patch Available (was: Open) Description of changes in the attached patch HIVE-6992.1.patch: HIVE-6992 - Support for PreparedStatement.getMetadata in hive-jdbc and server This patch 1. Changes the HiveSession/ICliService.executeStatement prototype to include two parameters: a. a Boolean variable, prepareOnly, to indicate that this execution request is only for preparation; b. existingOpHandle, to execute a prepared operation. 2. Changes TExecuteStatementReq to support the above two parameters. 3. Changes SQLOperation.java to support a separate preparation step. 4. Adds a new OperationState called PREPARED to indicate a prepared operation. 5. Refactors HiveStatement.java/HiveQueryResult to support retrieving the schema from HiveStatement.java. 6. Changes the HiveJdbc class to support PreparedStatement.getMetadata. Also includes a new (basic) unit test for PreparedStatement.getMetadata. A test for executeStatement will be added later after the initial code review. I would like to know if my approach looks OK, or whether reviewers would prefer a separate PrepareStatement API like ExecuteStatement. Implement PreparedStatement.getMetaData(), getParmeterMetaData() Key: HIVE-6992 URL: https://issues.apache.org/jira/browse/HIVE-6992 Project: Hive Issue Type: Bug Components: JDBC Reporter: Bill Oliver It would be very helpful to have methods PreparedStatement.getMetaData() and also PreparedStatement.getParameterMetaData() implemented. I especially would like PreparedStatement.getMetaData() implemented, as I could prepare a SQL statement, and then get information about the result set, as well as information that the query is valid. I am pretty sure this information is available in some form. When you do an EXPLAIN query, the explain operation shows information about the result set including the column name/aliases and the column types. 
thank you -bill -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9171) After use init file in beeline,the consoleReader is setted null.
[ https://issues.apache.org/jira/browse/HIVE-9171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259811#comment-14259811 ] Ferdinand Xu commented on HIVE-9171: Hi [~wh831019], can you provide your test.sql file? I cannot reproduce this bug. Thank you! After use init file in beeline,the consoleReader is setted null. Key: HIVE-9171 URL: https://issues.apache.org/jira/browse/HIVE-9171 Project: Hive Issue Type: Bug Components: Beeline Affects Versions: 0.14.0 Reporter: Wang Hao Assignee: Ferdinand Xu When I use an init file in Beeline, it throws an exception: ./beeline -i init.sql -f /tmp/test.sql --verbose 0: jdbc:hive2://hadoop015.dx.momo.com:1 SELECT Error: Error while compiling statement: FAILED: ParseException line 1:6 cannot recognize input near 'EOF' 'EOF' 'EOF' in select clause (state=42000,code=4) org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:6 cannot recognize input near 'EOF' 'EOF' 'EOF' in select clause at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:231) at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:217) at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:254) at org.apache.hive.beeline.Commands.execute(Commands.java:784) at org.apache.hive.beeline.Commands.sql(Commands.java:665) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:933) at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:796) at org.apache.hive.beeline.BeeLine.executeFile(BeeLine.java:781) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:726) at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:465) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:451) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:6 cannot recognize input near 'EOF' 'EOF' 'EOF' in select clause at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:314) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:102) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:171) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:256) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:376) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:363) at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:79) at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:37) at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:64) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:536) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:60) at com.sun.proxy.$Proxy14.executeStatementAsync(Unknown Source) at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:247) at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:401) at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313) at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56) at
[jira] [Updated] (HIVE-6992) Implement PreparedStatement.getMetaData(), getParmeterMetaData()
[ https://issues.apache.org/jira/browse/HIVE-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prafulla T updated HIVE-6992: - Attachment: HIVE-6992.1.patch Initial patch Implement PreparedStatement.getMetaData(), getParmeterMetaData() Key: HIVE-6992 URL: https://issues.apache.org/jira/browse/HIVE-6992 Project: Hive Issue Type: Bug Components: JDBC Reporter: Bill Oliver Attachments: HIVE-6992.1.patch It would be very helpful to have methods PreparedStatement.getMetaData() and also PreparedStatement.getParameterMetaData() implemented. I especially would like PreparedStatement.getMetaData() implemented, as I could prepare a SQL statement, and then get information about the result set, as well as information that the query is valid. I am pretty sure this information is available in some form. When you do an EXPLAIN query, the explain operation shows information about the result set including the column name/aliases and the column types. thank you -bill -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-6992) Implement PreparedStatement.getMetaData(), getParmeterMetaData()
[ https://issues.apache.org/jira/browse/HIVE-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259813#comment-14259813 ] Prafulla T commented on HIVE-6992: -- This patch is also available for review on Review Board at the following link: https://reviews.apache.org/r/29448/ Implement PreparedStatement.getMetaData(), getParmeterMetaData() Key: HIVE-6992 URL: https://issues.apache.org/jira/browse/HIVE-6992 Project: Hive Issue Type: Bug Components: JDBC Reporter: Bill Oliver Attachments: HIVE-6992.1.patch It would be very helpful to have methods PreparedStatement.getMetaData() and also PreparedStatement.getParameterMetaData() implemented. I especially would like PreparedStatement.getMetaData() implemented, as I could prepare a SQL statement, and then get information about the result set, as well as information that the query is valid. I am pretty sure this information is available in some form. When you do an EXPLAIN query, the explain operation shows information about the result set including the column name/aliases and the column types. thank you -bill -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9157) Merge from trunk to spark 12/26/2014 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259821#comment-14259821 ] Hive QA commented on HIVE-9157: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12689279/HIVE-9157.1-spark.patch.txt {color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 7281 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multi_insert_mixed org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parallel_join0 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parallel_join1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_cast_constant org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_cast_constant org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_admin_almighty1 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_6 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_8 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_pushdown org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_windowing {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/593/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/593/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-593/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 20 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12689279 - PreCommit-HIVE-SPARK-Build Merge from trunk to spark 12/26/2014 [Spark Branch] --- Key: HIVE-9157 URL: https://issues.apache.org/jira/browse/HIVE-9157 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Fix For: spark-branch Attachments: HIVE-9157.1-spark.patch.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9157) Merge from trunk to spark 12/26/2014 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-9157: --- Attachment: HIVE-9157.1-spark.patch.txt Merge from trunk to spark 12/26/2014 [Spark Branch] --- Key: HIVE-9157 URL: https://issues.apache.org/jira/browse/HIVE-9157 Project: Hive Issue Type: Sub-task Components: Spark Affects Versions: spark-branch Reporter: Brock Noland Assignee: Brock Noland Fix For: spark-branch Attachments: HIVE-9157.1-spark.patch.txt, HIVE-9157.1-spark.patch.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9206) Fix Desc Formatted related Java 8 ordering differences
[ https://issues.apache.org/jira/browse/HIVE-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259833#comment-14259833 ] Brock Noland commented on HIVE-9206: Sorry [~mohitsabharwal] looks like this got out of date. Could you rebase and I will commit right away. Fix Desc Formatted related Java 8 ordering differences -- Key: HIVE-9206 URL: https://issues.apache.org/jira/browse/HIVE-9206 Project: Hive Issue Type: Sub-task Components: Tests Reporter: Mohit Sabharwal Assignee: Mohit Sabharwal Attachments: HIVE-9206.1.patch, HIVE-9206.patch This patch fixes the following tests for Java 8: (1) list_bucket_dml_*.q {{DESC FORMATTED}} calls {{StorageDescriptor.getSkewedInfo()}} HMS API, which returns a thrift (unordered) map. Generate java version specific out file for these tests. (2) partitions_json.q {{SHOW PARTITIONS}} uses {{MapBuilder}} via {{JsonMetaDataFormatter}} which uses {{HashMap}}. Changed it to ordered map. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
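The ordering pitfall behind both fixes above can be shown directly: HashMap iteration order is unspecified (and its internal layout changed between JDK 7 and JDK 8, which is why the .q.out files diverge), while an insertion-ordered map such as LinkedHashMap yields a deterministic key order regardless of JVM. A minimal illustration:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapOrdering {
    // Inserts the given keys into the map, then joins the iterated key set,
    // exposing whatever iteration order the map implementation produces.
    static String iterationOrder(Map<String, String> m, String... keys) {
        for (String k : keys) {
            m.put(k, "v");
        }
        return String.join(",", m.keySet());
    }

    public static void main(String[] args) {
        // LinkedHashMap guarantees insertion order; HashMap does not, and its
        // unspecified order is what shifted between Java 7 and Java 8.
        System.out.println(iterationOrder(new LinkedHashMap<>(), "name", "type", "comment"));
        // prints name,type,comment on every JVM; the HashMap line may differ per JDK:
        System.out.println(iterationOrder(new HashMap<>(), "name", "type", "comment"));
    }
}
```

This is why the patch takes two different tactics: the thrift map's order can't be controlled, so those tests get per-Java-version out files, whereas MapBuilder's map could simply be switched to an ordered implementation.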
[jira] [Commented] (HIVE-9215) Some mapjoin queries broken with IdentityProjectRemover with PPD
[ https://issues.apache.org/jira/browse/HIVE-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259834#comment-14259834 ] Hive QA commented on HIVE-9215: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12689276/HIVE-9215.1.patch.txt {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6722 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_join4 org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2203/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2203/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2203/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12689276 - PreCommit-HIVE-TRUNK-Build Some mapjoin queries broken with IdentityProjectRemover with PPD Key: HIVE-9215 URL: https://issues.apache.org/jira/browse/HIVE-9215 Project: Hive Issue Type: Bug Components: Logical Optimizer Affects Versions: 0.15.0 Reporter: Szehon Ho Assignee: Navis Attachments: HIVE-9215.1.patch.txt, auto_join_ppd.q IdentityProjectRemover (hive.optimize.remove.identity.project) with PPD will sometimes make mapjoin query that returns the wrong result in MR case as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9220) HIVE-9109 missed updating result of list_bucket_dml_10
Navis created HIVE-9220: --- Summary: HIVE-9109 missed updating result of list_bucket_dml_10 Key: HIVE-9220 URL: https://issues.apache.org/jira/browse/HIVE-9220 Project: Hive Issue Type: Sub-task Reporter: Navis Assignee: Navis Priority: Trivial list_bucket_dml_10.q.java1.7.out is missing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9220) HIVE-9109 missed updating result of list_bucket_dml_10
[ https://issues.apache.org/jira/browse/HIVE-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-9220: Status: Patch Available (was: Open) HIVE-9109 missed updating result of list_bucket_dml_10 -- Key: HIVE-9220 URL: https://issues.apache.org/jira/browse/HIVE-9220 Project: Hive Issue Type: Sub-task Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-9109.1.patch.txt list_bucket_dml_10.q.java1.7.out is missing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9220) HIVE-9109 missed updating result of list_bucket_dml_10
[ https://issues.apache.org/jira/browse/HIVE-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-9220: Attachment: HIVE-9109.1.patch.txt HIVE-9109 missed updating result of list_bucket_dml_10 -- Key: HIVE-9220 URL: https://issues.apache.org/jira/browse/HIVE-9220 Project: Hive Issue Type: Sub-task Reporter: Navis Assignee: Navis Priority: Trivial Attachments: HIVE-9109.1.patch.txt list_bucket_dml_10.q.java1.7.out is missing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9195) CBO changes constant to column type
[ https://issues.apache.org/jira/browse/HIVE-9195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259836#comment-14259836 ] Navis commented on HIVE-9195: - The failure of list_bucket_dml_10 is due to HIVE-9109, which is fixed in HIVE-9220. CBO changes constant to column type --- Key: HIVE-9195 URL: https://issues.apache.org/jira/browse/HIVE-9195 Project: Hive Issue Type: Bug Components: CBO Reporter: Navis Attachments: HIVE-9195.1.patch.txt, HIVE-9195.2.patch.txt, HIVE-9195.3.patch.txt While making a test case for HIVE-8613, I found that CBO changes a constant expr to a column expr, for example (only in test mode): {code} CREATE TABLE bucket (key double, value string) CLUSTERED BY (key) SORTED BY (key DESC) INTO 4 BUCKETS STORED AS TEXTFILE; load data local inpath '../../data/files/srcsortbucket1outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket2outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket3outof4.txt' INTO TABLE bucket; load data local inpath '../../data/files/srcsortbucket4outof4.txt' INTO TABLE bucket; select percentile_approx(case when key < 100 then cast('NaN' as double) else key end, 0.5) from bucket; {code} It works in the shell, but in TestCliDriver it induces an argument-type exception when creating the UDAF evaluator, which expects a constant OI for the second argument. {noformat} 2014-12-22 17:03:31,433 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:analyzeInternal(10102)) - CBO failed, skipping CBO. org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException: The second argument must be a constant, but double was passed instead. 
at org.apache.hadoop.hive.ql.udf.generic.GenericUDAFPercentileApprox.getEvaluator(GenericUDAFPercentileApprox.java:146) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getGenericUDAFEvaluator(FunctionRegistry.java:1160) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getGenericUDAFEvaluator(SemanticAnalyzer.java:3794) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapGroupByOperator(SemanticAnalyzer.java:4467) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggrNoSkew(SemanticAnalyzer.java:5536) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8884) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9745) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9638) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10086) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:419) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:305) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1107) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1155) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1034) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:206) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:158) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:369) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:304) at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:877) at 
org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:136) at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_udaf_percentile_approx_23(TestCliDriver.java:120) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
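The constant-OI requirement that fails in the trace above can be sketched in isolation. The classes below are hypothetical stand-ins, not the real Hive ObjectInspector hierarchy: they only show the shape of the check in GenericUDAFPercentileApprox.getEvaluator, which accepts the percentile argument only when it arrives wrapped in a constant object inspector, and rejects it once the CBO rewrite has turned the literal 0.5 into a plain double column.

```java
// Hypothetical miniature of the evaluator's argument check (not Hive's classes).
public class ConstantOiCheck {
    interface ObjectInspector { String getTypeName(); }

    // Marker for inspectors that carry a compile-time constant value.
    interface ConstantObjectInspector extends ObjectInspector {
        Object getWritableConstantValue();
    }

    // A non-constant double column, as produced after the CBO rewrite.
    static class DoubleColumnOI implements ObjectInspector {
        public String getTypeName() { return "double"; }
    }

    // A constant double, as the evaluator expects for the percentile argument.
    static class ConstantDoubleOI implements ConstantObjectInspector {
        private final double value;
        ConstantDoubleOI(double value) { this.value = value; }
        public String getTypeName() { return "double"; }
        public Object getWritableConstantValue() { return value; }
    }

    // Mirrors the failing check: throws unless the second argument is constant.
    static void checkSecondArgument(ObjectInspector oi) {
        if (!(oi instanceof ConstantObjectInspector)) {
            throw new IllegalArgumentException(
                "The second argument must be a constant, but " + oi.getTypeName()
                + " was passed instead.");
        }
    }
}
```

In the shell the literal 0.5 reaches the evaluator as a constant inspector, so the check passes; under TestCliDriver the rewritten plan hands over a column inspector and the check throws, matching the UDFArgumentTypeException in the log.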
[jira] [Updated] (HIVE-9215) Some mapjoin queries broken with IdentityProjectRemover with PPD
[ https://issues.apache.org/jira/browse/HIVE-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-9215: Attachment: HIVE-9215.2.patch.txt Updated ppd_join4.q Some mapjoin queries broken with IdentityProjectRemover with PPD Key: HIVE-9215 URL: https://issues.apache.org/jira/browse/HIVE-9215 Project: Hive Issue Type: Bug Components: Logical Optimizer Affects Versions: 0.15.0 Reporter: Szehon Ho Assignee: Navis Attachments: HIVE-9215.1.patch.txt, HIVE-9215.2.patch.txt, auto_join_ppd.q IdentityProjectRemover (hive.optimize.remove.identity.project) with PPD will sometimes make mapjoin query that returns the wrong result in MR case as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 23351: Support direct fetch for lateral views, sub queries, etc.
On Nov. 3, 2014, 10:28 p.m., John Pullokkaran wrote: ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java, line 162 https://reviews.apache.org/r/23351/diff/1/?file=626500#file626500line162 Can't we use ParseContext.topToTable to get to Table given a TS object instead of walking the QB tree? Good point! Sorry for missing this comment. I'll update the patch shortly after. - Navis --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23351/#review59662 --- On July 9, 2014, 6:55 a.m., Navis Ryu wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/23351/ --- (Updated July 9, 2014, 6:55 a.m.) Review request for hive. Bugs: HIVE-5718 https://issues.apache.org/jira/browse/HIVE-5718 Repository: hive-git Description --- Extend HIVE-2925 with LV and SubQ. Diffs - ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java 5d41fa1 ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java 7413d2b ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java 908db1e ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java 911ac8a ql/src/java/org/apache/hadoop/hive/ql/plan/FetchWork.java 32d84ea ql/src/test/queries/clientpositive/nonmr_fetch.q 2a92d17 ql/src/test/queries/clientpositive/nonmr_fetch_threshold.q e6343e2 ql/src/test/results/clientpositive/explain_logical.q.out bb26e8c ql/src/test/results/clientpositive/lateral_view_noalias.q.out d51b2de ql/src/test/results/clientpositive/nonmr_fetch.q.out 5a13e84 ql/src/test/results/clientpositive/nonmr_fetch_threshold.q.out 39cdfa6 ql/src/test/results/clientpositive/select_dummy_source.q.out 2742d56 ql/src/test/results/clientpositive/subquery_alias.q.out 37bc3a4 ql/src/test/results/clientpositive/udf_explode.q.out 4eeedeb ql/src/test/results/clientpositive/udf_inline.q.out e065bed ql/src/test/results/clientpositive/udf_reflect2.q.out 6b19277 ql/src/test/results/clientpositive/udf_to_unix_timestamp.q.out 447ef87 
ql/src/test/results/clientpositive/udtf_explode.q.out ae95907 Diff: https://reviews.apache.org/r/23351/diff/ Testing --- Thanks, Navis Ryu
[jira] [Updated] (HIVE-5718) Support direct fetch for lateral views, sub queries, etc.
[ https://issues.apache.org/jira/browse/HIVE-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Navis updated HIVE-5718: Attachment: HIVE-5718.14.patch.txt Support direct fetch for lateral views, sub queries, etc. - Key: HIVE-5718 URL: https://issues.apache.org/jira/browse/HIVE-5718 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Navis Assignee: Navis Priority: Trivial Attachments: D13857.1.patch, D13857.2.patch, D13857.3.patch, HIVE-5718.10.patch.txt, HIVE-5718.11.patch.txt, HIVE-5718.12.patch.txt, HIVE-5718.13.patch.txt, HIVE-5718.14.patch.txt, HIVE-5718.4.patch.txt, HIVE-5718.5.patch.txt, HIVE-5718.6.patch.txt, HIVE-5718.7.patch.txt, HIVE-5718.8.patch.txt, HIVE-5718.9.patch.txt, HIVE-5718.diff-v11-v12.patch Extend HIVE-2925 with LV and SubQ. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9153) Evaluate CombineHiveInputFormat versus HiveInputFormat
[ https://issues.apache.org/jira/browse/HIVE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14259849#comment-14259849 ] Xuefu Zhang commented on HIVE-9153: --- +1 Evaluate CombineHiveInputFormat versus HiveInputFormat -- Key: HIVE-9153 URL: https://issues.apache.org/jira/browse/HIVE-9153 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland Assignee: Rui Li Attachments: HIVE-9153.1-spark.patch, HIVE-9153.1-spark.patch, HIVE-9153.2.patch, HIVE-9153.3.patch, screenshot.PNG The default InputFormat is {{CombineHiveInputFormat}} and thus HOS uses this. However, Tez uses {{HiveInputFormat}}. Since tasks are relatively cheap in Spark, it might make sense for us to use {{HiveInputFormat}} as well. We should evaluate this on a query which has many input splits such as {{select count(\*) from store_sales where something is not null}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9216) Avoid redundant clone of JobConf [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-9216: -- Resolution: Fixed Fix Version/s: spark-branch Status: Resolved (was: Patch Available) Committed to Spark branch. Thanks, Rui. Avoid redundant clone of JobConf [Spark Branch] --- Key: HIVE-9216 URL: https://issues.apache.org/jira/browse/HIVE-9216 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Rui Li Assignee: Rui Li Priority: Minor Fix For: spark-branch Attachments: HIVE-9216.1-spark.patch Currently in SparkPlanGenerator, we clone job conf twice for each MapWork. Should avoid this as cloning job conf involves writing to HDFS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9157) Merge from trunk to spark 12/26/2014 [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259856#comment-14259856 ]

Hive QA commented on HIVE-9157:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689286/HIVE-9157.1-spark.patch.txt

{color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 7281 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_multi_insert_mixed
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parallel_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_parallel_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_admin_almighty1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_pushdown
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_windowing
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/594/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/594/console
Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-594/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 22 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689286 - PreCommit-HIVE-SPARK-Build

Merge from trunk to spark 12/26/2014 [Spark Branch]
---------------------------------------------------

                 Key: HIVE-9157
                 URL: https://issues.apache.org/jira/browse/HIVE-9157
             Project: Hive
          Issue Type: Sub-task
          Components: Spark
    Affects Versions: spark-branch
            Reporter: Brock Noland
            Assignee: Brock Noland
             Fix For: spark-branch
         Attachments: HIVE-9157.1-spark.patch.txt, HIVE-9157.1-spark.patch.txt

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HIVE-6992) Implement PreparedStatement.getMetaData(), getParmeterMetaData()
[ https://issues.apache.org/jira/browse/HIVE-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259857#comment-14259857 ]

Hive QA commented on HIVE-6992:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689283/HIVE-6992.1.patch

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 6722 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hive.hcatalog.streaming.TestStreaming.testTransactionBatchEmptyCommit
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testConnection
org.apache.hive.jdbc.miniHS2.TestHiveServer2.testGetVariableValue
org.apache.hive.jdbc.miniHS2.TestHiveServer2SessionTimeout.testConnection
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testConfOverlay
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatement
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementAsync
org.apache.hive.service.cli.operation.TestOperationLoggingAPI.testFetchResultsOfLog
org.apache.hive.service.cli.operation.TestOperationLoggingAPI.testFetchResultsOfLogAsync
org.apache.hive.service.cli.operation.TestOperationLoggingAPI.testFetchResultsOfLogCleanup
org.apache.hive.service.cli.operation.TestOperationLoggingAPI.testFetchResultsOfLogWithOrientation
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitDir
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitFile
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitFileAndConfOverlay
org.apache.hive.service.cli.session.TestSessionGlobalInitFile.testSessionGlobalInitFileWithUser
org.apache.hive.service.cli.thrift.TestThriftBinaryCLIService.testExecuteStatement
org.apache.hive.service.cli.thrift.TestThriftBinaryCLIService.testExecuteStatementAsync
org.apache.hive.service.cli.thrift.TestThriftHttpCLIService.testExecuteStatement
org.apache.hive.service.cli.thrift.TestThriftHttpCLIService.testExecuteStatementAsync
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2204/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2204/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2204/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689283 - PreCommit-HIVE-TRUNK-Build

Implement PreparedStatement.getMetaData(), getParmeterMetaData()
----------------------------------------------------------------

                 Key: HIVE-6992
                 URL: https://issues.apache.org/jira/browse/HIVE-6992
             Project: Hive
          Issue Type: Bug
          Components: JDBC
            Reporter: Bill Oliver
         Attachments: HIVE-6992.1.patch

It would be very helpful to have the methods PreparedStatement.getMetaData() and PreparedStatement.getParameterMetaData() implemented. I would especially like PreparedStatement.getMetaData() implemented, as I could prepare a SQL statement and then get information about the result set, as well as confirmation that the query is valid. I am pretty sure this information is available in some form. When you do an EXPLAIN query, the explain operation shows information about the result set, including the column names/aliases and the column types.

thank you -bill

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
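The use case in HIVE-6992 is inspecting a query's result shape before executing it. A minimal sketch of that consumer pattern, under the assumption that PreparedStatement.getMetaData() is implemented per the JDBC contract; since the feature does not yet exist in Hive, a java.lang.reflect.Proxy stub stands in for the ResultSetMetaData a real driver would return, and the column names and types below are invented for illustration:

```java
import java.lang.reflect.Proxy;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class PreparedMetaDataSketch {

    // Consumer side: what a JDBC client could do with the ResultSetMetaData
    // returned by PreparedStatement.getMetaData(), before fetching any rows.
    static String describeColumns(ResultSetMetaData md) throws SQLException {
        StringBuilder sb = new StringBuilder();
        for (int i = 1; i <= md.getColumnCount(); i++) {
            if (i > 1) sb.append(", ");
            sb.append(md.getColumnLabel(i)).append(':').append(md.getColumnTypeName(i));
        }
        return sb.toString();
    }

    // Hypothetical stand-in for what Hive's implementation would return,
    // used only so the consumer above runs without a live HiveServer2.
    static ResultSetMetaData stubMetaData() {
        return (ResultSetMetaData) Proxy.newProxyInstance(
                ResultSetMetaData.class.getClassLoader(),
                new Class<?>[]{ResultSetMetaData.class},
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "getColumnCount":    return 2;
                        case "getColumnLabel":    return ((int) args[0] == 1) ? "id" : "name";
                        case "getColumnTypeName": return ((int) args[0] == 1) ? "int" : "string";
                        default: throw new SQLException("not stubbed: " + method.getName());
                    }
                });
    }

    public static void main(String[] args) throws SQLException {
        // prints: id:int, name:string
        System.out.println(describeColumns(stubMetaData()));
    }
}
```

With a real implementation, the same describeColumns() call would follow `conn.prepareStatement(sql).getMetaData()`, giving exactly the column name/type information the reporter notes is already surfaced by EXPLAIN.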
[jira] [Assigned] (HIVE-8410) Typo in DOAP - incorrect category URL
[ https://issues.apache.org/jira/browse/HIVE-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ferdinand Xu reassigned HIVE-8410:
----------------------------------

Assignee: Ferdinand Xu

Typo in DOAP - incorrect category URL
-------------------------------------

                 Key: HIVE-8410
                 URL: https://issues.apache.org/jira/browse/HIVE-8410
             Project: Hive
          Issue Type: Bug
         Environment: http://svn.apache.org/repos/asf/hive/trunk/doap_Hive.rdf
            Reporter: Sebb
            Assignee: Ferdinand Xu

The DOAP contains the following:
{code}
<category rdf:resource="http://www.apache.org/category/database" />
{code}
However, the URL is incorrect; it must be
{code}
<category rdf:resource="http://projects.apache.org/category/database" />
{code}
Please fix this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HIVE-8410) Typo in DOAP - incorrect category URL
[ https://issues.apache.org/jira/browse/HIVE-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ferdinand Xu updated HIVE-8410:
-------------------------------

Attachment: HIVE-8410.patch

See http://projects.apache.org/categories.html

Typo in DOAP - incorrect category URL
-------------------------------------

                 Key: HIVE-8410
                 URL: https://issues.apache.org/jira/browse/HIVE-8410
             Project: Hive
          Issue Type: Bug
         Environment: http://svn.apache.org/repos/asf/hive/trunk/doap_Hive.rdf
            Reporter: Sebb
            Assignee: Ferdinand Xu
         Attachments: HIVE-8410.patch

The DOAP contains the following:
{code}
<category rdf:resource="http://www.apache.org/category/database" />
{code}
However, the URL is incorrect; it must be
{code}
<category rdf:resource="http://projects.apache.org/category/database" />
{code}
Please fix this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (HIVE-8410) Typo in DOAP - incorrect category URL
[ https://issues.apache.org/jira/browse/HIVE-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ferdinand Xu updated HIVE-8410:
-------------------------------

Status: Patch Available (was: Open)

Typo in DOAP - incorrect category URL
-------------------------------------

                 Key: HIVE-8410
                 URL: https://issues.apache.org/jira/browse/HIVE-8410
             Project: Hive
          Issue Type: Bug
         Environment: http://svn.apache.org/repos/asf/hive/trunk/doap_Hive.rdf
            Reporter: Sebb
            Assignee: Ferdinand Xu
         Attachments: HIVE-8410.patch

The DOAP contains the following:
{code}
<category rdf:resource="http://www.apache.org/category/database" />
{code}
However, the URL is incorrect; it must be
{code}
<category rdf:resource="http://projects.apache.org/category/database" />
{code}
Please fix this.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HIVE-9220) HIVE-9109 missed updating result of list_bucket_dml_10
[ https://issues.apache.org/jira/browse/HIVE-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259904#comment-14259904 ]

Hive QA commented on HIVE-9220:
-------------------------------

{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12689287/HIVE-9109.1.patch.txt

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 6722 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_optimize_nullscan
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2205/testReport
Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2205/console
Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-2205/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12689287 - PreCommit-HIVE-TRUNK-Build

HIVE-9109 missed updating result of list_bucket_dml_10
------------------------------------------------------

                 Key: HIVE-9220
                 URL: https://issues.apache.org/jira/browse/HIVE-9220
             Project: Hive
          Issue Type: Sub-task
            Reporter: Navis
            Assignee: Navis
            Priority: Trivial
         Attachments: HIVE-9109.1.patch.txt

list_bucket_dml_10.q.java1.7.out is missing.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)