[jira] [Commented] (HIVE-6617) Reduce ambiguity in grammar
[ https://issues.apache.org/jira/browse/HIVE-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351760#comment-14351760 ]

Pengcheng Xiong commented on HIVE-6617:
---------------------------------------

[~ashutoshc], I just noticed that CURRENT_DATE and CURRENT_TIMESTAMP are reserved keywords in SQL:2011 too. I need to add them.

> Reduce ambiguity in grammar
> ---------------------------
>
>                 Key: HIVE-6617
>                 URL: https://issues.apache.org/jira/browse/HIVE-6617
>             Project: Hive
>          Issue Type: Task
>            Reporter: Ashutosh Chauhan
>            Assignee: Pengcheng Xiong
>         Attachments: HIVE-6617.01.patch, HIVE-6617.02.patch, HIVE-6617.03.patch, HIVE-6617.04.patch, HIVE-6617.05.patch, HIVE-6617.06.patch, HIVE-6617.07.patch, HIVE-6617.08.patch, HIVE-6617.09.patch, HIVE-6617.10.patch, HIVE-6617.11.patch, HIVE-6617.12.patch, HIVE-6617.13.patch, HIVE-6617.14.patch, HIVE-6617.15.patch, HIVE-6617.16.patch, HIVE-6617.17.patch, HIVE-6617.18.patch, HIVE-6617.19.patch, HIVE-6617.20.patch, HIVE-6617.21.patch, HIVE-6617.22.patch, HIVE-6617.23.patch, parser.png
>
> CLEAR LIBRARY CACHE
> As of today, antlr reports 214 warnings. Need to bring this number down, ideally to 0.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
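To illustrate the kind of ambiguity that reserving these keywords resolves (a hypothetical HiveQL example, not taken from the attached patches): when CURRENT_DATE is an unreserved identifier, the parser cannot decide whether it denotes the SQL:2011 niladic function or a column of the same name.

{code}
-- Hypothetical example: with CURRENT_DATE unreserved, this token could be
-- either the niladic date function or a column named current_date,
-- so the grammar is ambiguous.
SELECT CURRENT_DATE FROM t;

-- Once CURRENT_DATE is reserved, a column with that name must be quoted
-- with backticks to be used as an identifier:
SELECT `current_date` FROM t;
{code}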
[jira] [Commented] (HIVE-9851) org.apache.hadoop.hive.serde2.avro.AvroSerializer should use org.apache.avro.generic.GenericData.Array when serializing a list
[ https://issues.apache.org/jira/browse/HIVE-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351478#comment-14351478 ]

Mark Wagner commented on HIVE-9851:
-----------------------------------

Sorry, I just realized that this is limited to the serialization side, so maybe my previous comment isn't as applicable.

> org.apache.hadoop.hive.serde2.avro.AvroSerializer should use org.apache.avro.generic.GenericData.Array when serializing a list
> ------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-9851
>                 URL: https://issues.apache.org/jira/browse/HIVE-9851
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, Serializers/Deserializers
>            Reporter: Ratandeep Ratti
>         Attachments: HIVE-9851.patch
>
> Currently AvroSerializer uses java.util.ArrayList for serializing a list in Hive. This causes problems when we need to convert the Avro object into some other representation, say a tuple in Pig.
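A minimal sketch of the proposed direction (assuming Avro on the classpath; the class and helper names below are illustrative, not taken from the attached HIVE-9851.patch): build the list value as an org.apache.avro.generic.GenericData.Array bound to the list's Avro schema, rather than a plain java.util.ArrayList, so that schema-driven consumers such as Pig see a proper Avro array that carries its schema with it.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;

// Illustrative sketch, not the attached patch: wrap a list value in a
// schema-aware GenericData.Array instead of returning a raw ArrayList.
public class AvroArraySketch {

    static <T> GenericData.Array<T> toAvroArray(Schema arraySchema, List<T> values) {
        // GenericData.Array keeps a reference to its Avro schema, which is
        // what downstream schema-driven consumers (e.g. Pig) rely on.
        GenericData.Array<T> out = new GenericData.Array<>(values.size(), arraySchema);
        out.addAll(values);
        return out;
    }

    public static void main(String[] args) {
        Schema arraySchema = Schema.createArray(Schema.create(Schema.Type.INT));
        GenericData.Array<Integer> arr = toAvroArray(arraySchema, Arrays.asList(1, 2, 3));
        // The schema travels with the value, unlike with java.util.ArrayList:
        System.out.println(arr.getSchema().getType());
        System.out.println(arr);
    }
}
```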
[jira] [Commented] (HIVE-9302) Beeline add commands to register local jdbc driver names and jars
[ https://issues.apache.org/jira/browse/HIVE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351657#comment-14351657 ]

Xuefu Zhang commented on HIVE-9302:
-----------------------------------

[~Ferd], I think I didn't check in the jar files. Could you please specify which jar(s) you need and the locations? Thanks.

> Beeline add commands to register local jdbc driver names and jars
> -----------------------------------------------------------------
>
>                 Key: HIVE-9302
>                 URL: https://issues.apache.org/jira/browse/HIVE-9302
>             Project: Hive
>          Issue Type: New Feature
>            Reporter: Brock Noland
>            Assignee: Ferdinand Xu
>              Labels: TODOC1.2
>             Fix For: 1.2.0
>
>         Attachments: DummyDriver-1.0-SNAPSHOT.jar, HIVE-9302.1.patch, HIVE-9302.2.patch, HIVE-9302.3.patch, HIVE-9302.3.patch, HIVE-9302.4.patch, HIVE-9302.patch, mysql-connector-java-bin.jar, postgresql-9.3.jdbc3.jar
>
> At present, if a Beeline user uses {{add jar}}, the path they give is actually on the HS2 server. It'd be great to allow Beeline users to add local JDBC driver jars and register custom JDBC driver names.
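A sketch of the intended client-side usage (the {{!addlocaldriverjar}} and {{!addlocaldrivername}} command names below are assumptions based on the patch discussion and should be checked against the committed patch and the Beeline documentation):

{code}
$ beeline
# Register a driver jar that lives on the *local* client machine,
# not on the HS2 server as with "add jar":
beeline> !addlocaldriverjar /path/to/postgresql-9.3.jdbc3.jar
# Register a custom driver class name so Beeline can match it to a JDBC URL:
beeline> !addlocaldrivername org.postgresql.Driver
# Then connect as usual:
beeline> !connect jdbc:postgresql://localhost:5432/mydb
{code}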
[jira] [Commented] (HIVE-9851) org.apache.hadoop.hive.serde2.avro.AvroSerializer should use org.apache.avro.generic.GenericData.Array when serializing a list
[ https://issues.apache.org/jira/browse/HIVE-9851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351477#comment-14351477 ]

Mark Wagner commented on HIVE-9851:
-----------------------------------

[~rdsr], I have a rebased and updated version of HIVE-4734 which I was preparing to post. If this is also going to be an issue for records, byte arrays, fixeds, enums, etc., then we may just want to wait for that, which will pass the full Avro record all the way through the serializer. I should have some time to finish testing and post that next week.

> org.apache.hadoop.hive.serde2.avro.AvroSerializer should use org.apache.avro.generic.GenericData.Array when serializing a list
> ------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-9851
>                 URL: https://issues.apache.org/jira/browse/HIVE-9851
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, Serializers/Deserializers
>            Reporter: Ratandeep Ratti
>         Attachments: HIVE-9851.patch
>
> Currently AvroSerializer uses java.util.ArrayList for serializing a list in Hive. This causes problems when we need to convert the Avro object into some other representation, say a tuple in Pig.
[jira] [Commented] (HIVE-9302) Beeline add commands to register local jdbc driver names and jars
[ https://issues.apache.org/jira/browse/HIVE-9302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351774#comment-14351774 ]

Xuefu Zhang commented on HIVE-9302:
-----------------------------------

These two jar files have been added to the trunk.

> Beeline add commands to register local jdbc driver names and jars
> -----------------------------------------------------------------
>
>                 Key: HIVE-9302
>                 URL: https://issues.apache.org/jira/browse/HIVE-9302
>             Project: Hive
>          Issue Type: New Feature
>            Reporter: Brock Noland
>            Assignee: Ferdinand Xu
>              Labels: TODOC1.2
>             Fix For: 1.2.0
>
>         Attachments: DummyDriver-1.0-SNAPSHOT.jar, HIVE-9302.1.patch, HIVE-9302.2.patch, HIVE-9302.3.patch, HIVE-9302.3.patch, HIVE-9302.4.patch, HIVE-9302.patch, mysql-connector-java-bin.jar, postgresql-9.3.jdbc3.jar
>
> At present, if a Beeline user uses {{add jar}}, the path they give is actually on the HS2 server. It'd be great to allow Beeline users to add local JDBC driver jars and register custom JDBC driver names.
[jira] [Commented] (HIVE-9886) Hive on tez: NPE when converting join to SMB in sub-query
[ https://issues.apache.org/jira/browse/HIVE-9886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351829#comment-14351829 ]

Vikram Dixit K commented on HIVE-9886:
--------------------------------------

Looks like test results are not getting posted: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/2973/testReport/

> Hive on tez: NPE when converting join to SMB in sub-query
> ---------------------------------------------------------
>
>                 Key: HIVE-9886
>                 URL: https://issues.apache.org/jira/browse/HIVE-9886
>             Project: Hive
>          Issue Type: Bug
>          Components: Tez
>    Affects Versions: 1.0.0, 1.1.0
>            Reporter: Vikram Dixit K
>            Assignee: Vikram Dixit K
>            Priority: Critical
>         Attachments: HIVE-9886.1.patch, HIVE-9886.2.patch, HIVE-9886.3.patch, HIVE-9886.4.patch, HIVE-9886.5.patch, HIVE-9886.6.patch
>
> {code}
> set hive.auto.convert.sortmerge.join = true;
> create table t1( id string, od string);
> create table t2( id string, od string);
> select vt1.id from
>   (select rt1.id from
>     (select t1.id, row_number() over (partition by id order by od desc) as row_no from t1) rt1
>    where rt1.row_no=1) vt1
> join
>   (select rt2.id from
>     (select t2.id, row_number() over (partition by id order by od desc) as row_no from t2) rt2
>    where rt2.row_no=1) vt2
> where vt1.id=vt2.id;
> {code}
> throws NPE:
> {code}
> at org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.init(ReduceRecordProcessor.java:146)
> at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:162)
> at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:138)
> at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:176)
> at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:168)
> at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:163)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.NullPointerException
> at org.apache.hadoop.hive.ql.exec.AbstractMapJoinOperator.getValueObjectInspectors(AbstractMapJoinOperator.java:96)
> at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.getJoinOutputObjectInspector(CommonJoinOperator.java:167)
> at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.initializeOp(CommonJoinOperator.java:310)
> at org.apache.hadoop.hive.ql.exec.AbstractMapJoinOperator.initializeOp(AbstractMapJoinOperator.java:72)
> at org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.initializeOp(CommonMergeJoinOperator.java:89)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
> at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
> at org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:65)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
> at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
> at org.apache.hadoop.hive.ql.exec.FilterOperator.initializeOp(FilterOperator.java:66)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
> at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
> at org.apache.hadoop.hive.ql.exec.Operator.initializeOp(Operator.java:410)
> at org.apache.hadoop.hive.ql.exec.PTFOperator.initializeOp(PTFOperator.java:89)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:469)
> at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:425)
> at org.apache.hadoop.hive.ql.exec.ExtractOperator.initializeOp(ExtractOperator.java:40)
> at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:385)
> at
> {code}