[jira] [Resolved] (HIVE-5223) explain doesn't show serde used for table

2013-09-25 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-5223.


   Resolution: Fixed
Fix Version/s: 0.13.0

Committed to trunk. Thanks, Thejas, for the review!

 explain doesn't show serde used for table
 -

 Key: HIVE-5223
 URL: https://issues.apache.org/jira/browse/HIVE-5223
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.13.0

 Attachments: HIVE-5223.1.patch, HIVE-5223.2.patch, HIVE-5223.3.patch, 
 HIVE-5223.patch
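 Since the fix only ships in 0.13.0, a workaround sketch for older builds
 (using the stock {{src}} sample table as a stand-in for any table):
 {code}
 -- DESCRIBE FORMATTED already reports the "SerDe Library" for a table,
 -- even on builds where EXPLAIN does not show the serde.
 DESCRIBE FORMATTED src;
 {code}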




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-5357) ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr scenario when there are distinct keys in child GBY

2013-09-25 Thread Chun Chen (JIRA)
Chun Chen created HIVE-5357:
---

 Summary: ReduceSinkDeDuplication optimizer pick the wrong keys in 
pRS-cGBYm-cRS-cGBYr scenario when there are distinct keys in child GBY
 Key: HIVE-5357
 URL: https://issues.apache.org/jira/browse/HIVE-5357
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Chun Chen
Assignee: Chun Chen
 Fix For: 0.12.0


{code}
select key, count(distinct value) from (select key, value from src group by 
key, value) t group by key;

//result
0 0 NULL
10  10  NULL
100 100 NULL
103 103 NULL
104 104 NULL
{code}

Obviously the result is wrong.
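A possible stopgap while the patch is under review is to disable the optimizer
for such queries; a sketch, assuming the standard
{{hive.optimize.reducededuplication}} property controls this optimization:

{code}
-- Workaround sketch: with ReduceSinkDeDuplication off, the inner and outer
-- GROUP BY run as separate stages and count(distinct value) is computed
-- correctly.
set hive.optimize.reducededuplication=false;
select key, count(distinct value) from (select key, value from src group by
key, value) t group by key;
{code}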





[jira] [Updated] (HIVE-5357) ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr scenario when there are distinct keys in child GBY

2013-09-25 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated HIVE-5357:


Attachment: HIVE-5357.patch

 ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr 
 scenario when there are distinct keys in child GBY
 ---

 Key: HIVE-5357
 URL: https://issues.apache.org/jira/browse/HIVE-5357
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Chun Chen
Assignee: Chun Chen
 Fix For: 0.12.0

 Attachments: HIVE-5357.patch


 {code}
 select key, count(distinct value) from (select key, value from src group by 
 key, value) t group by key;
 //result
 0 0 NULL
 10  10  NULL
 100 100 NULL
 103 103 NULL
 104 104 NULL
 {code}
 Obviously the result is wrong.



[jira] [Updated] (HIVE-5357) ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr scenario when there are distinct keys in child GBY

2013-09-25 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated HIVE-5357:


Release Note: ReduceSinkDeDuplication optimizer pick the wrong keys in 
pRS-cGBYm-cRS-cGBYr scenario when there are distinct keys in child GBY
  Status: Patch Available  (was: Open)

 ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr 
 scenario when there are distinct keys in child GBY
 ---

 Key: HIVE-5357
 URL: https://issues.apache.org/jira/browse/HIVE-5357
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Chun Chen
Assignee: Chun Chen
 Fix For: 0.12.0

 Attachments: HIVE-5357.patch


 {code}
 select key, count(distinct value) from (select key, value from src group by 
 key, value) t group by key;
 //result
 0 0 NULL
 10  10  NULL
 100 100 NULL
 103 103 NULL
 104 104 NULL
 {code}
 Obviously the result is wrong.



[jira] [Commented] (HIVE-5357) ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr scenario when there are distinct keys in child GBY

2013-09-25 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777203#comment-13777203
 ] 

Thejas M Nair commented on HIVE-5357:
-

[~ashutoshc] [~hagleitn] [~navis] Can one of you please review this patch?
This is something I would like to include in 0.12.


 ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr 
 scenario when there are distinct keys in child GBY
 ---

 Key: HIVE-5357
 URL: https://issues.apache.org/jira/browse/HIVE-5357
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Chun Chen
Assignee: Chun Chen
 Fix For: 0.12.0

 Attachments: HIVE-5357.patch


 {code}
 select key, count(distinct value) from (select key, value from src group by 
 key, value) t group by key;
 //result
 0 0 NULL
 10  10  NULL
 100 100 NULL
 103 103 NULL
 104 104 NULL
 {code}
 Obviously the result is wrong.



[jira] [Resolved] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc

2013-09-25 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-5279.


   Resolution: Fixed
Fix Version/s: 0.13.0

Committed to trunk. Thanks, Navis!

 Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
 ---

 Key: HIVE-5279
 URL: https://issues.apache.org/jira/browse/HIVE-5279
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Critical
 Fix For: 0.13.0

 Attachments: 5279.patch, D12963.1.patch, D12963.2.patch, 
 D12963.3.patch, D12963.4.patch, D12963.5.patch


 We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the
 previous serialization mechanism handled this, but Kryo complains that it's
 not Serializable and fails the query.
 The log below is an example:
 {noformat}
 java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class 
 cannot be created (missing no-arg constructor): 
 org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector
 Serialization trace:
 inputOI 
 (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval)
 genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc)
 aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc)
 conf (org.apache.hadoop.hive.ql.exec.GroupByOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
   at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312)
   at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383)
   at org.apache.h
 {noformat}
 If this cannot be fixed somehow, some UDAFs will need to be modified to run
 on hive-0.13.0.



[jira] [Updated] (HIVE-5341) Link doesn't work. Needs to be updated as mentioned in the Description

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5341:


Priority: Major  (was: Blocker)

 Link doesn't work. Needs to be updated as mentioned in the Description
 --

 Key: HIVE-5341
 URL: https://issues.apache.org/jira/browse/HIVE-5341
 Project: Hive
  Issue Type: Bug
  Components: Documentation
Reporter: Rakesh Chouhan
Assignee: Lefty Leverenz
  Labels: documentation

 Go to the Apache Hive Getting Started documentation:
 https://cwiki.apache.org/confluence/display/Hive/GettingStarted
 Under the section:
 Simple Example Use Cases
 MovieLens User Ratings
 wget http://www.grouplens.org/system/files/ml-data.tar+0.gz
 The link given in the document does not work. It needs to be updated to the
 URL below:
 http://www.grouplens.org/sites/www.grouplens.org/external_files/data/ml-data.tar.gz
 I am setting this defect's priority to Blocker because users will not be able
 to continue their hands-on exercises unless they find the correct URL to
 download the mentioned file.
 Referenced from:
 http://mail-archives.apache.org/mod_mbox/hive-user/201302.mbox/%3c8a0c145b-4db9-4d26-8613-8ca1bd741...@daum.net%3E.



[jira] [Resolved] (HIVE-5341) Link doesn't work. Needs to be updated as mentioned in the Description

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair resolved HIVE-5341.
-

Resolution: Fixed

Thanks, Lefty, for fixing the doc.
[~chouhan], please feel free to edit the wiki directly. If you don't have write 
access, you can create an account on the wiki and send a request to the 
hive-dev mailing list.


 Link doesn't work. Needs to be updated as mentioned in the Description
 --

 Key: HIVE-5341
 URL: https://issues.apache.org/jira/browse/HIVE-5341
 Project: Hive
  Issue Type: Bug
  Components: Documentation
Reporter: Rakesh Chouhan
Assignee: Lefty Leverenz
  Labels: documentation

 Go to the Apache Hive Getting Started documentation:
 https://cwiki.apache.org/confluence/display/Hive/GettingStarted
 Under the section:
 Simple Example Use Cases
 MovieLens User Ratings
 wget http://www.grouplens.org/system/files/ml-data.tar+0.gz
 The link given in the document does not work. It needs to be updated to the
 URL below:
 http://www.grouplens.org/sites/www.grouplens.org/external_files/data/ml-data.tar.gz
 I am setting this defect's priority to Blocker because users will not be able
 to continue their hands-on exercises unless they find the correct URL to
 download the mentioned file.
 Referenced from:
 http://mail-archives.apache.org/mod_mbox/hive-user/201302.mbox/%3c8a0c145b-4db9-4d26-8613-8ca1bd741...@daum.net%3E.



[jira] [Commented] (HIVE-5357) ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr scenario when there are distinct keys in child GBY

2013-09-25 Thread Chun Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777213#comment-13777213
 ] 

Chun Chen commented on HIVE-5357:
-

Review request: https://reviews.facebook.net/D13089

 ReduceSinkDeDuplication optimizer pick the wrong keys in pRS-cGBYm-cRS-cGBYr 
 scenario when there are distinct keys in child GBY
 ---

 Key: HIVE-5357
 URL: https://issues.apache.org/jira/browse/HIVE-5357
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Chun Chen
Assignee: Chun Chen
 Fix For: 0.12.0

 Attachments: HIVE-5357.patch


 {code}
 select key, count(distinct value) from (select key, value from src group by 
 key, value) t group by key;
 //result
 0 0 NULL
 10  10  NULL
 100 100 NULL
 103 103 NULL
 104 104 NULL
 {code}
 Obviously the result is wrong.



[jira] [Commented] (HIVE-5235) Infinite loop with ORC file and Hive 0.11

2013-09-25 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777215#comment-13777215
 ] 

Thejas M Nair commented on HIVE-5235:
-

Pere, we would like to get this fixed for the 0.12 release if possible. Can you 
please provide any additional information you have for Owen?


 Infinite loop with ORC file and Hive 0.11
 -

 Key: HIVE-5235
 URL: https://issues.apache.org/jira/browse/HIVE-5235
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
 Environment: Gentoo linux with Hortonworks Hadoop 
 hadoop-1.1.2.23.tar.gz and Apache Hive 0.11d
Reporter: Iván de Prado
Priority: Blocker

 We are using Hive 0.11 with the ORC file format and some tasks get stuck in
 some kind of infinite loop. They keep running indefinitely when we set a huge
 task expiry timeout. If we set the expiry time to 600 seconds, the tasks fail
 for not reporting progress, and finally the job fails.
 The behavior is not consistent and sometimes changes between job executions.
 It happens for different queries.
 We are using Hive 0.11 with Hadoop hadoop-1.1.2.23 from Hortonworks. The
 blocked task keeps consuming 100% CPU, and the stack trace is always the
 same. Everything points to some kind of infinite loop. My guess is that it is
 related to the ORC file: maybe a pointer is written incorrectly, producing an
 infinite loop when reading, or maybe there is a bug in the reading stage.
 More information below. The stack trace:
 {noformat} 
 main prio=10 tid=0x7f20a000a800 nid=0x1ed2 runnable [0x7f20a8136000]
java.lang.Thread.State: RUNNABLE
   at java.util.zip.Inflater.inflateBytes(Native Method)
   at java.util.zip.Inflater.inflate(Inflater.java:256)
   - locked 0xf42a6ca0 (a java.util.zip.ZStreamRef)
   at 
 org.apache.hadoop.hive.ql.io.orc.ZlibCodec.decompress(ZlibCodec.java:64)
   at 
 org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:128)
   at 
 org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:143)
   at 
 org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readVulong(SerializationUtils.java:54)
   at 
 org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readVslong(SerializationUtils.java:65)
   at 
 org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReader.readValues(RunLengthIntegerReader.java:66)
   at 
 org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReader.next(RunLengthIntegerReader.java:81)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$IntTreeReader.next(RecordReaderImpl.java:332)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:802)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1214)
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:71)
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:46)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:300)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:218)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:236)
   - eliminated 0xe1459700 (a 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)
   - locked 0xe1459700 (a 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1178)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 {noformat} 
 We have seen the same stack trace repeatedly for several 

[jira] [Resolved] (HIVE-4891) Distinct includes duplicate records

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair resolved HIVE-4891.
-

Resolution: Cannot Reproduce

This could not be reproduced with a more recent Hive. Marking it as Cannot 
Reproduce.

Fengdong, please let us know if you feel that anything is missing in the 
steps Harish followed, or if you are able to reproduce the issue with the 
hive 0.12 branch or trunk.


 Distinct includes duplicate records
 ---

 Key: HIVE-4891
 URL: https://issues.apache.org/jira/browse/HIVE-4891
 Project: Hive
  Issue Type: Bug
  Components: File Formats, HiveServer2, Query Processor
Affects Versions: 0.10.0
Reporter: Fengdong Yu
Priority: Blocker
 Fix For: 0.12.0


 I have two partitions: one is a sequence file, the other is an RCFile, but
 they contain the same data (only the file format differs).
 I have the following SQL:
 {code}
 select distinct uid from test where (dt ='20130718' or dt ='20130718_1') and 
 cur_url like '%cq.aa.com%';
 {code}
 dt ='20130718' is a sequence file (the default input format, specified when
 the table was created).
 dt ='20130718_1' is an RCFile.
 {code}
 ALTER TABLE test ADD IF NOT EXISTS PARTITION (dt='20130718_1') LOCATION 
 '/user/test/test-data'
 ALTER TABLE test PARTITION(dt='20130718_1') SET FILEFORMAT RCFILE;
 {code}
 but there are duplicate records in the result.
 If the two partitions have the same input format, there are no duplicate
 records.



[jira] [Commented] (HIVE-5301) Add a schema tool for offline metastore schema upgrade

2013-09-25 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777221#comment-13777221
 ] 

Hive QA commented on HIVE-5301:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12604890/HIVE-5301.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 3161 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.listener.TestNotificationListener.testAMQListener
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/879/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/879/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

 Add a schema tool for offline metastore schema upgrade
 --

 Key: HIVE-5301
 URL: https://issues.apache.org/jira/browse/HIVE-5301
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.12.0

 Attachments: HIVE-5301.1.patch, HIVE-5301.3.patch, HIVE-5301.3.patch, 
 HIVE-5301-with-HIVE-3764.0.patch


 HIVE-3764 is addressing metastore version consistency.
 In addition, it would be helpful to add a tool that can leverage this version
 information to figure out the required set of upgrade scripts and execute
 them against the configured metastore. Now that Hive includes the Beeline
 client, it can be used to execute the scripts.
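 Such a tool might be invoked along these lines (the command name and options
 here are purely illustrative assumptions, not a committed interface):
 {noformat}
 # Illustrative sketch only.
 bin/schematool -dbType mysql -info
 bin/schematool -dbType mysql -upgradeSchemaFrom 0.10.0 -dryRun
 bin/schematool -dbType mysql -upgradeSchemaFrom 0.10.0
 {noformat}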



[jira] [Updated] (HIVE-5207) Support data encryption for Hive tables

2013-09-25 Thread Jerry Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Chen updated HIVE-5207:
-

Attachment: HIVE-5207.patch

Attaching a patch for reference. It depends on the Hadoop crypto feature.

 Support data encryption for Hive tables
 ---

 Key: HIVE-5207
 URL: https://issues.apache.org/jira/browse/HIVE-5207
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.12.0
Reporter: Jerry Chen
  Labels: Rhino
 Attachments: HIVE-5207.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 For sensitive and legally protected data such as personal information, it is
 common practice to store the data encrypted in the file system. Enabling Hive
 to store and query encrypted data is crucial for Hive data analysis in the
 enterprise.
  
 When creating a table, the user can specify whether it is encrypted via a
 property in TBLPROPERTIES. Once an encrypted table is created, querying it is
 transparent as long as the corresponding key management facilities are set up
 in the query's runtime environment. We can use the Hadoop crypto support
 provided by HADOOP-9331 for the underlying data encryption and decryption.
  
 As for key management, we would support several common use cases. First, the
 table key (data key) can be stored in the Hive metastore in properties
 associated with the table. The table key can be explicitly specified or
 auto-generated, and will be encrypted with a master key. In cases where the
 data being processed is generated by other applications, we need to support
 externally managed or imported table keys. Also, since the data generated by
 Hive may be consumed by other applications in the system, we need a tool or
 command for exporting the table key to a Java keystore for external use.
  
 To handle versions of Hadoop that do not have crypto support, we can avoid
 compilation problems by segregating crypto API usage into separate files
 (shims) to be included only if a flag is defined on the Ant command line
 (something like -Dcrypto=true).
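 The TBLPROPERTIES-based design described above might look roughly like this;
 the property names are illustrative placeholders, not part of any committed
 API:
 {code}
 -- Hypothetical sketch only: 'hive.encrypt' and 'hive.encrypt.keyname'
 -- are placeholder property names.
 CREATE TABLE customer_pii (id BIGINT, ssn STRING)
 TBLPROPERTIES (
   'hive.encrypt'='true',
   'hive.encrypt.keyname'='pii.table.key'
 );
 {code}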



[jira] [Created] (HIVE-5358) ReduceSinkDeDuplication should ignore column orders when check overlapping part of keys between parent and child

2013-09-25 Thread Chun Chen (JIRA)
Chun Chen created HIVE-5358:
---

 Summary: ReduceSinkDeDuplication should ignore column orders when 
check overlapping part of keys between parent and child
 Key: HIVE-5358
 URL: https://issues.apache.org/jira/browse/HIVE-5358
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Chun Chen
Assignee: Chun Chen


{code}
select key, value from (select key, value from src group by key, value) t group 
by key, value;
{code}
This can be optimized by ReduceSinkDeDuplication

{code}
select key, value from (select key, value from src group by key, value) t group 
by value, key;
{code}
However, the SQL above currently can't be optimized by ReduceSinkDeDuplication 
due to the different column orders of the parent and child operators.




[jira] [Updated] (HIVE-5301) Add a schema tool for offline metastore schema upgrade

2013-09-25 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5301:
---

   Resolution: Fixed
Fix Version/s: (was: 0.12.0)
   0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Prasad!

 Add a schema tool for offline metastore schema upgrade
 --

 Key: HIVE-5301
 URL: https://issues.apache.org/jira/browse/HIVE-5301
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-5301.1.patch, HIVE-5301.3.patch, HIVE-5301.3.patch, 
 HIVE-5301-with-HIVE-3764.0.patch


 HIVE-3764 is addressing metastore version consistency.
 In addition, it would be helpful to add a tool that can leverage this version
 information to figure out the required set of upgrade scripts and execute
 them against the configured metastore. Now that Hive includes the Beeline
 client, it can be used to execute the scripts.



[jira] [Commented] (HIVE-5283) Merge vectorization branch to trunk

2013-09-25 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777252#comment-13777252
 ] 

Hive QA commented on HIVE-5283:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12604927/HIVE-5283.3.patch

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/884/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/884/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-884/source-prep.txt
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java'
Reverted 
'service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf build hcatalog/build hcatalog/core/build 
hcatalog/storage-handlers/hbase/build hcatalog/server-extensions/build 
hcatalog/webhcat/svr/build hcatalog/webhcat/java-client/build 
hcatalog/hcatalog-pig-adapter/build common/src/gen
+ svn update
A    beeline/src/test/org/apache/hive/beeline/src/test/TestSchemaTool.java
U    beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java
A    beeline/src/java/org/apache/hive/beeline/HiveSchemaTool.java
A    beeline/src/java/org/apache/hive/beeline/HiveSchemaHelper.java
U    beeline/src/java/org/apache/hive/beeline/Commands.java
U    beeline/src/java/org/apache/hive/beeline/BeeLine.java
U    build.xml
U    metastore/scripts/upgrade/derby/014-HIVE-3764.derby.sql
U    metastore/scripts/upgrade/mysql/014-HIVE-3764.mysql.sql
U    metastore/scripts/upgrade/oracle/014-HIVE-3764.oracle.sql
U    metastore/scripts/upgrade/postgres/014-HIVE-3764.postgres.sql
A    bin/schematool
A    bin/ext/schemaTool.sh

Fetching external item into 'hcatalog/src/test/e2e/harness'
Updated external to revision 1526125.

Updated to revision 1526125.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0 to p2
+ exit 1
'
{noformat}

This message is automatically generated.

 Merge vectorization branch to trunk
 ---

 Key: HIVE-5283
 URL: https://issues.apache.org/jira/browse/HIVE-5283
 Project: Hive
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-5283.1.patch, HIVE-5283.2.patch, HIVE-5283.3.patch


 The purpose of this jira is to upload the vectorization patch, run tests,
 etc. The actual work will continue under the HIVE-4160 umbrella jira.



[jira] [Commented] (HIVE-4629) HS2 should support an API to retrieve query logs

2013-09-25 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777270#comment-13777270
 ] 

Hive QA commented on HIVE-4629:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12604928/HIVE-4629.1.patch

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/886/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/886/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-886/source-prep.txt
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/udf/UDFToInteger.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/udf/UDFToLong.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/udf/UDFToByte.java'
Reverted 'ql/src/java/org/apache/hadoop/hive/ql/udf/UDFToShort.java'
++ egrep -v '^X|^Performing status on external'
++ awk '{print $2}'
++ svn status --no-ignore
+ rm -rf build hcatalog/build hcatalog/core/build 
hcatalog/storage-handlers/hbase/build hcatalog/server-extensions/build 
hcatalog/webhcat/svr/build hcatalog/webhcat/java-client/build 
hcatalog/hcatalog-pig-adapter/build common/src/gen 
ql/src/test/results/clientpositive/cast_to_int.q.out 
ql/src/test/queries/clientpositive/cast_to_int.q
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1526135.

At revision 1526135.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0 to p2
+ exit 1
'
{noformat}

This message is automatically generated.

 HS2 should support an API to retrieve query logs
 

 Key: HIVE-4629
 URL: https://issues.apache.org/jira/browse/HIVE-4629
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Reporter: Shreepadma Venugopalan
Assignee: Shreepadma Venugopalan
 Attachments: HIVE-4629.1.patch, HIVE-4629-no_thrift.1.patch


 HiveServer2 should support an API to retrieve query logs. This is 
 particularly relevant because HiveServer2 supports async execution but 
 doesn't provide a way to report progress. Providing an API to retrieve query 
 logs will help report progress to the client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2843) UDAF to convert an aggregation to a map

2013-09-25 Thread Nepomuk Seiler (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777277#comment-13777277
 ] 

Nepomuk Seiler commented on HIVE-2843:
--

Hi guys,

what is the status of this?

cheers,
Muki

 UDAF to convert an aggregation to a map
 ---

 Key: HIVE-2843
 URL: https://issues.apache.org/jira/browse/HIVE-2843
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.9.0, 0.10.0
Reporter: David Worms
Priority: Minor
  Labels: features, udf
 Attachments: HIVE-2843.1.patch.txt, HIVE-2843.D8745.1.patch, 
 hive-2843-dev.git.patch


 I propose the addition of two new Hive UDAFs to help with maps in Apache Hive. 
 The source code is available on GitHub at https://github.com/wdavidw/hive-udf 
 in two Java classes: UDAFToMap and UDAFToOrderedMap. The first function 
 converts an aggregation into a map and internally uses a Java `HashMap`. 
 The second function extends the first one; it converts an aggregation into an 
 ordered map and internally uses a Java `TreeMap`. Both extend the 
 `AbstractGenericUDAFResolver` class.
 Also, I have covered the motivations and usages of those UDAF in a blog post 
 at http://adaltas.com/blog/2012/03/06/hive-udaf-map-conversion/
 The full patch is available with tests as well.
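The HashMap-vs-TreeMap distinction the description draws can be sketched in plain Java. This is only an illustration of the two collection behaviors behind UDAFToMap and UDAFToOrderedMap, not the actual UDAF code from the linked repository:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Illustrates why UDAFToMap vs. UDAFToOrderedMap differ: the backing
// collection decides whether the aggregated map has a defined key order.
public class MapAggSketch {
    // Aggregate parallel key/value arrays into an unordered map (HashMap).
    public static Map<String, Integer> toMap(String[] keys, int[] values) {
        Map<String, Integer> m = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            m.put(keys[i], values[i]); // later duplicates overwrite earlier ones
        }
        return m;
    }

    // Same aggregation, but into a TreeMap so keys iterate in sorted order.
    public static Map<String, Integer> toOrderedMap(String[] keys, int[] values) {
        Map<String, Integer> m = new TreeMap<>();
        for (int i = 0; i < keys.length; i++) {
            m.put(keys[i], values[i]);
        }
        return m;
    }

    public static void main(String[] args) {
        String[] k = {"b", "a", "c"};
        int[] v = {2, 1, 3};
        System.out.println(toOrderedMap(k, v)); // prints {a=1, b=2, c=3}
    }
}
```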

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4763) add support for thrift over http transport in HS2

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-4763:
---

Status: Open  (was: Patch Available)

 add support for thrift over http transport in HS2
 -

 Key: HIVE-4763
 URL: https://issues.apache.org/jira/browse/HIVE-4763
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: HIVE-4763.1.patch, HIVE-4763.2.patch, 
 HIVE-4763.D12855.1.patch, HIVE-4763.D12951.1.patch


 Subtask for adding support for http transport mode for thrift api in hive 
 server2.
 Support for the different authentication modes will be part of another 
 subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4763) add support for thrift over http transport in HS2

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-4763:
---

Attachment: HIVE-4763.D12951.2.patch

 add support for thrift over http transport in HS2
 -

 Key: HIVE-4763
 URL: https://issues.apache.org/jira/browse/HIVE-4763
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: HIVE-4763.1.patch, HIVE-4763.2.patch, 
 HIVE-4763.D12855.1.patch, HIVE-4763.D12951.1.patch, HIVE-4763.D12951.2.patch


 Subtask for adding support for http transport mode for thrift api in hive 
 server2.
 Support for the different authentication modes will be part of another 
 subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4763) add support for thrift over http transport in HS2

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-4763:
---

Status: Patch Available  (was: Open)

 add support for thrift over http transport in HS2
 -

 Key: HIVE-4763
 URL: https://issues.apache.org/jira/browse/HIVE-4763
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: HIVE-4763.1.patch, HIVE-4763.2.patch, 
 HIVE-4763.D12855.1.patch, HIVE-4763.D12951.1.patch, HIVE-4763.D12951.2.patch


 Subtask for adding support for http transport mode for thrift api in hive 
 server2.
 Support for the different authentication modes will be part of another 
 subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5295:
---

Status: Patch Available  (was: Open)

 HiveConnection#configureConnection tries to execute statement even after it 
 is closed
 -

 Key: HIVE-5295
 URL: https://issues.apache.org/jira/browse/HIVE-5295
 Project: Hive
  Issue Type: Bug
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D12957.1.patch, D12957.2.patch, D12957.3.patch, 
 HIVE-5295.D12957.3.patch


 HiveConnection#configureConnection tries to execute statement even after it 
 is closed. For remote JDBC client, it tries to set the conf var using 'set 
 foo=bar' by calling HiveStatement.execute for each conf var pair, but closes 
 the statement after the 1st iteration through the conf var pairs.
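The corrected control flow is to send every `set key=value` through one statement and close it only after the loop. A minimal self-contained simulation of that flow follows; `FakeStatement` is hypothetical scaffolding so the behavior is testable, not Hive's actual JDBC class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simulates the fixed configureConnection flow: execute one "set key=value"
// per conf-var pair, and close the statement exactly once, after ALL pairs
// have been sent (the reported bug closed it after the first iteration).
public class ConfigureConnectionSketch {
    // Hypothetical stand-in for a JDBC Statement.
    static class FakeStatement {
        int executed = 0;
        boolean closed = false;
        void execute(String sql) {
            if (closed) throw new IllegalStateException("statement already closed");
            executed++;
        }
        void close() { closed = true; }
    }

    static FakeStatement applyConfVars(Map<String, String> confVars) {
        FakeStatement stmt = new FakeStatement();
        try {
            for (Map.Entry<String, String> e : confVars.entrySet()) {
                stmt.execute("set " + e.getKey() + "=" + e.getValue());
            }
        } finally {
            stmt.close(); // close once, after the whole loop
        }
        return stmt;
    }

    public static void main(String[] args) {
        Map<String, String> vars = new LinkedHashMap<>();
        vars.put("hive.exec.parallel", "true");
        vars.put("mapred.reduce.tasks", "4");
        FakeStatement s = applyConfVars(vars);
        System.out.println(s.executed + " executed, closed=" + s.closed);
    }
}
```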

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5295:
---

Attachment: HIVE-5295.D12957.3.patch

 HiveConnection#configureConnection tries to execute statement even after it 
 is closed
 -

 Key: HIVE-5295
 URL: https://issues.apache.org/jira/browse/HIVE-5295
 Project: Hive
  Issue Type: Bug
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D12957.1.patch, D12957.2.patch, D12957.3.patch, 
 HIVE-5295.D12957.3.patch


 HiveConnection#configureConnection tries to execute statement even after it 
 is closed. For remote JDBC client, it tries to set the conf var using 'set 
 foo=bar' by calling HiveStatement.execute for each conf var pair, but closes 
 the statement after the 1st iteration through the conf var pairs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5295:
---

Status: Open  (was: Patch Available)

 HiveConnection#configureConnection tries to execute statement even after it 
 is closed
 -

 Key: HIVE-5295
 URL: https://issues.apache.org/jira/browse/HIVE-5295
 Project: Hive
  Issue Type: Bug
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D12957.1.patch, D12957.2.patch, D12957.3.patch, 
 HIVE-5295.D12957.3.patch


 HiveConnection#configureConnection tries to execute statement even after it 
 is closed. For remote JDBC client, it tries to set the conf var using 'set 
 foo=bar' by calling HiveStatement.execute for each conf var pair, but closes 
 the statement after the 1st iteration through the conf var pairs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

2013-09-25 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-5296:
-

Attachment: HIVE-5296.patch

I've attached a modified patch.

 Memory leak: OOM Error after multiple open/closed JDBC connections. 
 

 Key: HIVE-5296
 URL: https://issues.apache.org/jira/browse/HIVE-5296
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0
 Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
Reporter: Douglas
  Labels: hiveserver
 Fix For: 0.12.0

 Attachments: HIVE-5296.patch, HIVE-5296.patch, HIVE-5296.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481
 However, on inspection of the related patch and my built version of Hive 
 (patch carried forward to 0.12.0), I am still seeing the described behaviour.
 Multiple connections to Hiveserver2, all of which are closed and disposed of 
 properly show the Java heap size to grow extremely quickly. 
 This issue can be recreated using the following code
 {code}
 import java.sql.DriverManager;
 import java.sql.Connection;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 import org.apache.hive.service.cli.HiveSQLException;
 import org.apache.log4j.Logger;
 /*
  * Class which encapsulates the lifecycle of a query or statement.
  * Provides functionality which allows you to create a connection
  */
 public class HiveClient {
 
     Connection con;
     Logger logger;
     private static String driverName = "org.apache.hive.jdbc.HiveDriver";
     private String db;
 
     public HiveClient(String db)
     {
         logger = Logger.getLogger(HiveClient.class);
         this.db = db;
 
         try {
             Class.forName(driverName);
         } catch (ClassNotFoundException e) {
             logger.info("Can't find Hive driver");
         }
 
         String hiveHost = GlimmerServer.config.getString("hive/host");
         String hivePort = GlimmerServer.config.getString("hive/port");
         String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort + "/default";
         logger.info(String.format("Attempting to connect to %s", connectionString));
         try {
             con = DriverManager.getConnection(connectionString, "", "");
         } catch (Exception e) {
             logger.error("Problem instantiating the connection " + e.getMessage());
         }
     }
 
     public int update(String query)
     {
         Integer res = 0;
         Statement stmt = null;
         try {
             stmt = con.createStatement();
             String switchdb = "USE " + db;
             logger.info(switchdb);
             stmt.executeUpdate(switchdb);
             logger.info(query);
             res = stmt.executeUpdate(query);
             logger.info("Query passed to server");
             stmt.close();
         } catch (HiveSQLException e) {
             logger.info(String.format("HiveSQLException thrown, this can be valid, " +
                     "but check the error: %s from the query %s", e.toString(), query));
         } catch (SQLException e) {
             logger.error(String.format("Unable to execute query %s. SQLException. Error: %s", query, e));
         } catch (Exception e) {
             logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
         }
 
         if (stmt != null) {
             try {
                 stmt.close();
             } catch (SQLException e) {
                 logger.error("Cannot close the statement, potential memory leak " + e);
             }
         }
 
         return res;
     }
 
     public void close()
     {
         if (con != null) {
             try {
                 con.close();
             } catch (SQLException e) {
                 logger.info("Problem closing connection " + e);
             }
         }
     }
 }
 {code}
 And by creating and closing many HiveClient objects. The heap space used by 
 the 

Re: Review Request 14298: Memory leak when using JDBC connections.

2013-09-25 Thread Kousuke Saruta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14298/
---

(Updated Sept. 25, 2013, 9:09 a.m.)


Review request for hive.


Changes
---

I found that using FileSystem.closeAll is a bad idea, and the FileSystem$Cache 
problem will be addressed in HIVE-4501, so instead I address another problem: 
the operation handle is not released when an exception occurs while executing 
a query or command.


Bugs: HIVE-5296
https://issues.apache.org/jira/browse/HIVE-5296


Repository: hive-git


Description
---

HiveServer2 leaks memory through an ever-growing number of Hashtable$Entry 
objects in at least two situations:

1. When an exception is thrown while executing a command or query, the 
operation handle is never released.
2. HiveServer2 calls FileSystem#get but never calls FileSystem#close or 
FileSystem.closeAll, so FileSystem$Cache keeps growing.

I've modified HiveSessionImpl and HiveStatement so the operation handle is not 
lost; OperationManager needs the handle to remove the entry from 
handleToOperation. I've also modified HiveSessionImpl to close the FileSystem 
object at the end of the session.
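The handle-release part of the fix boils down to unregistering the handle even when execution throws. A self-contained sketch of that pattern follows; the real OperationManager API differs, and `handleToOperation` here is just an illustrative map:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the leak and its fix: every started operation is registered in a
// handle-to-operation map and must be unregistered even if execution fails;
// otherwise the map's entries accumulate without bound.
public class OperationHandleSketch {
    private final Map<UUID, String> handleToOperation = new HashMap<>();

    // Runs an operation; on failure the handle is removed before rethrowing,
    // so a thrown exception no longer strands an entry in the map.
    public UUID run(String query, boolean fail) {
        UUID handle = UUID.randomUUID();
        handleToOperation.put(handle, query);
        try {
            if (fail) throw new RuntimeException("query failed");
            return handle; // caller closes the handle later
        } catch (RuntimeException e) {
            handleToOperation.remove(handle); // the fix: don't lose it on failure
            throw e;
        }
    }

    public void close(UUID handle) { handleToOperation.remove(handle); }

    public int trackedHandles() { return handleToOperation.size(); }

    public static void main(String[] args) {
        OperationHandleSketch mgr = new OperationHandleSketch();
        try { mgr.run("select 1", true); } catch (RuntimeException ignored) { }
        System.out.println("handles still tracked: " + mgr.trackedHandles());
    }
}
```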


Diffs (updated)
-

  jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java 2912ece 
  service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java 
11c96b2 

Diff: https://reviews.apache.org/r/14298/diff/


Testing
---

Using jmap, I confirmed only that the number of Hashtable$Entry objects no longer increases.


Thanks,

Kousuke Saruta



[jira] [Commented] (HIVE-4822) implement vectorized math functions

2013-09-25 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777316#comment-13777316
 ] 

Hive QA commented on HIVE-4822:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12604935/HIVE-4822.7-vectorization.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4027 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestHCatExternalHCatNonPartitioned.testHCatNonPartitionedTable
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/887/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/887/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

 implement vectorized math functions
 ---

 Key: HIVE-4822
 URL: https://issues.apache.org/jira/browse/HIVE-4822
 Project: Hive
  Issue Type: Sub-task
Affects Versions: vectorization-branch
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: vectorization-branch

 Attachments: HIVE-4822.1.patch, HIVE-4822.4.patch, 
 HIVE-4822.5-vectorization.patch, HIVE-4822.6.patch.txt, 
 HIVE-4822.7-vectorization.patch


 Implement vectorized support for the all the built-in math functions. This 
 includes implementing the vectorized operation, and tying it all together in 
 VectorizationContext so it runs end-to-end. These functions include:
 round(Col)
 Round(Col, N)
 Floor(Col)
 Ceil(Col)
 Rand(), Rand(seed)
 Exp(Col)
 Ln(Col)
 Log10(Col)
 Log2(Col)
 Log(base, Col)
 Pow(col, p), Power(col, p)
 Sqrt(Col)
 Bin(Col)
 Hex(Col)
 Unhex(Col)
 Conv(Col, from_base, to_base)
 Abs(Col)
 Pmod(arg1, arg2)
 Sin(Col)
 Asin(Col)
 Cos(Col)
 ACos(Col)
 Atan(Col)
 Degrees(Col)
 Radians(Col)
 Positive(Col)
 Negative(Col)
 Sign(Col)
 E()
 Pi()
 To reduce the total code volume, do an implicit type cast from non-double 
 input types to double. 
 Also, POSITIVE and NEGATIVE are syntactic sugar for unary + and unary -, so 
 reuse code for those as appropriate.
 Try to call the function directly in the inner loop and avoid new() or 
 expensive operations, as appropriate.
 Templatize the code where appropriate, e.g. all the unary functions of the 
 form DOUBLE func(DOUBLE)
 can probably be done with a template.
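The "tight inner loop over a column of doubles" idea can be sketched without Hive's vectorization classes. The loop shape below is the part that templates across all DOUBLE func(DOUBLE) builtins; the selection-vector convention mirrors how vectorized row batches mark live rows, but the method signature is an illustrative simplification, not Hive's VectorExpression API:

```java
// Minimal sketch of a vectorized unary math function: instead of evaluating
// the UDF row by row through object inspectors, apply Math.sqrt directly in a
// tight loop over a primitive double array, honoring an optional selection
// vector that lists which row indices are live.
public class VectorSqrtSketch {
    public static void sqrtCol(double[] input, double[] output,
                               int[] selected, int size) {
        if (selected == null) {
            // No selection vector: process the first `size` entries densely.
            for (int i = 0; i < size; i++) {
                output[i] = Math.sqrt(input[i]);
            }
        } else {
            // Selection vector: only the listed row indices are live.
            for (int j = 0; j < size; j++) {
                int i = selected[j];
                output[i] = Math.sqrt(input[i]);
            }
        }
    }

    public static void main(String[] args) {
        double[] in = {1.0, 4.0, 9.0};
        double[] out = new double[3];
        sqrtCol(in, out, null, 3);
        System.out.println(out[0] + " " + out[1] + " " + out[2]); // 1.0 2.0 3.0
    }
}
```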

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5223) explain doesn't show serde used for table

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777401#comment-13777401
 ] 

Hudson commented on HIVE-5223:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5223 : explain doesn't show serde used for table (Ashutosh Chauhan via 
Thejas Nair) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526116)
* /hive/trunk/contrib/src/test/results/clientpositive/dboutput.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes2.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes3.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes4.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes5.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_avg.q.out
* 
/hive/trunk/contrib/src/test/results/clientpositive/udaf_example_group_concat.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_max.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_max_n.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_min.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_min_n.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udf_example_add.q.out
* 
/hive/trunk/contrib/src/test/results/clientpositive/udf_example_arraymapstruct.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udf_example_format.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udf_row_sequence.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/external_table_ppd.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_ppd_key_range.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_pushdown.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/ppd_key_ranges.q.out
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/PTFRowContainer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PTFDeserializer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableDesc.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/TestSerDe.java
* /hive/trunk/ql/src/test/results/clientnegative/bucket_mapjoin_mismatch1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/script_error.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/sortmerge_mapjoin_mismatch_1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_assert_true.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_assert_true2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alias_casted_column.q.out
* /hive/trunk/ql/src/test/results/clientpositive/allcolref_in_udf.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_partition_coltype.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ambiguous_col.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join0.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join10.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join15.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join16.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join18.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join21.q.out
* 

[jira] [Commented] (HIVE-5274) HCatalog package renaming backward compatibility follow-up

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777400#comment-13777400
 ] 

Hudson commented on HIVE-5274:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5274 : HCatalog package renaming backward compatibility follow-up 
(Sushanth Sowmyan) (khorgath: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526094)
* /hive/trunk/hcatalog/build-support/ant/checkstyle.xml
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/mapreduce/HCatStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseBaseOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseConstants.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseHCatStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseRevisionManagerUtil.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HbaseSnapshotRecordReader.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/ResultConverter.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseBulkOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseDirectOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseHCatStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHCatHBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHiveHBaseStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHiveHBaseTableOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestPigHBaseStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestSnapshots.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/snapshot/TestZNodeSetUp.java
* /hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/ManyMiniCluster.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/SkeletonHBaseTest.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHiveHBaseStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHiveHBaseTableOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestPigHBaseStorageHandler.java


 HCatalog package renaming backward compatibility follow-up
 --

 Key: HIVE-5274
 URL: https://issues.apache.org/jira/browse/HIVE-5274
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.12.0

 Attachments: HIVE-5274.2.patch, HIVE-5274.3.patch, HIVE-5274.4.patch


 As part of HIVE-4869, the hbase storage handler in hcat was moved to 
 org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it 
 was intended to be deprecated as well.
 However, it imports and uses several org.apache.hive.hcatalog classes. This 
 needs to be changed to use org.apache.hcatalog classes.
 ==
 Note : The above is a complete description of this issue in and of by itself, 
 the following is more detail on the backward-compatibility goal I have (not 
 saying that each of these things is violated): 
 a) People using org.apache.hcatalog packages should continue being able to 
 use that package, and see no difference at compile time or runtime. All code 
 here is considered deprecated, and will be gone by the time hive 0.14 rolls 
 around. Additionally, org.apache.hcatalog should behave as if it were 0.11 
 for all compatibility purposes.
 b) People using org.apache.hive.hcatalog packages should never have an 
 org.apache.hcatalog dependency injected in.
 Thus,
 It is okay for 

[jira] [Commented] (HIVE-5202) Support for SettableUnionObjectInspector and implement isSettable/hasAllFieldsSettable APIs for all data types.

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777399#comment-13777399
 ] 

Hudson commented on HIVE-5202:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5202 : Support for SettableUnionObjectInspector and implement 
isSettable/hasAllFieldsSettable APIs for all data types. (Hari Sankar via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525804)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomNonSettableUnionObjectInspector1.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe4.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe5.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_wise_fileformat18.q
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat18.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorConverters.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorUtils.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/SettableUnionObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/StandardUnionObjectInspector.java


 Support for SettableUnionObjectInspector and implement 
 isSettable/hasAllFieldsSettable APIs for all data types.
 ---

 Key: HIVE-5202
 URL: https://issues.apache.org/jira/browse/HIVE-5202
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Fix For: 0.13.0

 Attachments: HIVE-5202.2.patch.txt, HIVE-5202.patch


 These 3 tasks should be accomplished as part of the following jira:
 1. The current implementation lacks a settable union object inspector. We can 
 run into an exception inside ObjectInspectorConverters.getConvertedOI() if 
 there is a union.
 2. Implement the following public functions for all datatypes: 
 isSettable()- Perform shallow check to see if an object inspector is 
 inherited from settableOI type and 
 hasAllFieldsSettable() - Perform deep check to see if this objectInspector 
 and all the underlying object inspectors are inherited from settableOI type.
 3. ObjectInspectorConverters.getConvertedOI() is inefficient. Once (1) and 
 (2) are implemented, add the following check: outputOI.hasAllSettableFields() 
 should be added to return outputOI immediately if the object is entirely 
 settable in order to prevent redundant object instantiation.  
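The shallow vs. deep check in (2) can be sketched as a recursive walk. The `Node` type below is a hypothetical stand-in for an object inspector with child inspectors; it only illustrates why the deep check is what getConvertedOI() needs before it can skip conversion:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the two settability checks described above: isSettable() looks at
// one node only, while hasAllFieldsSettable() must recurse through every
// nested inspector before conversion can safely be skipped.
public class SettableCheckSketch {
    static class Node {
        final boolean settable;
        final List<Node> fields;
        Node(boolean settable, Node... fields) {
            this.settable = settable;
            this.fields = Arrays.asList(fields);
        }
    }

    // Shallow check: is this node itself a settable inspector?
    static boolean isSettable(Node n) { return n.settable; }

    // Deep check: is this node AND every nested field settable?
    static boolean hasAllFieldsSettable(Node n) {
        if (!n.settable) return false;
        for (Node f : n.fields) {
            if (!hasAllFieldsSettable(f)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // A settable struct containing one settable and one non-settable field:
        Node struct = new Node(true, new Node(true), new Node(false));
        System.out.println(isSettable(struct) + " " + hasAllFieldsSettable(struct));
        // prints: true false
    }
}
```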

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777395#comment-13777395
 ] 

Hudson commented on HIVE-5279:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5279 : Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc (Navis 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526117)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/UDAF.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/AggregationDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectList.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFMkCollectionEvaluator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSumList.java
* /hive/trunk/ql/src/test/queries/clientpositive/udaf_sum_list.q
* /hive/trunk/ql/src/test/results/clientpositive/udaf_sum_list.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml


 Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
 ---

 Key: HIVE-5279
 URL: https://issues.apache.org/jira/browse/HIVE-5279
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Critical
 Fix For: 0.13.0

 Attachments: 5279.patch, D12963.1.patch, D12963.2.patch, 
 D12963.3.patch, D12963.4.patch, D12963.5.patch


 We didn't force GenericUDAFEvaluator to be Serializable. I don't know how the 
 previous serialization mechanism handled this, but Kryo complains that it's 
 not Serializable and fails the query.
 The log below is an example: 
 {noformat}
 java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class 
 cannot be created (missing no-arg constructor): 
 org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector
 Serialization trace:
 inputOI 
 (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval)
 genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc)
 aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc)
 conf (org.apache.hadoop.hive.ql.exec.GroupByOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
   at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312)
   at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383)
   at org.apache.h
 {noformat}
 If this cannot be fixed somehow, some UDAFs will need to be modified to run 
 on hive-0.13.0
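 The root cause in the stack trace above ("Class cannot be created (missing 
 no-arg constructor)") can be reproduced with plain reflection, since Kryo's 
 default instantiation path needs a visible zero-argument constructor. The 
 classes below are illustrative stand-ins, not Hive or Kryo code:

```java
public class NoArgConstructorDemo {
    // Mimics an object inspector that only has a parameterized constructor,
    // like StandardListObjectInspector in the stack trace above.
    static class ListOI {
        final String elementType;
        ListOI(String elementType) { this.elementType = elementType; }
    }

    // Default Kryo instantiation fails for exactly the classes where this
    // reflective lookup fails.
    static boolean hasNoArgConstructor(Class<?> clazz) {
        try {
            clazz.getDeclaredConstructor();  // throws if no zero-arg ctor
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Declaring ListOI(String) suppresses the implicit default ctor.
        System.out.println(hasNoArgConstructor(ListOI.class)); // false
        System.out.println(hasNoArgConstructor(String.class)); // true
    }
}
```

 Workarounds for such classes typically mean either adding a no-arg 
 constructor or configuring the serializer with a different instantiator 
 strategy.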

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5329) Date and timestamp type converts invalid strings to '1970-01-01'

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777397#comment-13777397
 ] 

Hudson commented on HIVE-5329:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5329 : Date and timestamp type converts invalid strings to 1970-01-01 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526102)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDate.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_date.q
* /hive/trunk/ql/src/test/queries/clientpositive/type_conversions_1.q
* /hive/trunk/ql/src/test/results/clientpositive/partition_date.q.out
* /hive/trunk/ql/src/test/results/clientpositive/type_conversions_1.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaDateObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaTimestampObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/WritableDateObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/WritableTimestampObjectInspector.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/primitive/TestPrimitiveObjectInspectorUtils.java


 Date and timestamp type converts invalid strings to '1970-01-01'
 

 Key: HIVE-5329
 URL: https://issues.apache.org/jira/browse/HIVE-5329
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.12.0
Reporter: Vikram Dixit K
Assignee: Jason Dere
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-5329.1.patch, HIVE-5329.2.patch, HIVE-5329.3.patch


 {noformat}
 select
   cast('abcd' as date),
   cast('abcd' as timestamp)
 from src limit 1;
 {noformat}
 returns '1970-01-01'
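 The fix direction (returning NULL instead of the epoch for unparseable 
 input) can be sketched in plain Java. safeCastToDate is a hypothetical 
 helper illustrating the intended semantics, not the actual patch:

```java
import java.sql.Date;

public class SafeDateCastSketch {
    // Returns null for strings that are not valid yyyy-[m]m-[d]d dates,
    // instead of silently falling back to the epoch (1970-01-01).
    static Date safeCastToDate(String s) {
        try {
            return Date.valueOf(s);
        } catch (IllegalArgumentException e) {
            return null;  // mirrors SQL semantics: an invalid cast yields NULL
        }
    }

    public static void main(String[] args) {
        System.out.println(safeCastToDate("2013-09-25"));  // 2013-09-25
        System.out.println(safeCastToDate("abcd"));        // null
    }
}
```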

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4531) [WebHCat] Collecting task logs to hdfs

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777393#comment-13777393
 ] 

Hudson commented on HIVE-4531:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-4531: [WebHCat] Collecting task logs to hdfs - add missing files (Daniel 
Dai via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525997)
* /hive/trunk/hcatalog/webhcat/svr/src/test/data
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/hive
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/hive/stderr
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/jar
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/jar/stderr
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/pig
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/pig/stderr
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/streaming
* /hive/trunk/hcatalog/webhcat/svr/src/test/data/status/streaming/stderr
HIVE-4531: [WebHCat] Collecting task logs to hdfs (Daniel Dai via Thejas Nair) 
(thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525807)
* /hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/hive.xml
* /hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/mapreducejar.xml
* 
/hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/mapreducestreaming.xml
* /hive/trunk/hcatalog/src/docs/src/documentation/content/xdocs/pig.xml
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/JarDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/LauncherDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Server.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/StreamingDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HiveJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JarJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/JobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/LogRetriever.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/PigJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestJobIDParser.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTempletonUtils.java


 [WebHCat] Collecting task logs to hdfs
 --

 Key: HIVE-4531
 URL: https://issues.apache.org/jira/browse/HIVE-4531
 Project: Hive
  Issue Type: New Feature
  Components: HCatalog, WebHCat
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-4531-10.patch, HIVE-4531-11.patch, 
 HIVE-4531-1.patch, HIVE-4531-2.patch, HIVE-4531-3.patch, HIVE-4531-4.patch, 
 HIVE-4531-5.patch, HIVE-4531-6.patch, HIVE-4531-7.patch, HIVE-4531-8.patch, 
 HIVE-4531-9.patch, samplestatusdirwithlist.tar.gz


 It would be nice if we collected task logs after the job finishes. This is 
 similar to what Amazon EMR does.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5345) Operator::close() leaks Operator::out, holding reference to buffers

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777392#comment-13777392
 ] 

Hudson commented on HIVE-5345:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5345 : Operator::close() leaks Operator::out, holding reference to buffers 
(Gopal V via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526100)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java


 Operator::close() leaks Operator::out, holding reference to buffers
 ---

 Key: HIVE-5345
 URL: https://issues.apache.org/jira/browse/HIVE-5345
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
 Environment: Ubuntu, LXC, jdk6-x86_64
Reporter: Gopal V
Assignee: Gopal V
  Labels: memory-leak
 Fix For: 0.13.0

 Attachments: HIVE-5345.01.patch, out-leak.png


 When processing multiple splits on the same operator pipeline, the output 
 collector in Operator has a held reference, which causes issues.
 Operator::close() does not drop the reference to the OutputCollector object 
 held in Operator::out.
 This means that trying to allocate space for a new OutputCollector causes an 
 OOM because the old one is still reachable.
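 The leak pattern and its fix can be sketched in a few lines. The classes 
 below are simplified stand-ins for the real Operator and OutputCollector, 
 not the actual patch:

```java
public class CloseDereferenceSketch {
    // Stand-in for a large output buffer held by an operator.
    static class OutputCollector {
        final byte[] buffer = new byte[1024];
    }

    static class Operator {
        OutputCollector out;  // held reference to the collector

        void open(OutputCollector collector) {
            out = collector;
        }

        // Without "out = null", the collector stays reachable through this
        // operator after close(), which is the leak described above: the old
        // buffer cannot be collected when a new one is allocated for the
        // next split.
        void close() {
            out = null;  // drop the reference so GC can reclaim the buffer
        }
    }

    public static void main(String[] args) {
        Operator op = new Operator();
        op.open(new OutputCollector());
        op.close();
        System.out.println(op.out == null);  // true
    }
}
```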

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4914) filtering via partition name should be done inside metastore server (implementation)

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777394#comment-13777394
 ] 

Hudson commented on HIVE-4914:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-4914 : filtering via partition name should be done inside metastore server 
(implementation) (Sergey Shelukhin via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526106)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprRequest.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprResult.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
* /hive/trunk/metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/metastore/Types.php
* 
/hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
* 
/hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb
* /hive/trunk/metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/PartitionExpressionProxy.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionExpressionForMetastore.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/VerifyingObjectStore.java


 filtering via partition name should be done inside metastore server 
 (implementation)
 

 Key: HIVE-4914
 URL: https://issues.apache.org/jira/browse/HIVE-4914
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: D12561.5.patch, D12561.6.patch, D12561.7.patch, 
 HIVE-4914.01.patch, HIVE-4914.02.patch, HIVE-4914.03.patch, 
 HIVE-4914.04.patch, HIVE-4914.05.patch, HIVE-4914.06.patch, 
 HIVE-4914.07.patch, HIVE-4914.D12561.1.patch, HIVE-4914.D12561.2.patch, 
 HIVE-4914.D12561.3.patch, HIVE-4914.D12561.4.patch, HIVE-4914.D12645.1.patch, 
 HIVE-4914-only-no-gen.patch, HIVE-4914-only.patch, HIVE-4914.patch, 
 HIVE-4914.patch, HIVE-4914.patch


 Currently, if filter pushdown is impossible (which is the case most of the 
 time), the client gets all partition names from the metastore, filters them, 
 and asks for partitions by name for the filtered set.
 The metastore server code should do that instead; it should check whether 
 pushdown is possible and do it if so; otherwise it should do name-based 
 filtering.
 This saves the round trip of all partition names from the server to the 
 client, and also removes the need to check pushdown viability on both sides.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: 

[jira] [Commented] (HIVE-5301) Add a schema tool for offline metastore schema upgrade

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777398#comment-13777398
 ] 

Hudson commented on HIVE-5301:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5301 : Add a schema tool for offline metastore schema upgrade (Prasad 
Mujumdar via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526122)
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLine.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/Commands.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/HiveSchemaHelper.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/HiveSchemaTool.java
* 
/hive/trunk/beeline/src/test/org/apache/hive/beeline/src/test/TestSchemaTool.java
* /hive/trunk/bin/ext/schemaTool.sh
* /hive/trunk/bin/schematool
* /hive/trunk/build.xml
* /hive/trunk/metastore/scripts/upgrade/derby/014-HIVE-3764.derby.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/014-HIVE-3764.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/014-HIVE-3764.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/014-HIVE-3764.postgres.sql


 Add a schema tool for offline metastore schema upgrade
 --

 Key: HIVE-5301
 URL: https://issues.apache.org/jira/browse/HIVE-5301
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-5301.1.patch, HIVE-5301.3.patch, HIVE-5301.3.patch, 
 HIVE-5301-with-HIVE-3764.0.patch


 HIVE-3764 is addressing metastore version consistency.
 In addition, it would be helpful to add a tool that can leverage this version 
 information to figure out the required set of upgrade scripts and execute 
 them against the configured metastore. Now that Hive includes the Beeline 
 client, it can be used to execute the scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5181) RetryingRawStore should not retry on logical failures (e.g. from commit)

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777396#comment-13777396
 ] 

Hudson commented on HIVE-5181:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #181 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/181/])
HIVE-5181 : RetryingRawStore should not retry on logical failures (e.g. from 
commit) (Prasad Mujumdar via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526107)
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingRawStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRawStoreTxn.java


 RetryingRawStore should not retry on logical failures (e.g. from commit)
 

 Key: HIVE-5181
 URL: https://issues.apache.org/jira/browse/HIVE-5181
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Prasad Mujumdar
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5181.1.patch, HIVE-5181.3.patch


 RetryingRawStore retries calls. Some methods (e.g. drop_table_core in 
 HiveMetaStore) explicitly call openTransaction and commitTransaction on 
 RawStore.
 When the commit call fails due to some real issue, it is retried, and instead 
 of the real cause of the failure one gets a bogus exception about the 
 transaction open count.
 It doesn't make sense to retry logical errors, especially not from 
 commitTransaction.
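 A retry wrapper that distinguishes transient from logical failures can be 
 sketched as follows. LogicalException and callWithRetry are hypothetical 
 names for illustration, not the RetryingRawStore API:

```java
import java.util.concurrent.Callable;

public class SelectiveRetrySketch {
    // Marker for logical, non-transient failures that retrying cannot fix,
    // such as a failed commit.
    static class LogicalException extends RuntimeException {
        LogicalException(String message) { super(message); }
    }

    // Retries up to maxAttempts on transient errors, but rethrows logical
    // failures immediately so the real cause surfaces to the caller.
    static <T> T callWithRetry(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (LogicalException e) {
                throw e;      // e.g. commitTransaction failed: do not retry
            } catch (Exception e) {
                last = e;     // transient error: try again
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] attempts = {0};
        try {
            callWithRetry(() -> {
                attempts[0]++;
                throw new LogicalException("commit failed");
            }, 3);
        } catch (LogicalException expected) {
            // surfaced immediately instead of being retried
        }
        System.out.println(attempts[0]);  // 1, not 3
    }
}
```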

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5181) RetryingRawStore should not retry on logical failures (e.g. from commit)

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777412#comment-13777412
 ] 

Hudson commented on HIVE-5181:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #115 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/115/])
HIVE-5181 : RetryingRawStore should not retry on logical failures (e.g. from 
commit) (Prasad Mujumdar via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526107)
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingRawStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRawStoreTxn.java


 RetryingRawStore should not retry on logical failures (e.g. from commit)
 

 Key: HIVE-5181
 URL: https://issues.apache.org/jira/browse/HIVE-5181
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Prasad Mujumdar
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5181.1.patch, HIVE-5181.3.patch


 RetryingRawStore retries calls. Some methods (e.g. drop_table_core in 
 HiveMetaStore) explicitly call openTransaction and commitTransaction on 
 RawStore.
 When the commit call fails due to some real issue, it is retried, and instead 
 of the real cause of the failure one gets a bogus exception about the 
 transaction open count.
 It doesn't make sense to retry logical errors, especially not from 
 commitTransaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4914) filtering via partition name should be done inside metastore server (implementation)

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777410#comment-13777410
 ] 

Hudson commented on HIVE-4914:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #115 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/115/])
HIVE-4914 : filtering via partition name should be done inside metastore server 
(implementation) (Sergey Shelukhin via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526106)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprRequest.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PartitionsByExprResult.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
* /hive/trunk/metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php
* /hive/trunk/metastore/src/gen/thrift/gen-php/metastore/Types.php
* 
/hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
* 
/hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb
* /hive/trunk/metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/PartitionExpressionProxy.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionExpressionForMetastore.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/metastore/VerifyingObjectStore.java


 filtering via partition name should be done inside metastore server 
 (implementation)
 

 Key: HIVE-4914
 URL: https://issues.apache.org/jira/browse/HIVE-4914
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: D12561.5.patch, D12561.6.patch, D12561.7.patch, 
 HIVE-4914.01.patch, HIVE-4914.02.patch, HIVE-4914.03.patch, 
 HIVE-4914.04.patch, HIVE-4914.05.patch, HIVE-4914.06.patch, 
 HIVE-4914.07.patch, HIVE-4914.D12561.1.patch, HIVE-4914.D12561.2.patch, 
 HIVE-4914.D12561.3.patch, HIVE-4914.D12561.4.patch, HIVE-4914.D12645.1.patch, 
 HIVE-4914-only-no-gen.patch, HIVE-4914-only.patch, HIVE-4914.patch, 
 HIVE-4914.patch, HIVE-4914.patch


 Currently, if filter pushdown is impossible (which is the case most of the 
 time), the client gets all partition names from the metastore, filters them, 
 and asks for partitions by name for the filtered set.
 The metastore server code should do that instead; it should check whether 
 pushdown is possible and do it if so; otherwise it should do name-based 
 filtering.
 This saves the round trip of all partition names from the server to the 
 client, and also removes the need to check pushdown viability on both sides.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: 

[jira] [Commented] (HIVE-5329) Date and timestamp type converts invalid strings to '1970-01-01'

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777413#comment-13777413
 ] 

Hudson commented on HIVE-5329:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #115 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/115/])
HIVE-5329 : Date and timestamp type converts invalid strings to 1970-01-01 
(Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526102)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToDate.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_date.q
* /hive/trunk/ql/src/test/queries/clientpositive/type_conversions_1.q
* /hive/trunk/ql/src/test/results/clientpositive/partition_date.q.out
* /hive/trunk/ql/src/test/results/clientpositive/type_conversions_1.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaDateObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaTimestampObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/WritableDateObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/WritableTimestampObjectInspector.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/primitive/TestPrimitiveObjectInspectorUtils.java


 Date and timestamp type converts invalid strings to '1970-01-01'
 

 Key: HIVE-5329
 URL: https://issues.apache.org/jira/browse/HIVE-5329
 Project: Hive
  Issue Type: Bug
  Components: Types
Affects Versions: 0.12.0
Reporter: Vikram Dixit K
Assignee: Jason Dere
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-5329.1.patch, HIVE-5329.2.patch, HIVE-5329.3.patch


 {noformat}
 select
   cast('abcd' as date),
   cast('abcd' as timestamp)
 from src limit 1;
 {noformat}
 returns '1970-01-01'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5301) Add a schema tool for offline metastore schema upgrade

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777414#comment-13777414
 ] 

Hudson commented on HIVE-5301:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #115 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/115/])
HIVE-5301 : Add a schema tool for offline metastore schema upgrade (Prasad 
Mujumdar via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526122)
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLine.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/Commands.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/HiveSchemaHelper.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/HiveSchemaTool.java
* 
/hive/trunk/beeline/src/test/org/apache/hive/beeline/src/test/TestSchemaTool.java
* /hive/trunk/bin/ext/schemaTool.sh
* /hive/trunk/bin/schematool
* /hive/trunk/build.xml
* /hive/trunk/metastore/scripts/upgrade/derby/014-HIVE-3764.derby.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/014-HIVE-3764.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/014-HIVE-3764.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/014-HIVE-3764.postgres.sql


 Add a schema tool for offline metastore schema upgrade
 --

 Key: HIVE-5301
 URL: https://issues.apache.org/jira/browse/HIVE-5301
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-5301.1.patch, HIVE-5301.3.patch, HIVE-5301.3.patch, 
 HIVE-5301-with-HIVE-3764.0.patch


 HIVE-3764 is addressing metastore version consistency.
 In addition, it would be helpful to add a tool that can leverage this version 
 information to figure out the required set of upgrade scripts and execute 
 them against the configured metastore. Now that Hive includes the Beeline 
 client, it can be used to execute the scripts.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5223) explain doesn't show serde used for table

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777415#comment-13777415
 ] 

Hudson commented on HIVE-5223:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #115 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/115/])
HIVE-5223 : explain doesn't show serde used for table (Ashutosh Chauhan via 
Thejas Nair) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526116)
* /hive/trunk/contrib/src/test/results/clientpositive/dboutput.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes2.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes3.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes4.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes5.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_avg.q.out
* 
/hive/trunk/contrib/src/test/results/clientpositive/udaf_example_group_concat.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_max.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_max_n.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_min.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udaf_example_min_n.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udf_example_add.q.out
* 
/hive/trunk/contrib/src/test/results/clientpositive/udf_example_arraymapstruct.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udf_example_format.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/udf_row_sequence.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/external_table_ppd.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_ppd_key_range.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_pushdown.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/ppd_key_ranges.q.out
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/PTFRowContainer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PTFDeserializer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableDesc.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/TestSerDe.java
* /hive/trunk/ql/src/test/results/clientnegative/bucket_mapjoin_mismatch1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/script_error.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/sortmerge_mapjoin_mismatch_1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_assert_true.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_assert_true2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alias_casted_column.q.out
* /hive/trunk/ql/src/test/results/clientpositive/allcolref_in_udf.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_partition_coltype.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ambiguous_col.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join0.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join10.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join15.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join16.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join18.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join21.q.out
* 

[jira] [Commented] (HIVE-5279) Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777411#comment-13777411
 ] 

Hudson commented on HIVE-5279:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #115 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/115/])
HIVE-5279 : Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc (Navis 
via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526117)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/UDAF.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/AggregationDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectList.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFMkCollectionEvaluator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSumList.java
* /hive/trunk/ql/src/test/queries/clientpositive/udaf_sum_list.q
* /hive/trunk/ql/src/test/results/clientpositive/udaf_sum_list.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml


 Kryo cannot instantiate GenericUDAFEvaluator in GroupByDesc
 ---

 Key: HIVE-5279
 URL: https://issues.apache.org/jira/browse/HIVE-5279
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Critical
 Fix For: 0.13.0

 Attachments: 5279.patch, D12963.1.patch, D12963.2.patch, 
 D12963.3.patch, D12963.4.patch, D12963.5.patch


 We didn't force GenericUDAFEvaluator to be Serializable. I don't know how 
 the previous serialization mechanism handled this, but Kryo complains that 
 it's not Serializable and fails the query.
 The log below is the example, 
 {noformat}
 java.lang.RuntimeException: com.esotericsoftware.kryo.KryoException: Class 
 cannot be created (missing no-arg constructor): 
 org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector
 Serialization trace:
 inputOI 
 (org.apache.hadoop.hive.ql.udf.generic.GenericUDAFGroupOn$VersionedFloatGroupOnEval)
 genericUDAFEvaluator (org.apache.hadoop.hive.ql.plan.AggregationDesc)
 aggregators (org.apache.hadoop.hive.ql.plan.GroupByDesc)
 conf (org.apache.hadoop.hive.ql.exec.GroupByOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.SelectOperator)
 childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator)
 aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork)
   at 
 org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:312)
   at 
 org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:261)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:256)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:383)
   at org.apache.h
 {noformat}
 If this cannot be fixed somehow, some UDAFs will need to be modified to run 
 on hive-0.13.0.
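The "missing no-arg constructor" error above comes from Kryo's default object-creation strategy, which instantiates classes reflectively through a declared zero-argument constructor. A minimal stdlib-only sketch of that check (the class names here are illustrative stand-ins, not Hive's or Kryo's actual classes):

```java
// Sketch: why Kryo's default strategy rejects classes like
// StandardListObjectInspector. It needs a declared no-arg constructor,
// which this stand-in class (like the real one) does not have.
class NoDefaultCtor {
    private final int field;
    NoDefaultCtor(int field) { this.field = field; }
}

public class KryoCtorCheck {
    // Mirrors what Kryo's default instantiation does under the hood:
    // look up the declared zero-argument constructor via reflection.
    static boolean hasNoArgConstructor(Class<?> cls) {
        try {
            cls.getDeclaredConstructor();
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasNoArgConstructor(NoDefaultCtor.class)); // false
        System.out.println(hasNoArgConstructor(String.class));        // true
    }
}
```

A class failing this check is exactly the situation in the trace: Kryo cannot create the instance during plan deserialization, so the query fails.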

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5345) Operator::close() leaks Operator::out, holding reference to buffers

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777409#comment-13777409
 ] 

Hudson commented on HIVE-5345:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #115 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/115/])
HIVE-5345 : Operator::close() leaks Operator::out, holding reference to buffers 
(Gopal V via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526100)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java


 Operator::close() leaks Operator::out, holding reference to buffers
 ---

 Key: HIVE-5345
 URL: https://issues.apache.org/jira/browse/HIVE-5345
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
 Environment: Ubuntu, LXC, jdk6-x86_64
Reporter: Gopal V
Assignee: Gopal V
  Labels: memory-leak
 Fix For: 0.13.0

 Attachments: HIVE-5345.01.patch, out-leak.png


 When processing multiple splits on the same operator pipeline, the output 
 collector in Operator remains referenced after use, which causes issues.
 Operator::close() does not de-reference the OutputCollector object 
 Operator::out held by the operator.
 This means that trying to allocate space for a new OutputCollector causes an 
 OOM, because the old one is still reachable.
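The fix pattern described above can be sketched in a few lines. This is an illustrative stand-in, not Hive's actual Operator class: the point is simply that close() must null out the collector field so its buffers become garbage-collectable before the next split allocates a new one.

```java
// Illustrative stand-in for a large per-split output buffer holder.
class OutputCollector {
    final byte[] buffer = new byte[1024];
}

// Illustrative stand-in for Hive's Operator; before HIVE-5345, close()
// left 'out' set, pinning the old collector's buffers in memory.
class Operator {
    private OutputCollector out;

    void setOutputCollector(OutputCollector c) { this.out = c; }

    void close() {
        // ... flush/close children here ...
        this.out = null; // the essence of the fix: drop the held reference
    }

    boolean holdsCollector() { return out != null; }
}

public class LeakDemo {
    public static void main(String[] args) {
        Operator op = new Operator();
        op.setOutputCollector(new OutputCollector());
        op.close();
        System.out.println(op.holdsCollector()); // false: buffers reclaimable
    }
}
```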

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5207) Support data encryption for Hive tables

2013-09-25 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777416#comment-13777416
 ] 

Larry McCay commented on HIVE-5207:
---

Hi Jerry - I have taken a high level look through the patch. Lots of good stuff 
there - good work! A couple things that I would like to see more javadocs on, 
and perhaps a document that describes the use cases:

1. TwoTieredKey - exactly what its purpose is, how it's used, what the tiers are, etc.
2. External KeyManagement integration - where and what is the expected contract 
for this integration
3. A specific usecase description for exporting keys into an external keystore 
and who has the authority to initiate the export and where the password comes 
from
4. An explanation as to why we should ever store the key with the data, which 
seems like a bad idea. I understand that it is encrypted with the master secret 
- which takes me to the next question. :)
5. Where is the master secret established and stored, and how is it protected?

There is a minor typo/spelling error that you probably want to fix now rather 
than later:

+public interface HiveKeyResolver  {
+  void init(Configuration conf) throws CryptoException;
+
+  /**
+   * Resolve the key meta information of a table
+   * @param tableDesc The table descriptor
+   */
+  KeyMeta resovleKey(TableDesc tableDesc);
+}

change resovleKey to resolveKey here and in the interface implementation and 
consumer of the method - I think there were 3 instances.

Again, nice work here!
Let's get some higher level descriptions in code javadocs and/or separate 
documents.
Thanks!


 Support data encryption for Hive tables
 ---

 Key: HIVE-5207
 URL: https://issues.apache.org/jira/browse/HIVE-5207
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.12.0
Reporter: Jerry Chen
  Labels: Rhino
 Attachments: HIVE-5207.patch

   Original Estimate: 504h
  Remaining Estimate: 504h

 For sensitive and legally protected data such as personal information, it is 
 a common practice that the data is stored encrypted in the file system. To 
 enable Hive with the ability to store and query the encrypted data is very 
 crucial for Hive data analysis in enterprise. 
  
 When creating table, user can specify whether a table is an encrypted table 
 or not by specify a property in TBLPROPERTIES. Once an encrypted table is 
 created, query on the encrypted table is transparent as long as the 
 corresponding key management facilities are set in the running environment of 
 query. We can use hadoop crypto provided by HADOOP-9331 for underlying data 
 encryption and decryption. 
  
 As to key management, we would support several common key management use 
 cases. First, the table key (data key) can be stored in the Hive metastore, 
 associated with the table in its properties. The table key can be explicitly 
 specified or auto-generated, and will be encrypted with a master key. There 
 are cases where the data being processed is generated by other applications, 
 so we need to support externally managed or imported table keys. Also, the 
 data generated by Hive may be consumed by other applications in the system, 
 so we need a tool or command for exporting the table key to a Java keystore 
 for external use.
  
 To handle versions of Hadoop that do not have crypto support, we can avoid 
 compilation problems by segregating crypto API usage into separate files 
 (shims) to be included only if a flag is defined on the Ant command line 
 (something like -Dcrypto=true).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4822) implement vectorized math functions

2013-09-25 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4822:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to branch. Thanks, Eric!

 implement vectorized math functions
 ---

 Key: HIVE-4822
 URL: https://issues.apache.org/jira/browse/HIVE-4822
 Project: Hive
  Issue Type: Sub-task
Affects Versions: vectorization-branch
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: vectorization-branch

 Attachments: HIVE-4822.1.patch, HIVE-4822.4.patch, 
 HIVE-4822.5-vectorization.patch, HIVE-4822.6.patch.txt, 
 HIVE-4822.7-vectorization.patch


 Implement vectorized support for all the built-in math functions. This 
 includes implementing the vectorized operation, and tying it all together in 
 VectorizationContext so it runs end-to-end. These functions include:
 round(Col)
 Round(Col, N)
 Floor(Col)
 Ceil(Col)
 Rand(), Rand(seed)
 Exp(Col)
 Ln(Col)
 Log10(Col)
 Log2(Col)
 Log(base, Col)
 Pow(col, p), Power(col, p)
 Sqrt(Col)
 Bin(Col)
 Hex(Col)
 Unhex(Col)
 Conv(Col, from_base, to_base)
 Abs(Col)
 Pmod(arg1, arg2)
 Sin(Col)
 Asin(Col)
 Cos(Col)
 ACos(Col)
 Atan(Col)
 Degrees(Col)
 Radians(Col)
 Positive(Col)
 Negative(Col)
 Sign(Col)
 E()
 Pi()
 To reduce the total code volume, do an implicit type cast from non-double 
 input types to double. 
 Also, POSITIVE and NEGATIVE are syntactic sugar for unary + and unary -, so 
 reuse code for those as appropriate.
 Try to call the function directly in the inner loop and avoid new() or 
 expensive operations, as appropriate.
 Templatize the code where appropriate, e.g. all the unary function of form 
 DOUBLE func(DOUBLE)
 can probably be done with a template.
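The "DOUBLE func(DOUBLE)" template mentioned above boils down to a tight loop over a column of doubles, calling the math function directly with no per-row allocation. A minimal sketch, using illustrative class names rather than Hive's actual vectorization classes:

```java
// Illustrative stand-in for a column vector of doubles (not Hive's
// DoubleColumnVector, just the same shape of data).
class DoubleColumn {
    final double[] vector;
    DoubleColumn(int n) { vector = new double[n]; }
}

public class VectorSqrt {
    // The templated unary-function pattern: Sqrt, Exp, Ln, Log10, etc.
    // differ only in which Math.* call sits in the inner loop. No new()
    // or other expensive work happens per row.
    static void evaluate(DoubleColumn in, DoubleColumn out, int n) {
        for (int i = 0; i < n; i++) {
            out.vector[i] = Math.sqrt(in.vector[i]);
        }
    }

    public static void main(String[] args) {
        DoubleColumn in = new DoubleColumn(3);
        in.vector[0] = 4.0; in.vector[1] = 9.0; in.vector[2] = 16.0;
        DoubleColumn out = new DoubleColumn(3);
        evaluate(in, out, 3);
        System.out.println(out.vector[0] + " " + out.vector[1] + " " + out.vector[2]);
        // 2.0 3.0 4.0
    }
}
```

Non-double inputs would be handled, as the description says, by an implicit cast to double before entering this loop.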

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5253) Create component to compile and jar dynamic code

2013-09-25 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777606#comment-13777606
 ] 

Edward Capriolo commented on HIVE-5253:
---

[~brocknoland] Did this not patch?

 Create component to compile and jar dynamic code
 

 Key: HIVE-5253
 URL: https://issues.apache.org/jira/browse/HIVE-5253
 Project: Hive
  Issue Type: Sub-task
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-5253.1.patch.txt, HIVE-5253.3.patch.txt, 
 HIVE-5253.3.patch.txt, HIVE-5253.3.patch.txt, HIVE-5253.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5318) Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10

2013-09-25 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777636#comment-13777636
 ] 

Xuefu Zhang commented on HIVE-5318:
---

[~ashutoshc] Would you like to take a look at the patch again? Thanks.

 Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10
 

 Key: HIVE-5318
 URL: https://issues.apache.org/jira/browse/HIVE-5318
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.9.0, 0.10.0
Reporter: Brad Ruderman
Assignee: Xuefu Zhang
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-5318.1.patch, HIVE-5318.patch


 When exporting Hive tables in Hive 0.9 using EXPORT table 
 TO 'hdfs_path' and then importing into another Hive 0.10 instance using 
 IMPORT FROM 'hdfs_path', Hive throws this error:
 13/09/18 13:14:02 ERROR ql.Driver: FAILED: SemanticException Exception while 
 processing
 org.apache.hadoop.hive.ql.parse.SemanticException: Exception while processing
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: java.lang.NullPointerException
   at java.util.ArrayList.&lt;init&gt;(ArrayList.java:131)
   at 
 org.apache.hadoop.hive.ql.plan.CreateTableDesc.&lt;init&gt;(CreateTableDesc.java:128)
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:99)
   ... 16 more
 13/09/18 13:14:02 INFO ql.Driver: /PERFLOG method=compile 
 start=1379535241411 end=1379535242332 duration=921
 13/09/18 13:14:02 INFO ql.Driver: PERFLOG method=releaseLocks
 13/09/18 13:14:02 INFO ql.Driver: /PERFLOG method=releaseLocks 
 start=1379535242332 end=1379535242332 duration=0
 13/09/18 13:14:02 INFO ql.Driver: PERFLOG method=releaseLocks
 13/09/18 13:14:02 INFO ql.Driver: /PERFLOG method=releaseLocks 
 start=1379535242333 end=1379535242333 duration=0
 This is probably a critical blocker for people who are trying to test Hive 
 0.10 in their staging environments prior to the upgrade from 0.9
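The NullPointerException in the trace comes from ArrayList's copy constructor: passing it a null collection (as apparently happens in CreateTableDesc when a field is absent from the older 0.9 export metadata) throws immediately. A small sketch reproducing the failure and the usual defensive-copy fix; `defensiveCopy` is a hypothetical helper, not a method in the patch:

```java
import java.util.ArrayList;
import java.util.List;

public class NullListCopy {
    // Hypothetical defensive copy of a possibly-null list, the kind of
    // null check that avoids the NPE seen in CreateTableDesc.<init>.
    static List<String> defensiveCopy(List<String> maybeNull) {
        return maybeNull == null ? new ArrayList<>() : new ArrayList<>(maybeNull);
    }

    public static void main(String[] args) {
        boolean threw = false;
        try {
            // Reproduces the trace: ArrayList's copy constructor NPEs on null.
            new ArrayList<String>(null);
        } catch (NullPointerException e) {
            threw = true;
        }
        System.out.println(threw);                      // true
        System.out.println(defensiveCopy(null).size()); // 0
    }
}
```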

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5253) Create component to compile and jar dynamic code

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777667#comment-13777667
 ] 

Brock Noland commented on HIVE-5253:


One of the huge downsides of the current build is that we cannot detect the 
difference between something that fails to compile and a checkstyle violation. 
Since your patch does not introduce any, it looks like there are violations on 
trunk at present. See below.

{noformat}
checkstyle:
 [echo] hcatalog
[checkstyle] Running Checkstyle 5.5 on 613 files
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
 method call child at indentation level 6 not at correct indentation, 8
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method call child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
 method def child at indentation level 3 not at correct indentation, 4
  [for] hcatalog: The following error occurred while executing this line:
  [for] /data/hive-ptest/working/apache-svn-trunk-source/build.xml:355: The 
following error occurred while executing this line:
  [for] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/build.xml:127: The 
following error occurred while executing this line:
  [for] 
/data/hive-ptest/working/apache-svn-trunk-source/hcatalog/build-support/ant/checkstyle.xml:32:
 Got 8 errors and 0 warnings.
{noformat}

 Create component to compile and jar dynamic code
 

 Key: HIVE-5253
 URL: https://issues.apache.org/jira/browse/HIVE-5253
 Project: Hive
  Issue Type: Sub-task
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-5253.1.patch.txt, HIVE-5253.3.patch.txt, 
 HIVE-5253.3.patch.txt, HIVE-5253.3.patch.txt, HIVE-5253.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5358) ReduceSinkDeDuplication should ignore column orders when check overlapping part of keys between parent and child

2013-09-25 Thread Chun Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chun Chen updated HIVE-5358:


Attachment: HIVE-5358.patch

 ReduceSinkDeDuplication should ignore column orders when check overlapping 
 part of keys between parent and child
 

 Key: HIVE-5358
 URL: https://issues.apache.org/jira/browse/HIVE-5358
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Chun Chen
Assignee: Chun Chen
 Attachments: HIVE-5358.patch


 {code}
 select key, value from (select key, value from src group by key, value) t 
 group by key, value;
 {code}
 This can be optimized by ReduceSinkDeDuplication
 {code}
 select key, value from (select key, value from src group by key, value) t 
 group by value, key;
 {code}
 However, the SQL above currently can't be optimized by ReduceSinkDeDuplication 
 due to the different column orders of the parent and child operators.
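The improvement being proposed is an order-insensitive comparison of the overlapping keys. A minimal sketch of the idea (this is an illustration, not Hive's actual implementation, which operates on expression descriptors rather than strings): compare the key columns as sets, so `(key, value)` in the parent matches `(value, key)` in the child.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

public class KeyOverlap {
    // Sketch: treat the two key lists as sets so that a reordering of the
    // same columns still counts as an overlap for de-duplication purposes.
    static boolean sameKeys(List<String> parentKeys, List<String> childKeys) {
        return new HashSet<>(parentKeys).equals(new HashSet<>(childKeys));
    }

    public static void main(String[] args) {
        // (key, value) vs (value, key): same columns, different order.
        System.out.println(sameKeys(Arrays.asList("key", "value"),
                                    Arrays.asList("value", "key"))); // true
        // Genuinely different key sets must not match.
        System.out.println(sameKeys(Arrays.asList("key", "value"),
                                    Arrays.asList("value")));        // false
    }
}
```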

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5318) Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10

2013-09-25 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777687#comment-13777687
 ] 

Ashutosh Chauhan commented on HIVE-5318:


+1

 Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10
 

 Key: HIVE-5318
 URL: https://issues.apache.org/jira/browse/HIVE-5318
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.9.0, 0.10.0
Reporter: Brad Ruderman
Assignee: Xuefu Zhang
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-5318.1.patch, HIVE-5318.patch


 When exporting Hive tables in Hive 0.9 using EXPORT table 
 TO 'hdfs_path' and then importing into another Hive 0.10 instance using 
 IMPORT FROM 'hdfs_path', Hive throws this error:
 13/09/18 13:14:02 ERROR ql.Driver: FAILED: SemanticException Exception while 
 processing
 org.apache.hadoop.hive.ql.parse.SemanticException: Exception while processing
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: java.lang.NullPointerException
   at java.util.ArrayList.&lt;init&gt;(ArrayList.java:131)
   at 
 org.apache.hadoop.hive.ql.plan.CreateTableDesc.&lt;init&gt;(CreateTableDesc.java:128)
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:99)
   ... 16 more
 13/09/18 13:14:02 INFO ql.Driver: /PERFLOG method=compile 
 start=1379535241411 end=1379535242332 duration=921
 13/09/18 13:14:02 INFO ql.Driver: PERFLOG method=releaseLocks
 13/09/18 13:14:02 INFO ql.Driver: /PERFLOG method=releaseLocks 
 start=1379535242332 end=1379535242332 duration=0
 13/09/18 13:14:02 INFO ql.Driver: PERFLOG method=releaseLocks
 13/09/18 13:14:02 INFO ql.Driver: /PERFLOG method=releaseLocks 
 start=1379535242333 end=1379535242333 duration=0
 This is probably a critical blocker for people who are trying to test Hive 
 0.10 in their staging environments prior to the upgrade from 0.9

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5274) HCatalog package renaming backward compatibility follow-up

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777692#comment-13777692
 ] 

Brock Noland commented on HIVE-5274:


[~sushanth] Did this patch cause some checkstyle violations seen here 
https://issues.apache.org/jira/browse/HIVE-5253?focusedCommentId=13777667&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13777667

 HCatalog package renaming backward compatibility follow-up
 --

 Key: HIVE-5274
 URL: https://issues.apache.org/jira/browse/HIVE-5274
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.12.0

 Attachments: HIVE-5274.2.patch, HIVE-5274.3.patch, HIVE-5274.4.patch


 As part of HIVE-4869, the hbase storage handler in hcat was moved to 
 org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it 
 was intended to be deprecated as well.
 However, it imports and uses several org.apache.hive.hcatalog classes. This 
 needs to be changed to use org.apache.hcatalog classes.
 ==
 Note : The above is a complete description of this issue in and of itself; 
 the following is more detail on the backward-compatibility goal I have (not 
 saying that each of these things is violated) : 
 a) People using org.apache.hcatalog packages should continue being able to 
 use that package, and see no difference at compile time or runtime. All code 
 here is considered deprecated, and will be gone by the time hive 0.14 rolls 
 around. Additionally, org.apache.hcatalog should behave as if it were 0.11 
 for all compatibility purposes.
 b) People using org.apache.hive.hcatalog packages should never have an 
 org.apache.hcatalog dependency injected in.
 Thus,
 It is okay for org.apache.hcatalog to use org.apache.hive.hcatalog packages 
 internally (say HCatUtil, for example), as long as any interfaces only expose 
 org.apache.hcatalog.\*. For tests that test org.apache.hcatalog.\*, we must be 
 capable of testing it from a pure org.apache.hcatalog.\* world.
 It is never okay for org.apache.hive.hcatalog to use org.apache.hcatalog, 
 even in tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-09-25 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-4898:
--

Assignee: Eric Hanson

 make vectorized math functions work end-to-end (update 
 VectorizationContext.java)
 -

 Key: HIVE-4898
 URL: https://issues.apache.org/jira/browse/HIVE-4898
 Project: Hive
  Issue Type: Sub-task
Affects Versions: vectorization-branch
Reporter: Eric Hanson
Assignee: Eric Hanson

 The vectorized math function VectorExpression classes were added in 
 HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
 end-to-end. This requires updating VectorizationContext to use the new 
 classes in vectorized expression creation.
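To make the description concrete, the lookup an updated VectorizationContext needs can be sketched as a registry from SQL function names to vectorized expression classes. This is a hedged, self-contained illustration; the class and method names below are hypothetical stand-ins, not the actual Hive types:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of mapping SQL function names to vectorized
// expression implementations, in the spirit of VectorizationContext.
// These are stand-in names, not actual Hive classes.
public class VectorFunctionRegistry {

    // Marker for a vectorized expression evaluated over a batch of rows.
    interface VectorExpression {
        void evaluate(double[] batch);
    }

    // Example vectorized implementation: sqrt applied to a whole column batch.
    static class FuncSqrtDoubleToDouble implements VectorExpression {
        public void evaluate(double[] batch) {
            for (int i = 0; i < batch.length; i++) {
                batch[i] = Math.sqrt(batch[i]);
            }
        }
    }

    private final Map<String, VectorExpression> registry = new HashMap<>();

    public VectorFunctionRegistry() {
        // The registration step expression creation would rely on.
        registry.put("sqrt", new FuncSqrtDoubleToDouble());
    }

    public VectorExpression lookup(String functionName) {
        return registry.get(functionName.toLowerCase());
    }

    public static void main(String[] args) {
        VectorFunctionRegistry ctx = new VectorFunctionRegistry();
        double[] batch = {4.0, 9.0, 16.0};
        ctx.lookup("SQRT").evaluate(batch);
        System.out.println(batch[0] + " " + batch[1] + " " + batch[2]);
    }
}
```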



[jira] [Work started] (HIVE-4898) make vectorized math functions work end-to-end (update VectorizationContext.java)

2013-09-25 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-4898 started by Eric Hanson.

 make vectorized math functions work end-to-end (update 
 VectorizationContext.java)
 -

 Key: HIVE-4898
 URL: https://issues.apache.org/jira/browse/HIVE-4898
 Project: Hive
  Issue Type: Sub-task
Affects Versions: vectorization-branch
Reporter: Eric Hanson
Assignee: Eric Hanson

 The vectorized math function VectorExpression classes were added in 
 HIVE-4822. This JIRA is to allow those to actually be used in a SQL query 
 end-to-end. This requires updating VectorizationContext to use the new 
 classes in vectorized expression creation.



[jira] [Commented] (HIVE-5235) Infinite loop with ORC file and Hive 0.11

2013-09-25 Thread Pere Ferrera Bertran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1307#comment-1307
 ] 

Pere Ferrera Bertran commented on HIVE-5235:


Hi guys,

Here are the OS details (uname -a): 

Linux  3.6.11-gentoo-xxx #1 SMP Wed Jan 23 12:25:47 EST 2013 x86_64 
Intel(R) Xeon(R) CPU L5520 @ 2.27GHz GenuineIntel GNU/Linux

The Java version at the moment of the crashes was Java SE 7 Update 25. We 
are currently not using the ORC file format in this cluster, and we can't 
distribute data from it; however, we will try to launch a query with fake data 
to try to reproduce it.

 Infinite loop with ORC file and Hive 0.11
 -

 Key: HIVE-5235
 URL: https://issues.apache.org/jira/browse/HIVE-5235
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
 Environment: Gentoo linux with Hortonworks Hadoop 
 hadoop-1.1.2.23.tar.gz and Apache Hive 0.11d
Reporter: Iván de Prado
Priority: Blocker

 We are using Hive 0.11 with the ORC file format and we get some tasks blocked 
 in some kind of infinite loop. They keep working indefinitely when we set a 
 huge task expiry timeout. If we set the expiry time to 600 seconds, the tasks 
 fail because of not reporting progress, and finally the job fails. 
 That is not consistent, and sometimes the behavior changes between job 
 executions. It happens for different queries.
 We are using Hive 0.11 with Hadoop hadoop-1.1.2.23 from Hortonworks. The task 
 that is blocked keeps consuming 100% of CPU, and the stack trace is 
 consistently the same. Everything points to some kind of infinite loop. My 
 guess is that it has some relation to the ORC file. Maybe some pointer is 
 written incorrectly, producing some kind of infinite loop when reading. Or 
 maybe there is a bug in the reading stage.
 More information below. The stack trace:
 {noformat} 
 "main" prio=10 tid=0x7f20a000a800 nid=0x1ed2 runnable [0x7f20a8136000]
java.lang.Thread.State: RUNNABLE
   at java.util.zip.Inflater.inflateBytes(Native Method)
   at java.util.zip.Inflater.inflate(Inflater.java:256)
   - locked <0xf42a6ca0> (a java.util.zip.ZStreamRef)
   at 
 org.apache.hadoop.hive.ql.io.orc.ZlibCodec.decompress(ZlibCodec.java:64)
   at 
 org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:128)
   at 
 org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:143)
   at 
 org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readVulong(SerializationUtils.java:54)
   at 
 org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readVslong(SerializationUtils.java:65)
   at 
 org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReader.readValues(RunLengthIntegerReader.java:66)
   at 
 org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReader.next(RunLengthIntegerReader.java:81)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$IntTreeReader.next(RecordReaderImpl.java:332)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:802)
   at 
 org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1214)
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:71)
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:46)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
   at 
 org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:300)
   at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:218)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:236)
   - eliminated <0xe1459700> (a 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)
   - locked <0xe1459700> (a 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at 

[jira] [Updated] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-5264:
--

Attachment: D12993.2.patch

sershe updated the revision HIVE-5264 [jira] SQL generated by 
MetaStoreDirectSql.java not compliant with Postgres..

  small rebase (conflict with other patch)

Reviewers: ashutoshc, JIRA

REVISION DETAIL
  https://reviews.facebook.net/D12993

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D12993?vs=40107&id=40443#toc

AFFECTED FILES
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java

To: JIRA, ashutoshc, sershe


 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are 
 interpreted as lower case (there is no Postgres option to change this). Since 
 the Metastore table schema uses upper-case table names, the correct SQL 
 requires escaped identifiers for those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.
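To make the identifier-folding point concrete, here is a minimal, hypothetical sketch (not the actual MetaStoreDirectSql code) of quoting identifiers so Postgres preserves their case:

```java
// Illustrative identifier quoting for Postgres-compatible generated SQL.
// Hypothetical helper, not the actual MetaStoreDirectSql implementation.
public class IdentifierQuoting {

    // Wrap an identifier in double quotes so Postgres preserves its case
    // instead of folding it to lower case.
    static String quote(String identifier) {
        return "\"" + identifier + "\"";
    }

    // Qualify a column with its table, quoting both parts.
    static String qualify(String table, String column) {
        return quote(table) + "." + quote(column);
    }

    public static void main(String[] args) {
        String sql = "select " + qualify("PARTITIONS", "PART_ID")
            + " from " + quote("PARTITIONS");
        System.out.println(sql);
    }
}
```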



[jira] [Commented] (HIVE-4910) Hadoop 2 archives broken

2013-09-25 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1337#comment-1337
 ] 

Jason Dere commented on HIVE-4910:
--

FYI it looks like an issue has been created for this on the Hadoop end at 
HADOOP-9776

 Hadoop 2 archives broken
 

 Key: HIVE-4910
 URL: https://issues.apache.org/jira/browse/HIVE-4910
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Tests
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4910.patch, HIVE-4910.patch


 Hadoop 2 archive tests are broken. The issue stems from the fact that har uri 
 construction does not really have a port in the URI when unit tests are run. 
 This means that an invalid uri is constructed resulting in failures.
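As a hedged illustration of the failure mode (hypothetical code, not the actual Hadoop archive shim): java.net.URI reports port -1 when a URI has no explicit port, so naive har URI construction embeds an invalid ":-1":

```java
import java.net.URI;

// Hypothetical illustration of the har URI failure mode described above;
// not the actual Hadoop HarFileSystem code.
public class HarUriSketch {

    // Naive construction: when the underlying URI has no port,
    // URI.getPort() returns -1, which ends up inside the har URI.
    static String naiveHarUri(URI underlying, String archivePath) {
        return "har://" + underlying.getScheme() + "-" + underlying.getHost()
            + ":" + underlying.getPort() + archivePath;
    }

    public static void main(String[] args) {
        URI noPort = URI.create("hdfs://localhost/user/test");
        // The result contains ":-1", an invalid authority component.
        System.out.println(naiveHarUri(noPort, "/tmp/foo.har"));
    }
}
```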



[jira] [Resolved] (HIVE-5273) Subsequent use of Mapper yields 0 results

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair resolved HIVE-5273.
-

Resolution: Cannot Reproduce

I am unable to reproduce this issue with hive trunk and branch 0.12 . Please 
let me know if I am not following the right steps here.

By local task tracker, I assume you meant local mode jobtracker. To run in 
local mode, I used - 
echo $HIVE_OPTS 
-hiveconf mapred.job.tracker=local -hiveconf fs.default.name=file:///tmp 
-hiveconf hive.metastore.warehouse.dir=file:///tmp/warehouse -hiveconf 
javax.jdo.option.ConnectionURL=jdbc:derby:;databaseName=/tmp/metastore_db;create=true


This is what I tried:
//create table
{code}
hive> create table ts(s string);
OK
Time taken: 0.02 seconds
hive> select s from ts limit 5;
{code}

//adding data to table
{code}
$ perl -e 'for (my $i=0; $i1; $i++){ print 
"asdfasdfasdfasdfasdfasdfasdfasdfasd\n";}' > /tmp/warehouse/ts/input
$ du -hs /tmp/warehouse/ts/input
3.4G/tmp/warehouse/ts/input

{code}


//running the test
{code}
hive> select s from ts limit 5;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Execution log at: /tmp/thejas/.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-09-25 09:47:25,276 null map = 0%,  reduce = 0%
2013-09-25 09:47:28,278 null map = 100%,  reduce = 0%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
OK
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
Time taken: 14.622 seconds, Fetched: 5 row(s)


hive> select s from ts limit 5;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Execution log at: /tmp/thejas/.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-09-25 09:58:00,492 null map = 0%,  reduce = 0%
2013-09-25 09:58:03,493 null map = 100%,  reduce = 0%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
OK
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
asdfasdfasdfasdfasdfasdfasdfasdfasd
Time taken: 11.825 seconds, Fetched: 5 row(s)

{code}





 Subsequent use of Mapper yields 0 results
 -

 Key: HIVE-5273
 URL: https://issues.apache.org/jira/browse/HIVE-5273
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 0.13.0
Reporter: Mike Lewis
Priority: Blocker

 First noticed this when using local task tracker (and is easiest to reproduce 
 with it).
 Created a table with one column (uuid).  Ran
 {code}
 SELECT uuid FROM test_foo LIMIT 5;
 {code}
 Results are as expected:
 {code}
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 Time taken: 40.172 seconds, Fetched: 5 row(s)
 {code}
 Then I run it again.
 The results are not as expected:
 {code}
 Time taken: 55.498 seconds
 {code}
 The table I am querying is
 {code}
 hive> describe extended test_foo;
 OK
 uuid  string  None

 Detailed Table InformationTable(tableName:test_foo, dbName:default, 
 owner:lewis, createTime:1378934838, lastAccessTime:0, retention:0, 
 sd:StorageDescriptor(cols:[FieldSchema(name:uuid, type:string, 
 comment:null)], 
 location:hdfs://gun1.sjc1c.square:8020/user/hive/warehouse/test_foo, 
 inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
 outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
 compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
 serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
 parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
 parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
 skewedColValueLocationMaps:{}), storedAsSubDirectories:false), 
 partitionKeys:[], parameters:{numPartitions=0, numFiles=37, 
 transient_lastDdlTime=1378934838, numRows=0, totalSize=44600654909, 
 rawDataSize=0}, viewOriginalText:null, viewExpandedText:null, 
 tableType:MANAGED_TABLE) 
 {code}
 With a non-local tasktracker, subsequent queries work, but when doing a 
 {{count(* )}} over a large data set, 0.12.0 returns only a subset of the 
 results that 0.10.0 returns.


[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1351#comment-1351
 ] 

Sergey Shelukhin commented on HIVE-5264:


I don't understand why hiveqa is not picking up this jira... tests passed here 
before the recent rebase, I just kicked them off again, but they will take a 
while.

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are 
 interpreted as lower case (there is no Postgres option to change this). Since 
 the Metastore table schema uses upper-case table names, the correct SQL 
 requires escaped identifiers for those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Updated] (HIVE-5273) Subsequent use of Mapper yields 0 results

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5273:


Assignee: Thejas M Nair

 Subsequent use of Mapper yields 0 results
 -

 Key: HIVE-5273
 URL: https://issues.apache.org/jira/browse/HIVE-5273
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0, 0.13.0
Reporter: Mike Lewis
Assignee: Thejas M Nair
Priority: Blocker

 First noticed this when using local task tracker (and is easiest to reproduce 
 with it).
 Created a table with one column (uuid).  Ran
 {code}
 SELECT uuid FROM test_foo LIMIT 5;
 {code}
 Results are as expected:
 {code}
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 ace7265d-49bf-4c11-af67-0cd0a33c690e
 Time taken: 40.172 seconds, Fetched: 5 row(s)
 {code}
 Then I run it again.
 The results are not as expected:
 {code}
 Time taken: 55.498 seconds
 {code}
 The table I am querying is
 {code}
 hive> describe extended test_foo;
 OK
 uuid  string  None

 Detailed Table InformationTable(tableName:test_foo, dbName:default, 
 owner:lewis, createTime:1378934838, lastAccessTime:0, retention:0, 
 sd:StorageDescriptor(cols:[FieldSchema(name:uuid, type:string, 
 comment:null)], 
 location:hdfs://gun1.sjc1c.square:8020/user/hive/warehouse/test_foo, 
 inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
 outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
 compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
 serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
 parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
 parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
 skewedColValueLocationMaps:{}), storedAsSubDirectories:false), 
 partitionKeys:[], parameters:{numPartitions=0, numFiles=37, 
 transient_lastDdlTime=1378934838, numRows=0, totalSize=44600654909, 
 rawDataSize=0}, viewOriginalText:null, viewExpandedText:null, 
 tableType:MANAGED_TABLE) 
 {code}
 With a non-local tasktracker, subsequent queries work, but when doing a 
 {{count(* )}} over a large data set, 0.12.0 returns only a subset of the 
 results that 0.10.0 returns.



[jira] [Updated] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5264:
---

Attachment: HIVE-5264.02.patch

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are 
 interpreted as lower case (there is no Postgres option to change this). Since 
 the Metastore table schema uses upper-case table names, the correct SQL 
 requires escaped identifiers for those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1357#comment-1357
 ] 

Brock Noland commented on HIVE-5264:


Phabricator is giving the filename "D12993.2.patch", which is different from 
what it used to be, so hiveqa doesn't test the file; see 
https://cwiki.apache.org/confluence/display/Hive/Hive+PreCommit+Patch+Testing. 
Do you know if this change in filename is caused by Phabricator or by something 
you are inadvertently doing?

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are 
 interpreted as lower case (there is no Postgres option to change this). Since 
 the Metastore table schema uses upper-case table names, the correct SQL 
 requires escaped identifiers for those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-5274) HCatalog package renaming backward compatibility follow-up

2013-09-25 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1359#comment-1359
 ] 

Thejas M Nair commented on HIVE-5274:
-

Looks like its not this patch, its HIVE-5223 that caused the checkstyle 
violations. [~ashutoshc] Can you please take a look ?

 HCatalog package renaming backward compatibility follow-up
 --

 Key: HIVE-5274
 URL: https://issues.apache.org/jira/browse/HIVE-5274
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.12.0

 Attachments: HIVE-5274.2.patch, HIVE-5274.3.patch, HIVE-5274.4.patch


 As part of HIVE-4869, the hbase storage handler in hcat was moved to 
 org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it 
 was intended to be deprecated as well.
 However, it imports and uses several org.apache.hive.hcatalog classes. This 
 needs to be changed to use org.apache.hcatalog classes.
 ==
 Note: The above is a complete description of this issue in and of itself; 
 the following gives more detail on the backward-compatibility goals I have (not 
 saying that each of these is violated): 
 a) People using org.apache.hcatalog packages should continue to be able to 
 use that package, and see no difference at compile time or runtime. All code 
 there is considered deprecated, and will be gone by the time Hive 0.14 rolls 
 around. Additionally, org.apache.hcatalog should behave as if it were 0.11 
 for all compatibility purposes.
 b) People using org.apache.hive.hcatalog packages should never have an 
 org.apache.hcatalog dependency injected in.
 Thus,
 It is okay for org.apache.hcatalog to use org.apache.hive.hcatalog packages 
 internally (say HCatUtil, for example), as long as any interfaces only expose 
 org.apache.hcatalog.\*. For tests that test org.apache.hcatalog.\*, we must be 
 capable of testing it from a pure org.apache.hcatalog.\* world.
 It is never okay for org.apache.hive.hcatalog to use org.apache.hcatalog, 
 even in tests.
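The delegation rule above can be sketched with a minimal, hypothetical example (stand-in class names, not actual HCatalog code): the deprecated old-package facade uses the new-package implementation internally while exposing only types visible to old-package callers:

```java
// Hypothetical sketch of the deprecation pattern described above.
// Stand-in for an org.apache.hive.hcatalog (new-package) class.
class NewHCatUtil {
    static String makeDbTableName(String db, String table) {
        return db + "." + table;
    }
}

// Stand-in for a deprecated org.apache.hcatalog (old-package) class.
@Deprecated
public class OldHCatUtil {
    // Delegates internally to the new package, but the public signature
    // exposes only types that old-package callers already see (here,
    // plain String), so callers notice no difference.
    public static String makeDbTableName(String db, String table) {
        return NewHCatUtil.makeDbTableName(db, table);
    }

    public static void main(String[] args) {
        System.out.println(OldHCatUtil.makeDbTableName("default", "src"));
    }
}
```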



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1362#comment-1362
 ] 

Sergey Shelukhin commented on HIVE-5264:


Yeah, but I keep uploading the properly-named file after :)

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are 
 interpreted as lower case (there is no Postgres option to change this). Since 
 the Metastore table schema uses upper-case table names, the correct SQL 
 requires escaped identifiers for those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1364#comment-1364
 ] 

Sergey Shelukhin commented on HIVE-5264:


As for the Phabricator patches, I am updating manually because I had some 
issues with local branches for this issue. Maybe the review is attached to the 
jira wrong. But anyway I am hoping it would pick up my patch

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are 
 interpreted as lower case (there is no Postgres option to change this). Since 
 the Metastore table schema uses upper-case table names, the correct SQL 
 requires escaped identifiers for those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Updated] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

2013-09-25 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-5296:
-

Affects Version/s: 0.13.0

 Memory leak: OOM Error after multiple open/closed JDBC connections. 
 

 Key: HIVE-5296
 URL: https://issues.apache.org/jira/browse/HIVE-5296
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0
 Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
Reporter: Douglas
  Labels: hiveserver
 Fix For: 0.12.0

 Attachments: HIVE-5296.patch, HIVE-5296.patch, HIVE-5296.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481
 However, on inspection of the related patch and my built version of Hive 
 (patch carried forward to 0.12.0), I am still seeing the described behaviour.
 Multiple connections to HiveServer2, all of which are closed and disposed of
 properly, cause the Java heap size to grow extremely quickly.
 This issue can be recreated using the following code
 {code}
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.sql.Statement;
 import org.apache.hive.service.cli.HiveSQLException;
 import org.apache.log4j.Logger;
 /*
  * Class which encapsulates the lifecycle of a query or statement.
  * Provides functionality which allows you to create a connection.
  */
 public class HiveClient {

   Connection con;
   Logger logger;
   private static String driverName = "org.apache.hive.jdbc.HiveDriver";
   private String db;

   public HiveClient(String db) {
     logger = Logger.getLogger(HiveClient.class);
     this.db = db;

     try {
       Class.forName(driverName);
     } catch (ClassNotFoundException e) {
       logger.info("Can't find Hive driver");
     }

     String hiveHost = GlimmerServer.config.getString("hive/host");
     String hivePort = GlimmerServer.config.getString("hive/port");
     String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort + "/default";
     logger.info(String.format("Attempting to connect to %s", connectionString));
     try {
       con = DriverManager.getConnection(connectionString, "", "");
     } catch (Exception e) {
       logger.error("Problem instantiating the connection: " + e.getMessage());
     }
   }

   public int update(String query) {
     Integer res = 0;
     Statement stmt = null;
     try {
       stmt = con.createStatement();
       String switchdb = "USE " + db;
       logger.info(switchdb);
       stmt.executeUpdate(switchdb);
       logger.info(query);
       res = stmt.executeUpdate(query);
       logger.info("Query passed to server");
       stmt.close();
     } catch (HiveSQLException e) {
       logger.info(String.format("HiveSQLException thrown, this can be valid, "
           + "but check the error: %s from the query %s", e.toString(), query));
     } catch (SQLException e) {
       logger.error(String.format("Unable to execute query %s. SQLException: %s", query, e));
     } catch (Exception e) {
       logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
     }

     if (stmt != null) {
       try {
         stmt.close();
       } catch (SQLException e) {
         logger.error("Cannot close the statement, potential memory leak: " + e);
       }
     }

     return res;
   }

   public void close() {
     if (con != null) {
       try {
         con.close();
       } catch (SQLException e) {
         logger.info("Problem closing connection: " + e);
       }
     }
   }
 }
 {code}
 and by creating and closing many HiveClient objects. The heap space used by
 the HiveServer2 RunJar process is seen to increase
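The growth pattern described above can be illustrated with a minimal, self-contained sketch (the class, cache, and sizes below are hypothetical stand-ins, not Hive code): when a static cache adds an entry per connection but never evicts on close, every connection's state stays reachable, which is exactly how the heap keeps growing across open/close cycles.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the leak pattern: a static cache retains an entry per
// "connection", so closing the connection alone never frees the memory.
public class StaticCacheLeakDemo {
    static final List<byte[]> CACHE = new ArrayList<>();

    static void openAndClose() {
        byte[] perConnectionState = new byte[1024];
        CACHE.add(perConnectionState); // cached on open, never evicted on close
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            openAndClose();
        }
        // All 1000 per-connection buffers are still strongly reachable here.
        System.out.println(CACHE.size());
    }
}
```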

[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1382#comment-1382
 ] 

Brock Noland commented on HIVE-5264:


It's currently running:

https://builds.apache.org/user/brock/my-views/view/hive/job/PreCommit-HIVE-Build/896/console

but I think it will fail due to:

https://issues.apache.org/jira/browse/HIVE-5274?focusedCommentId=1359&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-1359

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.
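The identifier-folding issue can be sketched with a small helper (illustrative only; `q` and `partitionIdQuery` are invented names, not Hive methods): double-quoting each upper-case Metastore identifier yields SQL that Postgres resolves as written instead of folding to lower case.

```java
public class QuoteIdentifiers {
    // Postgres folds unquoted identifiers to lower case, so the Metastore's
    // upper-case table and column names must be double-quoted in generated SQL.
    static String q(String ident) {
        return "\"" + ident + "\"";
    }

    static String partitionIdQuery() {
        return "select " + q("PARTITIONS") + "." + q("PART_ID")
                + " from " + q("PARTITIONS");
    }

    public static void main(String[] args) {
        // Prints: select "PARTITIONS"."PART_ID" from "PARTITIONS"
        System.out.println(partitionIdQuery());
    }
}
```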



[jira] [Commented] (HIVE-5274) HCatalog package renaming backward compatibility follow-up

2013-09-25 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777801#comment-13777801
 ] 

Sushanth Sowmyan commented on HIVE-5274:


Hmm, I checked trunk with this patch for checkstyle violations before going
ahead with it. I can verify (and also check the other one Thejas mentioned).

 HCatalog package renaming backward compatibility follow-up
 --

 Key: HIVE-5274
 URL: https://issues.apache.org/jira/browse/HIVE-5274
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.12.0

 Attachments: HIVE-5274.2.patch, HIVE-5274.3.patch, HIVE-5274.4.patch


 As part of HIVE-4869, the hbase storage handler in hcat was moved to 
 org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it 
 was intended to be deprecated as well.
 However, it imports and uses several org.apache.hive.hcatalog classes. This 
 needs to be changed to use org.apache.hcatalog classes.
 ==
 Note : The above is a complete description of this issue in and of by itself, 
 the following is more details on the backward-compatibility goal I have(not 
 saying that each of these things are violated) : 
 a) People using org.apache.hcatalog packages should continue being able to 
 use that package, and see no difference at compile time or runtime. All code 
 here is considered deprecated, and will be gone by the time hive 0.14 rolls 
 around. Additionally, org.apache.hcatalog should behave as if it were 0.11 
 for all compatibility purposes.
 b) People using org.apache.hive.hcatalog packages should never have an 
 org.apache.hcatalog dependency injected in.
 Thus,
 It is okay for org.apache.hcatalog to use org.apache.hive.hcatalog packages 
 internally (say HCatUtil, for example), as long as any interfaces only expose 
 org.apache.hcatalog.\* For tests that test org.apache.hcatalog.\*, we must be 
 capable of testing it from a pure org.apache.hcatalog.\* world.
 It is never okay for org.apache.hive.hcatalog to use org.apache.hcatalog, 
 even in tests.
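The compatibility rule in (a)/(b) above can be illustrated with a toy delegation pattern (class names and the URI value are invented for illustration; this is not HCatalog code): the deprecated old-package facade may call into the new package internally, while its own interface exposes only old-package types.

```java
// Stand-in for an org.apache.hive.hcatalog.* class (the "new" package).
class NewHCatUtil {
    static String serverUri() {
        return "thrift://localhost:9083"; // illustrative value only
    }
}

// Stand-in for an org.apache.hcatalog.* class (the "old", deprecated package).
@Deprecated
class OldHCatUtil {
    static String serverUri() {
        // Internal delegation to the new package is permitted, as long as
        // nothing from the new package leaks into this class's signatures.
        return NewHCatUtil.serverUri();
    }
}

public class CompatDemo {
    public static void main(String[] args) {
        // Old-package callers see identical behavior.
        System.out.println(OldHCatUtil.serverUri().equals(NewHCatUtil.serverUri()));
    }
}
```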



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777802#comment-13777802
 ] 

Sergey Shelukhin commented on HIVE-5264:


Hmm, it seems to have run with the phabricator patch and discarded it. Let me
try to cancel and resubmit.

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Updated] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5264:
---

Status: Open  (was: Patch Available)

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777804#comment-13777804
 ] 

Brock Noland commented on HIVE-5264:


No, it failed because trunk checkstyle fails at present. See 
https://issues.apache.org/jira/browse/HIVE-5274?focusedCommentId=1359&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-1359

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Updated] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5264:
---

Status: Patch Available  (was: Open)

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.03.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Updated] (HIVE-5352) cast('1.0' as int) returns null

2013-09-25 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5352:
-

Attachment: HIVE-5352.3.patch

A better approach (.3) per review comments.

 cast('1.0' as int) returns null
 ---

 Key: HIVE-5352
 URL: https://issues.apache.org/jira/browse/HIVE-5352
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-5352.1.patch, HIVE-5352.2.patch, HIVE-5352.3.patch


 Casting strings to int/smallint/bigint/tinyint yields null if the string
 isn't a 'pure' integer. '1.0', '2.4', '1e5' all return null. I think for
 those cases the cast should return the truncated int (i.e.: if c is a string,
 cast(c as int) should be the same as cast(cast(c as float) as int)).
 This is in line with the standard and is the same behavior as mysql and 
 oracle. (postgres and sql server throw error, see first answer here: 
 http://social.msdn.microsoft.com/Forums/sqlserver/en-US/af3eff9c-737b-42fe-9016-05da9203a667/oracle-does-understand-cast10-as-int-why-sql-server-does-not)
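The proposed semantics can be sketched as follows (a hypothetical helper, not the actual Hive UDF code): parse the string as a double and truncate toward zero, returning NULL only for genuinely non-numeric input.

```java
public class StringToIntCast {
    // Proposed semantics: cast(c as int) == cast(cast(c as double) as int),
    // i.e. truncate toward zero; non-numeric strings still yield NULL.
    static Integer castToInt(String s) {
        try {
            return (int) Double.parseDouble(s);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(castToInt("1.0")); // 1
        System.out.println(castToInt("2.4")); // 2
        System.out.println(castToInt("1e5")); // 100000
        System.out.println(castToInt("abc")); // null
    }
}
```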



[jira] [Updated] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5264:
---

Attachment: HIVE-5264.03.patch

Exactly the same patch. Some random shamanic dancing to maybe appease HiveQA

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.03.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777809#comment-13777809
 ] 

Brock Noland commented on HIVE-5264:


That is going to fail with the same error message! See my earlier comments :)

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.03.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-09-25 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777816#comment-13777816
 ] 

Konstantin Boudnik commented on HIVE-4501:
--

The provided patch doesn't solve the problem, though, to my understanding. It
makes the issue less pronounced for sure, but it doesn't go away.

 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-4501.1.patch


 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in unsecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration. 
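Assuming the standard Hadoop configuration mechanism, the workaround would be a fragment like the following in core-site.xml or hive-site.xml (note that the `disable.cache` properties must be `true` to actually turn the FileSystem cache off, so closed instances can be garbage-collected):

```xml
<!-- Sketch of the workaround configuration for HiveServer2's site files. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
<property>
  <name>fs.file.impl.disable.cache</name>
  <value>true</value>
</property>
```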



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777847#comment-13777847
 ] 

Sergey Shelukhin commented on HIVE-5264:


Which message? You mean checkstyle? Wouldn't we at least see the test results?
The previous HiveQA run for this JIRA picked up the phabricator patch and said
"not a patch".

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.03.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-5264) SQL generated by MetaStoreDirectSql.java not compliant with Postgres.

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777859#comment-13777859
 ] 

Brock Noland commented on HIVE-5264:


If you look all the way at the bottom of this:

https://issues.apache.org/jira/browse/HIVE-5264?focusedCommentId=1388&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-1388

you will see what I am referring to. The issue at present is that HiveQA cannot
distinguish a checkstyle failure from a compile failure, since ant exits with 1
in both cases. This means that when trunk fails with checkstyle errors, no tests
will be run by Hive QA.

 SQL generated by MetaStoreDirectSql.java not compliant with Postgres.
 -

 Key: HIVE-5264
 URL: https://issues.apache.org/jira/browse/HIVE-5264
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
 Environment: Ubuntu 12.04
 PostgreSQL 9.1.8
Reporter: Alexander Behm
Assignee: Sergey Shelukhin
 Attachments: D12993.1.patch, D12993.2.patch, HIVE-5264.01.patch, 
 HIVE-5264.02.patch, HIVE-5264.03.patch, HIVE-5264.patch


 Some operations against the Hive Metastore seem broken
 against Postgres.
 For example, when using HiveMetastoreClient.listPartitions()
 the Postgres logs show queries such as:
 2013-09-09 19:10:01 PDT STATEMENT:  select PARTITIONS.PART_ID from
 PARTITIONS  inner join TBLS on PARTITIONS.TBL_ID = TBLS.TBL_ID   inner
 join DBS on TBLS.DB_ID = DBS.DB_ID  where TBLS.TBL_NAME = $1 and
 DBS.NAME = $2 order by PART_NAME asc
 with a somewhat cryptic (but correct) error:
 ERROR:  relation "partitions" does not exist at character 32
 Postgres identifiers are somewhat unusual. Unquoted identifiers are
 interpreted as lower case (there is no Postgres option to change this). Since
 the Metastore table schema uses upper case table names, the correct SQL
 requires escaped identifiers to those tables, i.e.,
 select "PARTITIONS"."PART_ID" from "PARTITIONS"...
 Hive sets metastore.try.direct.sql=true by default, so the above SQL is 
 generated by hive/metastore/MetaStoreDirectSql.java, i.e., this is not a 
 Datanucleus problem.
 When I set metastore.try.direct.sql=false, then the Metastore backed by 
 Postgres works.



[jira] [Commented] (HIVE-5274) HCatalog package renaming backward compatibility follow-up

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777867#comment-13777867
 ] 

Hudson commented on HIVE-5274:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2357 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2357/])
HIVE-5274 : HCatalog package renaming backward compatibility follow-up 
(Sushanth Sowmyan) (khorgath: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526094)
* /hive/trunk/hcatalog/build-support/ant/checkstyle.xml
* 
/hive/trunk/hcatalog/core/src/main/java/org/apache/hcatalog/mapreduce/HCatStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseBaseOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseConstants.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseHCatStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HBaseRevisionManagerUtil.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/HbaseSnapshotRecordReader.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/java/org/apache/hcatalog/hbase/ResultConverter.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseBulkOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseDirectOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseHCatStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHCatHBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHiveHBaseStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestHiveHBaseTableOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestPigHBaseStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/TestSnapshots.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hcatalog/hbase/snapshot/TestZNodeSetUp.java
* /hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/ManyMiniCluster.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/SkeletonHBaseTest.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHBaseInputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHiveHBaseStorageHandler.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestHiveHBaseTableOutputFormat.java
* 
/hive/trunk/hcatalog/storage-handlers/hbase/src/test/org/apache/hive/hcatalog/hbase/TestPigHBaseStorageHandler.java


 HCatalog package renaming backward compatibility follow-up
 --

 Key: HIVE-5274
 URL: https://issues.apache.org/jira/browse/HIVE-5274
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.12.0

 Attachments: HIVE-5274.2.patch, HIVE-5274.3.patch, HIVE-5274.4.patch


 As part of HIVE-4869, the hbase storage handler in hcat was moved to 
 org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it 
 was intended to be deprecated as well.
 However, it imports and uses several org.apache.hive.hcatalog classes. This 
 needs to be changed to use org.apache.hcatalog classes.
 ==
 Note: The above is a complete description of this issue in and of itself; 
 the following gives more detail on the backward-compatibility goal I have (not 
 saying that each of these things is violated): 
 a) People using org.apache.hcatalog packages should continue being able to 
 use that package, and see no difference at compile time or runtime. All code 
 here is considered deprecated, and will be gone by the time hive 0.14 rolls 
 around. Additionally, org.apache.hcatalog should behave as if it were 0.11 
 for all compatibility purposes.
 b) People using org.apache.hive.hcatalog packages should never have an 
 org.apache.hcatalog dependency injected in.
 Thus,
 It is okay for org.apache.hcatalog to use 

[jira] [Commented] (HIVE-5345) Operator::close() leaks Operator::out, holding reference to buffers

2013-09-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777866#comment-13777866
 ] 

Hudson commented on HIVE-5345:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2357 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2357/])
HIVE-5345 : Operator::close() leaks Operator::out, holding reference to buffers 
(Gopal V via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1526100)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java


 Operator::close() leaks Operator::out, holding reference to buffers
 ---

 Key: HIVE-5345
 URL: https://issues.apache.org/jira/browse/HIVE-5345
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
 Environment: Ubuntu, LXC, jdk6-x86_64
Reporter: Gopal V
Assignee: Gopal V
  Labels: memory-leak
 Fix For: 0.13.0

 Attachments: HIVE-5345.01.patch, out-leak.png


 When processing multiple splits on the same operator pipeline, the output 
 collector in Operator has a held reference, which causes issues.
 Operator::close() does not de-reference the OutputCollector object 
 Operator::out held by the object.
 This means that trying to allocate space for a new OutputCollector causes an 
 OOM because the old one is still reachable.
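The fix amounts to dropping the held reference when the operator closes. A minimal plain-Java sketch of that pattern (a simplified stand-in class, not Hive's actual Operator code):

```java
// Stand-in for an operator that holds an output collector reference.
// The leak: without nulling 'out' in close(), the old collector stays
// reachable across splits. The fix de-references it on close.
public class CollectorHolder {
    private Object out;  // stand-in for the held OutputCollector

    public void setOutputCollector(Object collector) {
        this.out = collector;
    }

    public void close() {
        out = null;  // allow the old collector to be garbage-collected
    }

    public boolean holdsCollector() {
        return out != null;
    }
}
```

With the reference cleared on close, allocating a collector for the next split no longer competes with an old one that is unreachable in every way except through the operator.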

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-5359) HiveAuthFactory does not honor the hive configuration passed while creating HiveServer2

2013-09-25 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-5359:
--

 Summary: HiveAuthFactory does not honor the hive configuration 
passed while creating HiveServer2
 Key: HIVE-5359
 URL: https://issues.apache.org/jira/browse/HIVE-5359
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta


When HS2 is brought up, the server can be inited with the given hive config: 
HiveServer2#init(HiveConf hiveConf). That configuration should be applied 
through the entire setup process for all services. However, while starting 
ThriftBinaryCLIService, it creates a new HiveAuthFactory object, whose 
constructor creates a new HiveConf object and ends up using it rather than 
using the HiveConf passed during HS2 bootstrap.
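The shape of the fix is to thread the bootstrap HiveConf through to the factory instead of letting its constructor build a fresh default. A simplified sketch with hypothetical names (not the actual HiveAuthFactory code; only the property name is a real HiveServer2 key):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a configuration object like HiveConf.
class ServerConf {
    final Map<String, String> props = new HashMap<>();
    String get(String key, String dflt) {
        return props.getOrDefault(key, dflt);
    }
}

// Fixed pattern: the factory accepts the server's configuration
// rather than constructing its own defaults.
public class AuthFactorySketch {
    private final ServerConf conf;

    public AuthFactorySketch(ServerConf bootstrapConf) {
        this.conf = bootstrapConf;  // honor the config passed at HS2 init
    }

    public String authType() {
        return conf.get("hive.server2.authentication", "NONE");
    }
}
```

Any setting applied at HiveServer2#init time then reaches the auth layer, instead of being silently replaced by defaults.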

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-09-25 Thread Henry Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Wang updated HIVE-4501:
-

Priority: Critical  (was: Major)

 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Critical
 Attachments: HIVE-4501.1.patch


 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in unsecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and 
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration. 
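The workaround corresponds to setting two standard Hadoop properties (the value true disables the respective FileSystem cache), for example in hive-site.xml:

```xml
<!-- Disable the Hadoop FileSystem object cache as a leak workaround -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
<property>
  <name>fs.file.impl.disable.cache</name>
  <value>true</value>
</property>
```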

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5359) HiveAuthFactory does not honor the hive configuration passed while creating HiveServer2

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5359:
---

Description: When HS2 is brought up, the server can be inited with the 
given hive config: HiveServer2#init(HiveConf hiveConf). That configuration 
should be applied through the entire setup process for all services. However, 
while starting ThriftCLIService, it creates a new HiveAuthFactory object, whose 
constructor creates a new HiveConf object and ends up using it rather than 
using the HiveConf passed during HS2 bootstrap.  (was: When HS2 is brought up, 
the server can be inited with the given hive config: HiveServer2#init(HiveConf 
hiveConf). That configuration should be applied through the entire setup 
process for all services. However, while starting ThriftBinaryCLIService, it 
creates a new HiveAuthFactory object, whose constructor creates a new HiveConf 
object and ends up using it rather than using the HiveConf passed during HS2 
bootstrap.)

 HiveAuthFactory does not honor the hive configuration passed while creating 
 HiveServer2
 ---

 Key: HIVE-5359
 URL: https://issues.apache.org/jira/browse/HIVE-5359
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta

 When HS2 is brought up, the server can be inited with the given hive config: 
 HiveServer2#init(HiveConf hiveConf). That configuration should be applied 
 through the entire setup process for all services. However, while starting 
 ThriftCLIService, it creates a new HiveAuthFactory object, whose constructor 
 creates a new HiveConf object and ends up using it rather than using the 
 HiveConf passed during HS2 bootstrap.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5360:
---

 Summary: fix hcatalog checkstyle issue  introduced in HIVE-5223
 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair


The trunk and 0.12 branch have checkstyle failures right now.

{code}
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
 method call child at indentation level 6 not at correct indentation, 8
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method call child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
 method def child at indentation level 3 not at correct indentation, 4
{code}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5274) HCatalog package renaming backward compatibility follow-up

2013-09-25 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777879#comment-13777879
 ] 

Thejas M Nair commented on HIVE-5274:
-

I am fixing it, will upload a patch soon to HIVE-5360

 HCatalog package renaming backward compatibility follow-up
 --

 Key: HIVE-5274
 URL: https://issues.apache.org/jira/browse/HIVE-5274
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.12.0

 Attachments: HIVE-5274.2.patch, HIVE-5274.3.patch, HIVE-5274.4.patch


 As part of HIVE-4869, the hbase storage handler in hcat was moved to 
 org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it 
 was intended to be deprecated as well.
 However, it imports and uses several org.apache.hive.hcatalog classes. This 
 needs to be changed to use org.apache.hcatalog classes.
 ==
 Note: The above is a complete description of this issue in and of itself; 
 the following gives more detail on the backward-compatibility goal I have (not 
 saying that each of these things is violated): 
 a) People using org.apache.hcatalog packages should continue being able to 
 use that package, and see no difference at compile time or runtime. All code 
 here is considered deprecated, and will be gone by the time hive 0.14 rolls 
 around. Additionally, org.apache.hcatalog should behave as if it were 0.11 
 for all compatibility purposes.
 b) People using org.apache.hive.hcatalog packages should never have an 
 org.apache.hcatalog dependency injected in.
 Thus,
 It is okay for org.apache.hcatalog to use org.apache.hive.hcatalog packages 
 internally (say HCatUtil, for example), as long as any interfaces only expose 
 org.apache.hcatalog.\* For tests that test org.apache.hcatalog.\*, we must be 
 capable of testing it from a pure org.apache.hcatalog.\* world.
 It is never okay for org.apache.hive.hcatalog to use org.apache.hcatalog, 
 even in tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5360:


Status: Patch Available  (was: Open)

 fix hcatalog checkstyle issue  introduced in HIVE-5223
 --

 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5360.1.patch


 The trunk and 0.12 branch have checkstyle failures right now.
 {code}
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
  method call child at indentation level 6 not at correct indentation, 8
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method call child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
  method def child at indentation level 3 not at correct indentation, 4
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5360:


Attachment: HIVE-5360.1.patch

HIVE-5360.1.patch - fixes checkstyle issue.

I think we can forgo the 24hr embargo for this build fix.

I have verified that this fixes the checkstyle issue:
{code}

checkstyle-init:
[mkdir] Created dir: 
...

checkstyle:
 [echo] hcatalog
[checkstyle] Running Checkstyle 5.5 on 613 files

BUILD SUCCESSFUL
{code}


 fix hcatalog checkstyle issue  introduced in HIVE-5223
 --

 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5360.1.patch


 The trunk and 0.12 branch have checkstyle failures right now.
 {code}
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
  method call child at indentation level 6 not at correct indentation, 8
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method call child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
  method def child at indentation level 3 not at correct indentation, 4
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777890#comment-13777890
 ] 

Thejas M Nair commented on HIVE-5360:
-

cc [~ashutoshc] [~brocknoland] [~sushanth]

 fix hcatalog checkstyle issue  introduced in HIVE-5223
 --

 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5360.1.patch


 The trunk and 0.12 branch have checkstyle failures right now.
 {code}
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
  method call child at indentation level 6 not at correct indentation, 8
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method call child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
  method def child at indentation level 3 not at correct indentation, 4
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777896#comment-13777896
 ] 

Brock Noland commented on HIVE-5360:


+1

 fix hcatalog checkstyle issue  introduced in HIVE-5223
 --

 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-5360.1.patch


 The trunk and 0.12 branch have checkstyle failures right now.
 {code}
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
  method call child at indentation level 6 not at correct indentation, 8
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method call child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
  method def child at indentation level 3 not at correct indentation, 4
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5360:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Thank you very much! I have committed this to trunk.

 fix hcatalog checkstyle issue  introduced in HIVE-5223
 --

 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-5360.1.patch


 The trunk and 0.12 branch have checkstyle failures right now.
 {code}
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
  method call child at indentation level 6 not at correct indentation, 8
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method call child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
  method def child at indentation level 3 not at correct indentation, 4
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-5361) PTest2 should allow a different JVM for compilation versus execution

2013-09-25 Thread Brock Noland (JIRA)
Brock Noland created HIVE-5361:
--

 Summary: PTest2 should allow a different JVM for compilation 
versus execution
 Key: HIVE-5361
 URL: https://issues.apache.org/jira/browse/HIVE-5361
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-09-25 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777903#comment-13777903
 ] 

Thejas M Nair commented on HIVE-4501:
-

[~cos] Can you please elaborate? The problem is a memory leak in HS2 caused by 
FileSystem objects being cached in FileSystem.CACHE. Why won't disabling the 
cache stop the leak?


 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Critical
 Attachments: HIVE-4501.1.patch


 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in unsecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and 
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

2013-09-25 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777901#comment-13777901
 ] 

Kousuke Saruta commented on HIVE-5296:
--

The error above may be caused by HIVE-5360. So, I've re-submitted a patch.

 Memory leak: OOM Error after multiple open/closed JDBC connections. 
 

 Key: HIVE-5296
 URL: https://issues.apache.org/jira/browse/HIVE-5296
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0
 Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
Reporter: Douglas
  Labels: hiveserver
 Fix For: 0.12.0

 Attachments: HIVE-5296.1.patch, HIVE-5296.patch, HIVE-5296.patch, 
 HIVE-5296.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481
 However, on inspection of the related patch and my built version of Hive 
 (patch carried forward to 0.12.0), I am still seeing the described behaviour.
 Multiple connections to Hiveserver2, all of which are closed and disposed of 
 properly show the Java heap size to grow extremely quickly. 
 This issue can be recreated using the following code
 {code}
 import java.sql.DriverManager;
 import java.sql.Connection;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 import org.apache.hive.service.cli.HiveSQLException;
 import org.apache.log4j.Logger;
 /*
  * Class which encapsulates the lifecycle of a query or statement.
  * Provides functionality which allows you to create a connection.
  */
 public class HiveClient {
 
     Connection con;
     Logger logger;
     private static String driverName = "org.apache.hive.jdbc.HiveDriver";
     private String db;
 
     public HiveClient(String db) {
         logger = Logger.getLogger(HiveClient.class);
         this.db = db;
 
         try {
             Class.forName(driverName);
         } catch (ClassNotFoundException e) {
             logger.info("Can't find Hive driver");
         }
 
         // GlimmerServer is the reporter's own configuration holder.
         String hiveHost = GlimmerServer.config.getString("hive/host");
         String hivePort = GlimmerServer.config.getString("hive/port");
         String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort
             + "/default";
         logger.info(String.format("Attempting to connect to %s", connectionString));
         try {
             con = DriverManager.getConnection(connectionString, "", "");
         } catch (Exception e) {
             logger.error("Problem instantiating the connection: " + e.getMessage());
         }
     }
 
     public int update(String query) {
         Integer res = 0;
         Statement stmt = null;
         try {
             stmt = con.createStatement();
             String switchdb = "USE " + db;
             logger.info(switchdb);
             stmt.executeUpdate(switchdb);
             logger.info(query);
             res = stmt.executeUpdate(query);
             logger.info("Query passed to server");
             stmt.close();
         } catch (HiveSQLException e) {
             logger.info(String.format("HiveSQLException thrown, this can be valid, "
                 + "but check the error: %s from the query %s", e.toString(), query));
         } catch (SQLException e) {
             logger.error(String.format("Unable to execute query %s. SQLException: %s",
                 query, e));
         } catch (Exception e) {
             logger.error(String.format("Unable to execute query %s. Error: %s",
                 query, e));
         }
 
         if (stmt != null) {
             try {
                 stmt.close();
             } catch (SQLException e) {
                 logger.error("Cannot close the statement, potential memory leak: " + e);
             }
         }
 
         return res;
     }
 
     public void close() {
         if (con != null) {
             try {
                 con.close();
             } catch (SQLException e) {
                 logger.info("Problem closing connection: " + e);
             }
         }
     }
 }
 {code}

[jira] [Updated] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

2013-09-25 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-5296:
-

Attachment: HIVE-5296.1.patch

 Memory leak: OOM Error after multiple open/closed JDBC connections. 
 

 Key: HIVE-5296
 URL: https://issues.apache.org/jira/browse/HIVE-5296
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0
 Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
Reporter: Douglas
  Labels: hiveserver
 Fix For: 0.12.0

 Attachments: HIVE-5296.1.patch, HIVE-5296.patch, HIVE-5296.patch, 
 HIVE-5296.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481
 However, on inspection of the related patch and my built version of Hive 
 (patch carried forward to 0.12.0), I am still seeing the described behaviour.
 Multiple connections to Hiveserver2, all of which are closed and disposed of 
 properly show the Java heap size to grow extremely quickly. 
 This issue can be recreated using the following code
 {code}
 import java.sql.DriverManager;
 import java.sql.Connection;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 import org.apache.hive.service.cli.HiveSQLException;
 import org.apache.log4j.Logger;
 /*
  * Class which encapsulates the lifecycle of a query or statement.
  * Provides functionality which allows you to create a connection.
  */
 public class HiveClient {
 
     Connection con;
     Logger logger;
     private static String driverName = "org.apache.hive.jdbc.HiveDriver";
     private String db;
 
     public HiveClient(String db) {
         logger = Logger.getLogger(HiveClient.class);
         this.db = db;
 
         try {
             Class.forName(driverName);
         } catch (ClassNotFoundException e) {
             logger.info("Can't find Hive driver");
         }
 
         // GlimmerServer is the reporter's own configuration holder.
         String hiveHost = GlimmerServer.config.getString("hive/host");
         String hivePort = GlimmerServer.config.getString("hive/port");
         String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort
             + "/default";
         logger.info(String.format("Attempting to connect to %s", connectionString));
         try {
             con = DriverManager.getConnection(connectionString, "", "");
         } catch (Exception e) {
             logger.error("Problem instantiating the connection: " + e.getMessage());
         }
     }
 
     public int update(String query) {
         Integer res = 0;
         Statement stmt = null;
         try {
             stmt = con.createStatement();
             String switchdb = "USE " + db;
             logger.info(switchdb);
             stmt.executeUpdate(switchdb);
             logger.info(query);
             res = stmt.executeUpdate(query);
             logger.info("Query passed to server");
             stmt.close();
         } catch (HiveSQLException e) {
             logger.info(String.format("HiveSQLException thrown, this can be valid, "
                 + "but check the error: %s from the query %s", e.toString(), query));
         } catch (SQLException e) {
             logger.error(String.format("Unable to execute query %s. SQLException: %s",
                 query, e));
         } catch (Exception e) {
             logger.error(String.format("Unable to execute query %s. Error: %s",
                 query, e));
         }
 
         if (stmt != null) {
             try {
                 stmt.close();
             } catch (SQLException e) {
                 logger.error("Cannot close the statement, potential memory leak: " + e);
             }
         }
 
         return res;
     }
 
     public void close() {
         if (con != null) {
             try {
                 con.close();
             } catch (SQLException e) {
                 logger.info("Problem closing connection: " + e);
             }
         }
     }
 }
 {code}
 and by creating and closing many HiveClient objects. The heap space used by 
 the hiveserver2 RunJar process grows extremely quickly.
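 A minimal sketch of the reproduction pattern described above, using a hypothetical `DummyClient` stand-in so it is self-contained (the real repro would construct and close `HiveClient` against a live HiveServer2 instead): many short-lived client objects, each opened and then closed properly, after which the server-side heap should have been released but, under this bug, keeps growing.

```java
public class ChurnSketch {
    /**
     * Hypothetical stand-in for HiveClient: "opens" on construction
     * (the real client calls DriverManager.getConnection) and counts closes.
     */
    static class DummyClient implements AutoCloseable {
        static int opened = 0;
        static int closed = 0;
        DummyClient() { opened++; }
        @Override public void close() { closed++; }
    }

    public static void main(String[] args) {
        // The reproduction pattern: create and close many clients in a loop.
        // Every client is closed, yet with HIVE-5296 the HiveServer2 heap still grows.
        for (int i = 0; i < 1000; i++) {
            try (DummyClient c = new DummyClient()) {
                // the real repro would call c.update("SHOW TABLES") here
            }
        }
        System.out.println("opened=" + DummyClient.opened + " closed=" + DummyClient.closed);
    }
}
```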

[jira] [Updated] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5295:
---

Status: Open  (was: Patch Available)

 HiveConnection#configureConnection tries to execute statement even after it 
 is closed
 -

 Key: HIVE-5295
 URL: https://issues.apache.org/jira/browse/HIVE-5295
 Project: Hive
  Issue Type: Bug
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D12957.1.patch, D12957.2.patch, D12957.3.patch, 
 HIVE-5295.D12957.3.patch


 HiveConnection#configureConnection tries to execute statement even after it 
 is closed. For remote JDBC client, it tries to set the conf var using 'set 
 foo=bar' by calling HiveStatement.execute for each conf var pair, but closes 
 the statement after the 1st iteration through the conf var pairs.
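 The corrected control flow described above can be sketched with a stand-in for java.sql.Statement (the `StubStatement` class and method names below are illustrative, not Hive's actual API): the statement must be closed once, after every conf-var pair has been executed, not inside the loop.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfigureConnectionSketch {
    /** Minimal stand-in for java.sql.Statement, just enough to show the fix. */
    static class StubStatement {
        boolean closed = false;
        int executed = 0;
        void execute(String sql) {
            if (closed) {
                throw new IllegalStateException("statement already closed: " + sql);
            }
            executed++;
        }
        void close() { closed = true; }
    }

    /** Correct shape: run 'set foo=bar' for every pair, close once at the end. */
    static int applyConfVars(StubStatement stmt, Map<String, String> confVars) {
        try {
            for (Map.Entry<String, String> e : confVars.entrySet()) {
                stmt.execute("set " + e.getKey() + "=" + e.getValue());
                // the buggy version closed the statement here, inside the loop,
                // so the second conf var hit an already-closed statement
            }
        } finally {
            stmt.close();
        }
        return stmt.executed;
    }

    public static void main(String[] args) {
        Map<String, String> vars = new LinkedHashMap<>();
        vars.put("hive.exec.parallel", "true");
        vars.put("mapred.reduce.tasks", "4");
        System.out.println("executed=" + applyConfVars(new StubStatement(), vars));
    }
}
```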

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4763) add support for thrift over http transport in HS2

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-4763:
---

Status: Patch Available  (was: Open)

 add support for thrift over http transport in HS2
 -

 Key: HIVE-4763
 URL: https://issues.apache.org/jira/browse/HIVE-4763
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: HIVE-4763.1.patch, HIVE-4763.2.patch, 
 HIVE-4763.D12855.1.patch, HIVE-4763.D12951.1.patch, HIVE-4763.D12951.2.patch


 Subtask for adding support for http transport mode for thrift api in hive 
 server2.
 Support for the different authentication modes will be part of another 
 subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4763) add support for thrift over http transport in HS2

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-4763:
---

Status: Open  (was: Patch Available)

 add support for thrift over http transport in HS2
 -

 Key: HIVE-4763
 URL: https://issues.apache.org/jira/browse/HIVE-4763
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: HIVE-4763.1.patch, HIVE-4763.2.patch, 
 HIVE-4763.D12855.1.patch, HIVE-4763.D12951.1.patch, HIVE-4763.D12951.2.patch


 Subtask for adding support for http transport mode for thrift api in hive 
 server2.
 Support for the different authentication modes will be part of another 
 subtask.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5295) HiveConnection#configureConnection tries to execute statement even after it is closed

2013-09-25 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-5295:
---

Status: Patch Available  (was: Open)

 HiveConnection#configureConnection tries to execute statement even after it 
 is closed
 -

 Key: HIVE-5295
 URL: https://issues.apache.org/jira/browse/HIVE-5295
 Project: Hive
  Issue Type: Bug
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D12957.1.patch, D12957.2.patch, D12957.3.patch, 
 HIVE-5295.D12957.3.patch


 HiveConnection#configureConnection tries to execute statement even after it 
 is closed. For remote JDBC client, it tries to set the conf var using 'set 
 foo=bar' by calling HiveStatement.execute for each conf var pair, but closes 
 the statement after the 1st iteration through the conf var pairs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13777912#comment-13777912
 ] 

Ashutosh Chauhan commented on HIVE-5360:


Thanks [~thejas] for quick fix. Apologies to others for breaking build. 

 fix hcatalog checkstyle issue  introduced in HIVE-5223
 --

 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-5360.1.patch


 The trunk and 0.12 branch have checkstyle failures right now.
 {code}
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
  method call child at indentation level 6 not at correct indentation, 8
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method call child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
  method def child at indentation level 3 not at correct indentation, 4
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5360) fix hcatalog checkstyle issue introduced in HIVE-5223

2013-09-25 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5360:


Description: 
The trunk has checkstyle failures right now.

{code}
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
 method call child at indentation level 6 not at correct indentation, 8
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method call child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
 method def child at indentation level 3 not at correct indentation, 4
{code}


  was:
The trunk and 0.12 branch have checkstyle failures right now.

{code}
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
 method def child at indentation level 6 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
 method call child at indentation level 6 not at correct indentation, 8
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method call child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
 method def child at indentation level 3 not at correct indentation, 4
[checkstyle] 
/home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:488:
 method def child at indentation level 3 not at correct indentation, 4
{code}



 fix hcatalog checkstyle issue  introduced in HIVE-5223
 --

 Key: HIVE-5360
 URL: https://issues.apache.org/jira/browse/HIVE-5360
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-5360.1.patch


 The trunk has checkstyle failures right now.
 {code}
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:453:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:454:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:455:
  method def child at indentation level 6 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hcatalog/common/HCatUtil.java:456:
  method call child at indentation level 6 not at correct indentation, 8
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:486:
  method def child at indentation level 3 not at correct indentation, 4
 [checkstyle] 
 /home/hortonth/hive_apache/hcatalog/core/src/main/java/org/apache/hive/hcatalog/common/HCatUtil.java:487:
  method call child at indentation level 3 
