Re: Review Request: when writing data into filesystem from queries , the output files could contain a line of column names

2013-05-08 Thread fangkun cao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10474/
---

(Updated May 8, 2013, 6:53 a.m.)


Review request for hive.


Changes
---

Users can define separators in the _metadata file.


Description
---

https://issues.apache.org/jira/browse/HIVE-4346


This addresses bug HIVE-4346.
https://issues.apache.org/jira/browse/HIVE-4346


Diffs (updated)
-

  
http://svn.apache.org/repos/asf/hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 1480161
  http://svn.apache.org/repos/asf/hive/trunk/conf/hive-default.xml.template 1480161
  http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java 1480161
  http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 1480161
  http://svn.apache.org/repos/asf/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/LoadFileDesc.java 1480161

Diff: https://reviews.apache.org/r/10474/diff/


Testing
---


Thanks,

fangkun cao



[jira] [Updated] (HIVE-4346) when writing data into filesystem from queries ,the output files could contain a line of column names

2013-05-08 Thread caofangkun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caofangkun updated HIVE-4346:
-

Attachment: HIVE-4346-3.patch

 when writing data into filesystem from queries ,the output files could 
 contain a line of column names 
 --

 Key: HIVE-4346
 URL: https://issues.apache.org/jira/browse/HIVE-4346
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: caofangkun
Priority: Minor
 Attachments: HIVE-4346-1.patch, HIVE-4346-3.patch


 For example :
 hive> desc src;
 key string
 value string
 hive> select * from src;
 1 10
 2 20
 hive> set hive.output.markschema=true;
 hive> insert overwrite local directory './test1' select * from src;
 hive> !ls -l './test1';
 ./test1/_metadata
 ./test1/00_0
 hive> !cat './test1/_metadata';
 key^Avalue
 hive> !cat './test1/00_0';
 1^A10
 2^A20
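The example above can be consumed downstream by pairing the `^A`-delimited column names in `_metadata` with each `^A`-delimited data row. A minimal standalone sketch (not Hive code; it assumes the default `\u0001` field separator, which the patch also lets users override in `_metadata`):

```java
import java.util.Arrays;
import java.util.List;

public class MetadataHeader {
    // Split a header or data line on Hive's default ^A (\u0001) separator.
    static List<String> splitCtrlA(String line) {
        return Arrays.asList(line.split("\u0001", -1));
    }

    public static void main(String[] args) {
        List<String> cols = splitCtrlA("key\u0001value"); // from _metadata
        List<String> row  = splitCtrlA("1\u000110");      // from the output file
        for (int i = 0; i < cols.size(); i++) {
            System.out.println(cols.get(i) + "=" + row.get(i));
        }
    }
}
```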

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4115) Introduce cube abstraction in hive

2013-05-08 Thread Amareshwari Sriramadasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amareshwari Sriramadasu updated HIVE-4115:
--

Attachment: cube-design-2.pdf

Attaching the updated design doc

 Introduce cube abstraction in hive
 --

 Key: HIVE-4115
 URL: https://issues.apache.org/jira/browse/HIVE-4115
 Project: Hive
  Issue Type: New Feature
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Attachments: cube-design-2.pdf, cube-design.docx


 We would like to define a cube abstraction so that user can query at cube 
 layer and do not know anything about storage and rollups. 
 Will describe the model more in following comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-hadoop2 - Build # 187 - Still Failing

2013-05-08 Thread Apache Jenkins Server
Changes for Build #163

Changes for Build #164

Changes for Build #165
[hashutosh] HIVE-4278 : HCat needs to get current Hive jars instead of pulling 
them from maven repo (Sushanth Sowmyan via Ashutosh Chauhan)


Changes for Build #166
[khorgath] HCATALOG-621 bin/hcat should include hbase jar and dependencies in 
the classpath (Nick Dimiduk via Sushanth Sowmyan)

[omalley] HIVE-4178 : ORC fails with files with different numbers of columns


Changes for Build #167
[hashutosh] HIVE-4356 :  remove duplicate impersonation parameters for 
hiveserver2 (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4318 : OperatorHooks hit performance even when not used 
(Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4378 : Counters hit performance even when not used (Gunther 
Hagleitner via Ashutosh Chauhan)

[omalley] HIVE-4189 : ORC fails with String column that ends in lots of nulls 
(Kevin
Wilfong)


Changes for Build #168
[hashutosh] HIVE-4248 : Implement a memory manager for ORC. Missed one test file. 
(Owen Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-4248 : Implement a memory manager for ORC (Owen Omalley via 
Ashutosh Chauhan)

[hashutosh] HIVE-4103 : Remove System.gc() call from the map-join local-task 
loop (Gopal V via Ashutosh Chauhan)

[hashutosh] HIVE-4304 : Remove unused builtins and pdk submodules (Travis 
Crawford via Ashutosh Chauhan)

[namit] HIVE-4310 optimize count(distinct) with hive.map.groupby.sorted
(Namit Jain via Gang Tim Liu)


Changes for Build #169
[hashutosh] HIVE-4333 : most windowing tests fail on hadoop 2 (Harish Butani 
via Ashutosh Chauhan)

[namit] HIVE-4342 NPE for query involving UNION ALL with nested JOIN and UNION 
ALL
(Navis via namit)

[hashutosh] HIVE-4364 : beeline always exits with 0 status, should exit with 
non-zero status on error (Rob Weltman via Ashutosh Chauhan)

[hashutosh] HIVE-4130 : Bring the Lead/Lag UDFs interface in line with Lead/Lag 
UDAFs (Harish Butani via Ashutosh Chauhan)


Changes for Build #170
[hashutosh] HIVE-4295 : Lateral view makes invalid result if CP is disabled 
(Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4365 : wrong result in left semi join (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3861 : Upgrade hbase dependency to 0.94 (Gunther Hagleitner 
via Ashutosh Chauhan)

[namit] HIVE-4371 some issue with merging join trees
(Navis via namit)


Changes for Build #171
[hashutosh] HIVE-2379 : Hive/HBase integration could be improved (Navis via 
Ashutosh Chauhan)


Changes for Build #172
[hashutosh] HIVE-4394 : test leadlag.q fails (Ashutosh Chauhan)

[namit] HIVE-4018 MapJoin failing with Distributed Cache error
(Amareshwari Sriramadasu via Namit Jain)


Changes for Build #173
[namit] HIVE-4300 ant thriftif generated code that is checkedin is not 
up-to-date
(Roshan Naik via namit)

[hashutosh] HIVE-3891 : physical optimizer changes for auto sort-merge join 
(Namit Jain via Ashutosh Chauhan)

[namit] HIVE-4393 Make the deleteData flag accessable from DropTable/Partition 
events
(Morgan Philips via namit)


Changes for Build #174
[khorgath] HIVE-4419 : webhcat - support ${WEBHCAT_PREFIX}/conf/ as config 
directory (Thejas M Nair via Sushanth Sowmyan)

[namit] HIVE-4181 Star argument without table alias for UDTF is not working
(Navis via namit)

[hashutosh] HIVE-4407 : TestHCatStorer.testStoreFuncAllSimpleTypes fails 
because of null case difference (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4369 : Many new failures on hadoop 2 (Vikram Dixit via 
Ashutosh Chauhan)


Changes for Build #175
[hashutosh] HIVE-4358 : Check for Map side processing in PTFOp is no longer 
valid (Harish Butani via Ashutosh Chauhan)

[namit] HIVE-4409 Prevent incompatible column type changes
(Dilip Joseph via namit)

[namit] HIVE-4095 Add exchange partition in Hive
(Dheeraj Kumar Singh via namit)

[namit] HIVE-4005 Column truncation
(Kevin Wilfong via namit)

[namit] HIVE-3952 merge map-job followed by map-reduce job
(Vinod Kumar Vavilapalli via namit)

[hashutosh] HIVE-4412 : PTFDesc tries serialize transient fields like OIs, etc. 
(Navis via Ashutosh Chauhan)


Changes for Build #176
[hashutosh] HIVE-3708 : Add mapreduce workflow information to job configuration 
(Billie Rinaldi via Ashutosh Chauhan)

[namit] HIVE-4424 MetaStoreUtils.java.orig checked in mistakenly by HIVE-4409
(Namit Jain)


Changes for Build #177
[navis] HIVE-4068 Size of aggregation buffer which uses non-primitive type is 
not estimated correctly (Navis)

[khorgath] HIVE-4420 : HCatalog unit tests stop after a failure (Alan Gates via 
Sushanth Sowmyan)


Changes for Build #178

Changes for Build #179
[hashutosh] HIVE-4423 : Improve RCFile::sync(long) 10x (Gopal V via Ashutosh 
Chauhan)

[hashutosh] HIVE-4398 : HS2 Resource leak: operation handles not cleaned when 
originating session is closed (Ashish Vaidya via Ashutosh Chauhan)

[hashutosh] HIVE-4019 : Ability to create and drop temporary partition function 
(Brock Noland via Ashutosh Chauhan)


Changes for Build #180

[jira] [Commented] (HIVE-3983) Select on table with hbase storage handler fails with an SASL error

2013-05-08 Thread Alexander Alten-Lorenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651833#comment-13651833
 ] 

Alexander Alten-Lorenz commented on HIVE-3983:
--

Add the hbase-config into hive-site.xml:

<property>
  <name>hive.aux.jars.path</name>
  <value>file:///etc/hbase/conf/</value>
</property>



 Select on table with hbase storage handler fails with an SASL error
 ---

 Key: HIVE-3983
 URL: https://issues.apache.org/jira/browse/HIVE-3983
 Project: Hive
  Issue Type: Bug
 Environment: hive-0.10
 hbase-0.94.5.5
 hadoop-0.23.3.1
 hcatalog-0.5
Reporter: Arup Malakar

 The table is created using the following query:
 {code}
 CREATE TABLE hbase_table_1(key int, value string)
 STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
 WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
 TBLPROPERTIES ("hbase.table.name" = "xyz");
 {code}
 Doing a select on the table launches a map-reduce job. But the job fails with 
 the following error:
 {code}
 2013-02-02 01:31:07,500 FATAL [IPC Server handler 3 on 40118] 
 org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
 attempt_1348093718159_1501_m_00_0 - exited : java.io.IOException: 
 java.lang.RuntimeException: SASL authentication failed. The most likely cause 
 is missing or invalid credentials. Consider 'kinit'.
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
   at 
 org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:243)
   at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:522)
   at 
 org.apache.hadoop.mapred.MapTask$TrackedRecordReader.init(MapTask.java:160)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:381)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
 Caused by: java.lang.RuntimeException: SASL authentication failed. The most 
 likely cause is missing or invalid credentials. Consider 'kinit'.
   at 
 org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:242)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
   at org.apache.hadoop.hbase.security.User.call(User.java:590)
   at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
   at 
 org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:444)
   at 
 org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.handleSaslConnectionFailure(SecureClient.java:203)
   at 
 org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:291)
   at 
 org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
   at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:104)
   at $Proxy12.getProtocolVersion(Unknown Source)
   at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine.getProxy(SecureRpcEngine.java:146)
   at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1335)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1291)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1278)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:987)
   at 
 

[jira] [Commented] (HIVE-4421) Improve memory usage by ORC dictionaries

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651881#comment-13651881
 ] 

Hudson commented on HIVE-4421:
--

Integrated in Hive-trunk-h0.21 #2091 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2091/])
HIVE-4421 : Improve memory usage by ORC dictionaries (Owen Omalley via 
Ashutosh Chauhan) (Revision 1480159)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480159
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicByteArray.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicIntArray.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/MemoryManager.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OutStream.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/PositionedOutputStream.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RedBlackTree.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/StringRedBlackTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestMemoryManager.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestStringRedBlackTree.java
* /hive/trunk/ql/src/test/resources/orc-file-dump.out


 Improve memory usage by ORC dictionaries
 

 Key: HIVE-4421
 URL: https://issues.apache.org/jira/browse/HIVE-4421
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.12.0

 Attachments: HIVE-4421.D10545.1.patch, HIVE-4421.D10545.2.patch, 
 HIVE-4421.D10545.3.patch, HIVE-4421.D10545.4.patch


 Currently, for tables with many string columns, it is possible to 
 significantly underestimate the memory used by the ORC dictionaries and cause 
 the query to run out of memory in the task. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651879#comment-13651879
 ] 

Hudson commented on HIVE-4392:
--

Integrated in Hive-trunk-h0.21 #2091 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2091/])
HIVE-4392 : Illogical InvalidObjectException throwed when use mulit 
aggregate functions with star columns  (Navis via Ashutosh Chauhan) (Revision 
1480161)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480161
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/ctas_colname.q
* /hive/trunk/ql/src/test/results/clientpositive/ctas_colname.q.out


 Illogical InvalidObjectException throwed when use mulit aggregate functions 
 with star columns 
 --

 Key: HIVE-4392
 URL: https://issues.apache.org/jira/browse/HIVE-4392
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
 Environment: Apache Hadoop 0.20.1
 Apache Hive Trunk
Reporter: caofangkun
Assignee: Navis
Priority: Minor
 Fix For: 0.12.0

 Attachments: HIVE-4392.D10431.1.patch, HIVE-4392.D10431.2.patch, 
 HIVE-4392.D10431.3.patch, HIVE-4392.D10431.4.patch, HIVE-4392.D10431.5.patch


 For Example:
 hive (default)> create table liza_1 as
select *, sum(key), sum(value)
from new_src;
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks determined at compile time: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
 Starting Job = job_201304191025_0003, Tracking URL = 
 http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
 Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
 job_201304191025_0003
 Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 
 1
 2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
 2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
 2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201304191025_0003
 Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
 FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
 valid object name)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 MapReduce Jobs Launched: 
 Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
 Total MapReduce CPU Time Spent: 0 msec
 hive (default)> create table liza_1 as
select *, sum(key), sum(value)
from new_src
group by key, value;
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks not specified. Estimated from input data size: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
 Starting Job = job_201304191025_0004, Tracking URL = 
 http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
 Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
 job_201304191025_0004
 Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 
 1
 2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
 2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
 2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201304191025_0004
 Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
 FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
 valid object name)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 MapReduce Jobs Launched: 
 Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
 Total MapReduce CPU Time Spent: 0 msec
 But the following two queries work:
 hive (default)> create table liza_1 as select * from new_src;
 Total MapReduce jobs = 3
 Launching Job 1 out of 3
 Number of reduce tasks is set to 0 since there's no reduce operator
 

[jira] [Commented] (HIVE-4516) Fix concurrency bug in serde/src/java/org/apache/hadoop/hive/serde2/io/TimestampWritable.java

2013-05-08 Thread Jon Hartlaub (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13651986#comment-13651986
 ] 

Jon Hartlaub commented on HIVE-4516:


I agree - HIVE-4220 may be a manifestation, although I think this ThreadLocal 
based patch or one using commons.lang FastDateFormat (which is thread-safe) 
would be preferable over the patch submitted in HIVE-4220.

 Fix concurrency bug in 
 serde/src/java/org/apache/hadoop/hive/serde2/io/TimestampWritable.java
 -

 Key: HIVE-4516
 URL: https://issues.apache.org/jira/browse/HIVE-4516
 Project: Hive
  Issue Type: Bug
Reporter: Jon Hartlaub
 Attachments: TimestampWritable.java.patch


 A patch for concurrent use of TimestampWritable which occurs in a 
 multithreaded scenario (as found in AmpLab Shark).  A static SimpleDateFormat 
 (not ThreadSafe) is used by TimestampWritable in CTAS DDL statements where it 
 manifests as data corruption when used in a concurrent environment.
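The ThreadLocal-based approach mentioned in the comment can be sketched as follows. This is an illustrative standalone example, not the attached patch: each thread gets its own `SimpleDateFormat` instance instead of sharing a static one, since `SimpleDateFormat` is not thread-safe.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class ThreadSafeFormat {
    // One SimpleDateFormat per thread; sharing a static instance across
    // threads is what corrupts data in the concurrent scenario above.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        new ThreadLocal<SimpleDateFormat>() {
            @Override protected SimpleDateFormat initialValue() {
                return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            }
        };

    public static String format(Date d) {
        return FORMAT.get().format(d);
    }
}
```

The alternative raised in the comment, commons-lang `FastDateFormat`, avoids the per-thread instances entirely because it is immutable and thread-safe by design.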

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4421) Improve memory usage by ORC dictionaries

2013-05-08 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-4421:


Fix Version/s: 0.11.0

 Improve memory usage by ORC dictionaries
 

 Key: HIVE-4421
 URL: https://issues.apache.org/jira/browse/HIVE-4421
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.11.0, 0.12.0

 Attachments: HIVE-4421.D10545.1.patch, HIVE-4421.D10545.2.patch, 
 HIVE-4421.D10545.3.patch, HIVE-4421.D10545.4.patch


 Currently, for tables with many string columns, it is possible to 
 significantly underestimate the memory used by the ORC dictionaries and cause 
 the query to run out of memory in the task. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4494) ORC map columns get class cast exception in some context

2013-05-08 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4494:
--

Attachment: HIVE-4494.D10653.2.patch

omalley updated the revision HIVE-4494 [jira] ORC map columns get class cast 
exception in some context.

  Added the equals methods so that the schema conversion is handled better
  as suggested by Pamela.

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D10653

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D10653?vs=33351&id=33387#toc

AFFECTED FILES
  data/files/orc_create.txt
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcUnion.java
  ql/src/test/queries/clientpositive/orc_create.q
  ql/src/test/results/clientpositive/orc_create.q.out

To: JIRA, omalley
Cc: pamelavagata


 ORC map columns get class cast exception in some context
 

 Key: HIVE-4494
 URL: https://issues.apache.org/jira/browse/HIVE-4494
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: HIVE-4494.D10653.1.patch, HIVE-4494.D10653.2.patch


 Setting up the test case like:
 {quote}
 create table map_text (
   name string,
   m map<string,string>
 ) row format delimited
 fields terminated by '|'
 collection items terminated by ','
 map keys terminated by ':';
 create table map_orc (
   name string,
   m map<string,string>
 ) stored as orc;
 cat map.txt
 name1|key11:value11,key12:value12,key13:value13
 name2|key21:value21,key22:value22,key23:value23
 name3|key31:value31,key32:value32,key33:value33
 load data local   inpath 'map.txt' into table map_text;
 insert overwrite table map_orc select * from map_text;
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4500) HS2 holding too many file handles of hive_job_log_hive_*.txt files

2013-05-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652001#comment-13652001
 ] 

Thejas M Nair commented on HIVE-4500:
-

bq. I dislike the use of try finally in Java
Which part of the patch are you referring to ? Is it the change in 
HiveSessionImpl.java ?


 HS2 holding too many file handles of hive_job_log_hive_*.txt files
 --

 Key: HIVE-4500
 URL: https://issues.apache.org/jira/browse/HIVE-4500
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-4500-2.patch, HIVE-4500-3.patch, HIVE-4500.patch


 In the hiveserver2 setup used for testing, we see that it has 2444 files open 
 and of them 2152 are /tmp/hive/hive_job_log_hive_*.txt files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4500) HS2 holding too many file handles of hive_job_log_hive_*.txt files

2013-05-08 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652007#comment-13652007
 ] 

Brock Noland commented on HIVE-4500:


bq. I dislike the use of try finally in Java, because it either causes the code 
to ignore exceptions in the normal case or hide exceptions in the exception 
case.

This is not true as long as the finally block does not throw an exception of 
its own. Thus finally blocks should not throw an Exception.

 HS2 holding too many file handles of hive_job_log_hive_*.txt files
 --

 Key: HIVE-4500
 URL: https://issues.apache.org/jira/browse/HIVE-4500
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-4500-2.patch, HIVE-4500-3.patch, HIVE-4500.patch


 In the hiveserver2 setup used for testing, we see that it has 2444 files open 
 and of them 2152 are /tmp/hive/hive_job_log_hive_*.txt files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4523) round() function with specified decimal places not consistent with mysql

2013-05-08 Thread Fred Desing (JIRA)
Fred Desing created HIVE-4523:
-

 Summary: round() function with specified decimal places not 
consistent with mysql 
 Key: HIVE-4523
 URL: https://issues.apache.org/jira/browse/HIVE-4523
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Affects Versions: 0.7.1
Reporter: Fred Desing
Priority: Minor


// hive
hive> select round(150.00, 2) from temp limit 1;
150.0

hive> select round(150, 2) from temp limit 1;
150.0

// mysql
mysql> select round(150.00, 2) from DUAL limit 1;
round(150.00, 2)
150.00

mysql> select round(150, 2) from DUAL limit 1;
round(150, 2)
150
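The difference comes down to numeric representation: a double cannot carry trailing zeros, while decimal arithmetic keeps the requested scale. A standalone illustration (not Hive's UDF code) of the two behaviors:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundScale {
    public static void main(String[] args) {
        // Double-based rounding: the scale is lost, as in Hive's output.
        double d = Math.round(150.00 * 100) / 100.0;
        System.out.println(d); // 150.0

        // Decimal rounding: the two-digit scale is preserved, as in MySQL.
        BigDecimal b = new BigDecimal("150.00").setScale(2, RoundingMode.HALF_UP);
        System.out.println(b); // 150.00
    }
}
```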


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3768) Document JDBC client configuration for secure clusters

2013-05-08 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-3768:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

This was put into the wiki.

 Document JDBC client configuration for secure clusters
 --

 Key: HIVE-3768
 URL: https://issues.apache.org/jira/browse/HIVE-3768
 Project: Hive
  Issue Type: Bug
  Components: Documentation
Affects Versions: 0.9.0
Reporter: Lefty Leverenz
Assignee: Lefty Leverenz
  Labels: documentation
 Attachments: HIVE-3768.1.patch, HIVE-3768.2.patch


 Document the JDBC client configuration required for starting Hive on a secure 
 cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4523) round() function with specified decimal places not consistent with mysql

2013-05-08 Thread Fred Desing (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fred Desing updated HIVE-4523:
--

Description: 
// hive
hive> select round(150.000, 2) from temp limit 1;
150.0

hive> select round(150, 2) from temp limit 1;
150.0

// mysql
mysql> select round(150.000, 2) from DUAL limit 1;
round(150.000, 2)
150.00

mysql> select round(150, 2) from DUAL limit 1;
round(150, 2)
150

http://dev.mysql.com/doc/refman/5.1/en/mathematical-functions.html#function_round

  was:
// hive
hive> select round(150.00, 2) from temp limit 1;
150.0

hive> select round(150, 2) from temp limit 1;
150.0

// mysql
mysql> select round(150.00, 2) from DUAL limit 1;
round(150.00, 2)
150.00

mysql> select round(150, 2) from DUAL limit 1;
round(150, 2)
150



 round() function with specified decimal places not consistent with mysql 
 -

 Key: HIVE-4523
 URL: https://issues.apache.org/jira/browse/HIVE-4523
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Affects Versions: 0.7.1
Reporter: Fred Desing
Priority: Minor

 // hive
 hive> select round(150.000, 2) from temp limit 1;
 150.0
 hive> select round(150, 2) from temp limit 1;
 150.0
 // mysql
 mysql> select round(150.000, 2) from DUAL limit 1;
 round(150.000, 2)
 150.00
 mysql> select round(150, 2) from DUAL limit 1;
 round(150, 2)
 150
 http://dev.mysql.com/doc/refman/5.1/en/mathematical-functions.html#function_round

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4500) HS2 holding too many file handles of hive_job_log_hive_*.txt files

2013-05-08 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652085#comment-13652085
 ] 

Owen O'Malley commented on HIVE-4500:
-

{quote}
Which part of the patch are you referring to ? Is it the change in 
HiveSessionImpl.java ?
{quote}

I'm sorry, I was assuming that since you were calling IOUtils.cleanup, which 
throws away exceptions, it was in a finally block outside of the patch.

{quote}
This is not true as long as the finally block does not throw an exception of 
its own. Thus finally blocks should not throw an Exception.
{quote}

If the finally block calls a method like IOUtils.cleanup that discards 
exceptions, it will miss exceptions on the close. For example, if the code looks 
like:

{code}
OutputStream stream = null;
try {
  ...
} finally {
  IOUtils.cleanup(stream);
}
{code}

Then in the case where the close, which likely includes the last write to the 
stream, fails, that exception will be lost. The right code is:

{code}
OutputStream stream = null;
try {
  ...
  stream.close();
} catch (Throwable th) {
  IOUtils.cleanup(stream);
  throw new IOException("something", th);
}
{code}


 HS2 holding too many file handles of hive_job_log_hive_*.txt files
 --

 Key: HIVE-4500
 URL: https://issues.apache.org/jira/browse/HIVE-4500
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-4500-2.patch, HIVE-4500-3.patch, HIVE-4500.patch


 In the hiveserver2 setup used for testing, we see that it has 2444 files open 
 and of them 2152 are /tmp/hive/hive_job_log_hive_*.txt files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4495) Implement vectorized string substr

2013-05-08 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-4495:
--

Assignee: Eric Hanson  (was: Teddy Choi)

 Implement vectorized string substr
 --

 Key: HIVE-4495
 URL: https://issues.apache.org/jira/browse/HIVE-4495
 Project: Hive
  Issue Type: Sub-task
Reporter: Timothy Chen
Assignee: Eric Hanson



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4220) TimestampWritable.toString throws array index exception sometimes

2013-05-08 Thread Mikhail Bautin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652133#comment-13652133
 ] 

Mikhail Bautin commented on HIVE-4220:
--

[~navis]: I think HIVE-4516 solves the same problem but in a somewhat simpler way 
(using ThreadLocal). Could you please take a look at the patch there and let us 
know what you think?

[~ashutoshc]: I think it is reasonable to assume at this point that Hive 
primitives, especially as low-level as TimestampWritable, have to be 
thread-safe. This is not only required by third-party low-latency query 
processing systems such as AmpLab's Shark, but also by the effort in the Hive 
community itself to speed up query processing (e.g. 
http://hortonworks.com/blog/introducing-tez-faster-hadoop-processing/) that I 
believe will inevitably require keeping pre-existing multi-threaded executor 
JVMs around.

 TimestampWritable.toString throws array index exception sometimes
 -

 Key: HIVE-4220
 URL: https://issues.apache.org/jira/browse/HIVE-4220
 Project: Hive
  Issue Type: Bug
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-4220.D9669.1.patch


 {noformat}
 org.apache.hive.service.cli.HiveSQLException: java.io.IOException: 
 java.lang.ArrayIndexOutOfBoundsException: 45
 at 
 org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:215)
 at 
 org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:170)
 at 
 org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:288)
 at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:348)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1553)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1538)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 45
 at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:194)
 at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1449)
 at 
 org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:193)
 ... 11 more
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 45
 at 
 sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:436)
 at 
 java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2081)
 at 
 java.util.GregorianCalendar.computeFields(GregorianCalendar.java:1996)
 at java.util.Calendar.setTimeInMillis(Calendar.java:1110)
 at java.util.Calendar.setTime(Calendar.java:1076)
 at java.text.SimpleDateFormat.format(SimpleDateFormat.java:875)
 at java.text.SimpleDateFormat.format(SimpleDateFormat.java:868)
 at java.text.DateFormat.format(DateFormat.java:316)
 at 
 org.apache.hadoop.hive.serde2.io.TimestampWritable.toString(TimestampWritable.java:327)
 at 
 org.apache.hadoop.hive.serde2.lazy.LazyTimestamp.writeUTF8(LazyTimestamp.java:95)
 at 
 org.apache.hadoop.hive.serde2.lazy.LazyUtils.writePrimitiveUTF8(LazyUtils.java:234)
 at 
 org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serialize(LazySimpleSerDe.java:427)
 at 
 org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serializeField(LazySimpleSerDe.java:381)
 at 
 org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.serialize(LazySimpleSerDe.java:365)
 at 
 org.apache.hadoop.hive.ql.exec.ListSinkOperator.processOp(ListSinkOperator.java:96)
 at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:487)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:821)
 at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
 at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:487)
 at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:821)
 at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:90)
 at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:487)
 at 
 

[jira] [Updated] (HIVE-4500) HS2 holding too many file handles of hive_job_log_hive_*.txt files

2013-05-08 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-4500:


   Resolution: Fixed
Fix Version/s: 0.11.0
   Status: Resolved  (was: Patch Available)

I just committed this to branch-0.11 and trunk. Thanks, Alan!

 HS2 holding too many file handles of hive_job_log_hive_*.txt files
 --

 Key: HIVE-4500
 URL: https://issues.apache.org/jira/browse/HIVE-4500
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.11.0

 Attachments: HIVE-4500-2.patch, HIVE-4500-3.patch, HIVE-4500.patch


 In the hiveserver2 setup used for testing, we see that it has 2444 files open 
 and of them 2152 are /tmp/hive/hive_job_log_hive_*.txt files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4500) HS2 holding too many file handles of hive_job_log_hive_*.txt files

2013-05-08 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652195#comment-13652195
 ] 

Brock Noland commented on HIVE-4500:


bq. If the finally block calls a method like IOUtils.cleanup that discards 
exceptions, it will miss exceptions on the close. For example, if the code looks 
like:

That's an incorrect idiom and shouldn't damn all use of finally.

Regarding your example:
{code}
OutputStream stream = null;
try {
  ...
  stream.close();
} catch (Throwable th) {
  IOUtils.cleanup(stream);
  throw new IOException("something", th);
}
{code}

Granted, that was an example, but a Throwable should never be blindly converted 
to an IOException. Hadoop is too often guilty of this. I believe the following 
to be more correct, as it doesn't convert all Throwables to IOException and only 
eats an exception on close if a previous exception was thrown, which is the same 
as your example.

{code}
OutputStream stream = null;
try {
  ...
  stream.close();
  stream = null;
} finally {
  IOUtils.cleanup(stream);
}
{code}
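As a side note, Java 7's try-with-resources (already available when this thread was written) sidesteps the idiom debate entirely: the stream is closed automatically, the primary exception propagates, and any exception thrown by close() is attached as a suppressed exception rather than lost. A sketch of the construct, not part of either proposed patch:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class TryWithResources {
    static void write(OutputStream out) throws IOException {
        out.write(42);
    }

    public static void main(String[] args) throws IOException {
        // try-with-resources closes the stream when the block exits; if both
        // the body and close() throw, close()'s exception is recorded via
        // Throwable#addSuppressed instead of replacing the primary one.
        try (OutputStream stream = new ByteArrayOutputStream()) {
            write(stream);
        }
        System.out.println("ok");   // prints ok
    }
}
```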


 HS2 holding too many file handles of hive_job_log_hive_*.txt files
 --

 Key: HIVE-4500
 URL: https://issues.apache.org/jira/browse/HIVE-4500
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.11.0

 Attachments: HIVE-4500-2.patch, HIVE-4500-3.patch, HIVE-4500.patch


 In the hiveserver2 setup used for testing, we see that it has 2444 files open 
 and of them 2152 are /tmp/hive/hive_job_log_hive_*.txt files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4494) ORC map columns get class cast exception in some context

2013-05-08 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652214#comment-13652214
 ] 

Kevin Wilfong commented on HIVE-4494:
-

+1

Go ahead and commit if tests pass.

 ORC map columns get class cast exception in some context
 

 Key: HIVE-4494
 URL: https://issues.apache.org/jira/browse/HIVE-4494
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: HIVE-4494.D10653.1.patch, HIVE-4494.D10653.2.patch


 Setting up the test case like:
 {quote}
 create table map_text (
   name string,
   m map<string,string>
 ) row format delimited
 fields terminated by '|'
 collection items terminated by ','
 map keys terminated by ':';
 create table map_orc (
   name string,
   m map<string,string>
 ) stored as orc;
 cat map.txt
 name1|key11:value11,key12:value12,key13:value13
 name2|key21:value21,key22:value22,key23:value23
 name3|key31:value31,key32:value32,key33:value33
 load data local inpath 'map.txt' into table map_text;
 insert overwrite table map_orc select * from map_text;
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4233) The TGT gotten from class 'CLIService' should be renewed on time

2013-05-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652238#comment-13652238
 ] 

Jitendra Nath Pandey commented on HIVE-4233:


1. You could use the UserGroupInformation#checkTGTAndReloginFromKeytab method, 
which checks the TGT refresh time before relogin, instead of explicitly checking 
the TGT in your code.

2. Instead of adding a new relogin thread, you could also consider the following 
two options, which are relatively simpler:
  a) call checkTGTAndReloginFromKeytab before every connection, or
  b) catch the connection failure and call reloginFromKeytab, if you are able to 
catch this particular failure.
  Hadoop RPC uses (b).

3. Apache guidelines insist on not putting usernames in the code, as in the 
javadoc for HiveKerberosReloginHelper.
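Option (b) above can be sketched as a small retry wrapper. The Relogin hook and the callWithRelogin name below are hypothetical stand-ins; the real implementation would call UserGroupInformation.reloginFromKeytab (through the Hadoop shim layer) instead of the no-op shown here:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class ReloginOnFailure {
    // Hypothetical stand-in for UserGroupInformation.reloginFromKeytab().
    interface Relogin { void relogin() throws IOException; }

    // Option (b): try the connection once, and on failure re-login from the
    // keytab and retry a single time.
    static <T> T callWithRelogin(Callable<T> connect, Relogin relogin)
            throws Exception {
        try {
            return connect.call();
        } catch (IOException e) {
            relogin.relogin();   // e.g. ticket expired: refresh the TGT
            return connect.call();
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] attempts = {0};
        String result = callWithRelogin(() -> {
            if (attempts[0]++ == 0) {
                // Simulated first-attempt auth failure
                throw new IOException("GSS initiate failed");
            }
            return "connected";
        }, () -> { /* would call reloginFromKeytab() here */ });
        System.out.println(result);   // prints connected
    }
}
```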
 

 The TGT gotten from class 'CLIService'  should be renewed on time
 -

 Key: HIVE-4233
 URL: https://issues.apache.org/jira/browse/HIVE-4233
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
 Environment: CentOS release 6.3 (Final)
 jdk1.6.0_31
 HiveServer2  0.10.0-cdh4.2.0
 Kerberos Security 
Reporter: Dongyong Wang
Priority: Critical
 Attachments: 0001-FIX-HIVE-4233.patch


 When HiveServer2 has been running for more than 7 days, every operation fails 
 when I use the beeline shell to connect to it.
 The HiveServer2 log shows the failures were caused by Kerberos auth failure; 
 the exception stack trace is:
 2013-03-26 11:55:20,932 ERROR hive.ql.metadata.Hive: 
 java.lang.RuntimeException: Unable to instantiate 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1084)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:51)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
 at 
 org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2140)
 at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2151)
 at 
 org.apache.hadoop.hive.ql.metadata.Hive.getDelegationToken(Hive.java:2275)
 at 
 org.apache.hive.service.cli.CLIService.getDelegationTokenFromMetaStore(CLIService.java:358)
 at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:127)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1073)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1058)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge20S.java:565)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown 
 Source)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1082)
 ... 16 more
 Caused by: java.lang.IllegalStateException: This ticket is no longer valid
 at 
 javax.security.auth.kerberos.KerberosTicket.toString(KerberosTicket.java:601)
 at java.lang.String.valueOf(String.java:2826)
 at java.lang.StringBuilder.append(StringBuilder.java:115)
 at 
 sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:120)
 at sun.security.jgss.krb5.SubjectComber.find(SubjectComber.java:41)
 at sun.security.jgss.krb5.Krb5Util.getTicket(Krb5Util.java:130)
 at 
 sun.security.jgss.krb5.Krb5InitCredential$1.run(Krb5InitCredential.java:328)
 at java.security.AccessController.doPrivileged(Native Method)
 at 
 sun.security.jgss.krb5.Krb5InitCredential.getTgt(Krb5InitCredential.java:325)
 at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:128)
 at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:106)
 at 
 

[jira] [Commented] (HIVE-4233) The TGT gotten from class 'CLIService' should be renewed on time

2013-05-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652259#comment-13652259
 ] 

Thejas M Nair commented on HIVE-4233:
-

UserGroupInformation.reloginFromKeytab is not a method available in Hadoop 
0.20.x, so this will break the Hive build against it. That call should go 
through the shim layer for Hadoop (similar to the way the shim is used for 
ShimLoader.getHadoopShims().loginUserFromKeytab).




 The TGT gotten from class 'CLIService'  should be renewed on time
 -

 Key: HIVE-4233
 URL: https://issues.apache.org/jira/browse/HIVE-4233
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
 Environment: CentOS release 6.3 (Final)
 jdk1.6.0_31
 HiveServer2  0.10.0-cdh4.2.0
 Kerberos Security 
Reporter: Dongyong Wang
Priority: Critical
 Attachments: 0001-FIX-HIVE-4233.patch


 When HiveServer2 has been running for more than 7 days, every operation fails 
 when I use the beeline shell to connect to it.
 The HiveServer2 log shows the failures were caused by Kerberos auth failure; 
 the exception stack trace is:
 2013-03-26 11:55:20,932 ERROR hive.ql.metadata.Hive: 
 java.lang.RuntimeException: Unable to instantiate 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1084)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:51)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
 at 
 org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2140)
 at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2151)
 at 
 org.apache.hadoop.hive.ql.metadata.Hive.getDelegationToken(Hive.java:2275)
 at 
 org.apache.hive.service.cli.CLIService.getDelegationTokenFromMetaStore(CLIService.java:358)
 at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:127)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1073)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1058)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge20S.java:565)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown 
 Source)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1082)
 ... 16 more
 Caused by: java.lang.IllegalStateException: This ticket is no longer valid
 at 
 javax.security.auth.kerberos.KerberosTicket.toString(KerberosTicket.java:601)
 at java.lang.String.valueOf(String.java:2826)
 at java.lang.StringBuilder.append(StringBuilder.java:115)
 at 
 sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:120)
 at sun.security.jgss.krb5.SubjectComber.find(SubjectComber.java:41)
 at sun.security.jgss.krb5.Krb5Util.getTicket(Krb5Util.java:130)
 at 
 sun.security.jgss.krb5.Krb5InitCredential$1.run(Krb5InitCredential.java:328)
 at java.security.AccessController.doPrivileged(Native Method)
 at 
 sun.security.jgss.krb5.Krb5InitCredential.getTgt(Krb5InitCredential.java:325)
 at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:128)
 at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:106)
 at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:172)
 at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:209)
 at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:195)
 at 
 sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:162)
 at 
 

[jira] [Commented] (HIVE-4486) FetchOperator slows down SMB map joins by 50% when there are many partitions

2013-05-08 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652262#comment-13652262
 ] 

Gopal V commented on HIVE-4486:
---

Root caused to the following code in HiveConf

{code}
private void initialize(Class<?> cls) {
hiveJar = (new JobConf(cls)).getJar();

// preserve the original configuration
origProp = getAllProperties();

// Overlay the ConfVars. Note that this ignores ConfVars with null values
addResource(getConfVarInputStream());
{code}

addResource() calls reloadConfiguration() eventually, causing each new 
HiveConf(job) to parse all the conf xml files again.

 FetchOperator slows down SMB map joins by 50% when there are many partitions
 

 Key: HIVE-4486
 URL: https://issues.apache.org/jira/browse/HIVE-4486
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
 Environment: Ubuntu LXC 12.10
Reporter: Gopal V
Priority: Minor
 Attachments: smb-profile.html


 While looking at log files for SMB joins in hive, it was noticed that the 
 actual join op didn't show up as a significant fraction of the time spent. 
 Most of the time was spent parsing configuration files.
 To confirm, I put log lines in the HiveConf constructor and eventually made 
 the following edit to the code
 {code}
 --- ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
 +++ ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
 @@ -648,8 +648,7 @@ public ObjectInspector getOutputObjectInspector() throws 
 HiveException {
 * @return list of file status entries
 */
private FileStatus[] listStatusUnderPath(FileSystem fs, Path p) throws 
 IOException {
 -HiveConf hiveConf = new HiveConf(job, FetchOperator.class);
 -boolean recursive = 
 hiveConf.getBoolVar(HiveConf.ConfVars.HADOOPMAPREDINPUTDIRRECURSIVE);
 +boolean recursive = false;
  if (!recursive) {
return fs.listStatus(p);
  }
 {code}
 And re-ran my query to compare timings.
 || ||Before||After||
 |Cumulative CPU| 731.07 sec|386.0 sec|
 |Total time | 347.66 seconds | 218.855 seconds |
 The query used was 
 {code}INSERT OVERWRITE LOCAL DIRECTORY
 '/grid/0/smb/'
 select inv_item_sk
 from
  inventory inv
  join store_sales ss on (ss.ss_item_sk = inv.inv_item_sk)
 limit 10
 ;
 {code}
 On a scale=2 TPC-DS data-set, where both store_sales & inventory are bucketed 
 into 4 buckets, with store_sales split into 7 partitions and inventory into 
 261 partitions.
 78% of all CPU time was spent within new HiveConf(). The YourKit profiler 
 runs are attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4521) Auto join conversion fails in certain cases (empty tables, empty partitions, no partitions)

2013-05-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-4521:
-

Attachment: HIVE-4521.2.patch

 Auto join conversion fails in certain cases (empty tables, empty partitions, 
 no partitions)
 ---

 Key: HIVE-4521
 URL: https://issues.apache.org/jira/browse/HIVE-4521
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-4521.1.patch, HIVE-4521.2.patch


 Automatic join conversion for both map joins as well as SMB joins fails when 
 tables, files or partitions are empty (see test cases in patch). Error 
 messages include: "Big Table Alias is null" and "Divide by Zero".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4521) Auto join conversion fails in certain cases (empty tables, empty partitions, no partitions)

2013-05-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652276#comment-13652276
 ] 

Gunther Hagleitner commented on HIVE-4521:
--

Added another case in patch .2: Empty bucketed tables throw "Metadata 
Incorrect" when they are joined. It'll produce 0 results but should not fail.

 Auto join conversion fails in certain cases (empty tables, empty partitions, 
 no partitions)
 ---

 Key: HIVE-4521
 URL: https://issues.apache.org/jira/browse/HIVE-4521
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-4521.1.patch, HIVE-4521.2.patch


 Automatic join conversion for both map joins as well as SMB joins fails when 
 tables, files or partitions are empty (see test cases in patch). Error 
 messages include: "Big Table Alias is null" and "Divide by Zero".

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4524) Make the Hive HBaseStorageHandler work under HCat

2013-05-08 Thread Sushanth Sowmyan (JIRA)
Sushanth Sowmyan created HIVE-4524:
--

 Summary: Make the Hive HBaseStorageHandler work under HCat
 Key: HIVE-4524
 URL: https://issues.apache.org/jira/browse/HIVE-4524
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler, HCatalog
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan


Currently, HCatalog has its own HCatHBaseStorageHandler that extends from 
HBaseStorageHandler to allow for StorageHandler support, and does some 
translations, like org.apache.mapred -> org.apache.mapreduce wrapping, etc. 
However, this compatibility layer is not complete in functionality as it still 
assumes the underlying OutputFormat is a mapred.OutputFormat implementation as 
opposed to a HiveOutputFormat implementation, and it makes assumptions about 
config property copies that implementations of the HiveStorageHandler, such as 
the HBaseStorageHandler, do not make.

To fix this, we need to improve the ability for HCat to properly load 
native-hive-style StorageHandlers.

Also, since HCat has its own HBaseStorageHandler and we'd like to not maintain 
two separate HBaseStorageHandlers, the idea is to deprecate HCat's storage 
handler over time, and make sure that hive's HBaseStorageHandler works properly 
from HCat, and over time, have it reach feature parity with the HCat one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4524) Make the Hive HBaseStorageHandler work under HCat

2013-05-08 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4524:
---

Attachment: hbh4.patch

Attaching patch that introduces WrapperStorageHandler in HCat which wraps 
native HiveStorageHandlers and makes them work from within HCat.

Also, it includes one non-hcat fix to the HiveHBaseTableOutputFormat, where its 
super.getConf() would throw an exception if called before checkOutputSpecs() 
was called - I had to refactor out common code to ensure that didn't happen.

 Make the Hive HBaseStorageHandler work under HCat
 -

 Key: HIVE-4524
 URL: https://issues.apache.org/jira/browse/HIVE-4524
 Project: Hive
  Issue Type: Bug
  Components: HBase Handler, HCatalog
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: hbh4.patch


 Currently, HCatalog has its own HCatHBaseStorageHandler that extends from 
 HBaseStorageHandler to allow for StorageHandler support, and does some 
 translations, like org.apache.mapred -> org.apache.mapreduce wrapping, etc. 
 However, this compatibility layer is not complete in functionality as it 
 still assumes the underlying OutputFormat is a mapred.OutputFormat 
 implementation as opposed to a HiveOutputFormat implementation, and it makes 
 assumptions about config property copies that implementations of the 
 HiveStorageHandler, such as the HBaseStorageHandler, do not make.
 To fix this, we need to improve the ability for HCat to properly load 
 native-hive-style StorageHandlers.
 Also, since HCat has its own HBaseStorageHandler and we'd like to not 
 maintain two separate HBaseStorageHandlers, the idea is to deprecate HCat's 
 storage handler over time, and make sure that hive's HBaseStorageHandler 
 works properly from HCat, and over time, have it reach feature parity with 
 the HCat one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4233) The TGT gotten from class 'CLIService' should be renewed on time

2013-05-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13652324#comment-13652324
 ] 

Thejas M Nair commented on HIVE-4233:
-

Hadoop APIs already do a relogin if the ticket has expired. If we add similar 
logic to the metastore client, we can fix this without having an additional 
thread (and fix it for other potential metastore users as well). Since the 
kerberos expiry is usually in hours or days, a simple policy of doing a relogin 
if the previous login was more than several minutes ago should fix this issue.
[~d0ngw] Will you be creating a new patch for this? If not, I should be able to 
upload a new one soon.
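The time-gated relogin described above can be sketched as follows; the threshold and the reloginFromKeytab stand-in are illustrative, not taken from any patch (the real call would go through the Hadoop shim layer):

```java
public class TimeGatedRelogin {
    // Minimum gap between relogin attempts; Kerberos tickets usually live
    // for hours or days, so a few minutes is a safe lower bound.
    static final long MIN_GAP_MILLIS = 5 * 60 * 1000;
    // Initialized so the very first call always triggers a login.
    static long lastLoginMillis = -MIN_GAP_MILLIS;

    // Stand-in for UserGroupInformation.reloginFromKeytab via the shim layer.
    static int relogins = 0;
    static void reloginFromKeytab() { relogins++; }

    // Called before each metastore connection: relogin only if the previous
    // login happened at least MIN_GAP_MILLIS ago.
    static void reloginIfNeeded(long nowMillis) {
        if (nowMillis - lastLoginMillis >= MIN_GAP_MILLIS) {
            reloginFromKeytab();
            lastLoginMillis = nowMillis;
        }
    }

    public static void main(String[] args) {
        reloginIfNeeded(0);               // first call: relogin
        reloginIfNeeded(60 * 1000);       // 1 minute later: skipped
        reloginIfNeeded(6 * 60 * 1000);   // 6 minutes later: relogin again
        System.out.println(relogins);     // prints 2
    }
}
```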


 The TGT gotten from class 'CLIService'  should be renewed on time
 -

 Key: HIVE-4233
 URL: https://issues.apache.org/jira/browse/HIVE-4233
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
 Environment: CentOS release 6.3 (Final)
 jdk1.6.0_31
 HiveServer2  0.10.0-cdh4.2.0
 Kerberos Security 
Reporter: Dongyong Wang
Priority: Critical
 Attachments: 0001-FIX-HIVE-4233.patch


 When HiveServer2 has been running for more than 7 days, every operation fails 
 when I use the beeline shell to connect to it.
 The HiveServer2 log shows the failures were caused by Kerberos auth failure; 
 the exception stack trace is:
 2013-03-26 11:55:20,932 ERROR hive.ql.metadata.Hive: 
 java.lang.RuntimeException: Unable to instantiate 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1084)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.init(RetryingMetaStoreClient.java:51)
 at 
 org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:61)
 at 
 org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2140)
 at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2151)
 at 
 org.apache.hadoop.hive.ql.metadata.Hive.getDelegationToken(Hive.java:2275)
 at 
 org.apache.hive.service.cli.CLIService.getDelegationTokenFromMetaStore(CLIService.java:358)
 at 
 org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:127)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1073)
 at 
 org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1058)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
 at 
 org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge20S.java:565)
 at 
 org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.GeneratedConstructorAccessor52.newInstance(Unknown 
 Source)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1082)
 ... 16 more
 Caused by: java.lang.IllegalStateException: This ticket is no longer valid
 at 
 javax.security.auth.kerberos.KerberosTicket.toString(KerberosTicket.java:601)
 at java.lang.String.valueOf(String.java:2826)
 at java.lang.StringBuilder.append(StringBuilder.java:115)
 at 
 sun.security.jgss.krb5.SubjectComber.findAux(SubjectComber.java:120)
 at sun.security.jgss.krb5.SubjectComber.find(SubjectComber.java:41)
 at sun.security.jgss.krb5.Krb5Util.getTicket(Krb5Util.java:130)
 at 
 sun.security.jgss.krb5.Krb5InitCredential$1.run(Krb5InitCredential.java:328)
 at java.security.AccessController.doPrivileged(Native Method)
 at 
 sun.security.jgss.krb5.Krb5InitCredential.getTgt(Krb5InitCredential.java:325)
 at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:128)
 at 
 sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:106)
 at 
 sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:172)
 at 
 sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:209)

[jira] [Commented] (HIVE-4233) The TGT gotten from class 'CLIService' should be renewed on time

2013-05-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652345#comment-13652345
 ] 

Jitendra Nath Pandey commented on HIVE-4233:


bq.  Since the kerberos expiry is usually in hours or days, having a simple 
logic of doing relogin if the previous login was several minutes back should 
fix this issue.

That sounds right; UGI.reloginFromKeytab already does this: it checks whether 
sufficient time has elapsed since the last login. The 'sufficient time' is 
configurable. The method also checks the TGT refresh time, so simply calling 
reloginFromKeytab before every connection should be a reasonable fix.

 

[jira] [Updated] (HIVE-4494) ORC map columns get class cast exception in some context

2013-05-08 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-4494:


   Resolution: Fixed
Fix Version/s: 0.11.0
   Status: Resolved  (was: Patch Available)

I just committed this. Thanks for the review, Kevin & Pamela!

 ORC map columns get class cast exception in some context
 

 Key: HIVE-4494
 URL: https://issues.apache.org/jira/browse/HIVE-4494
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.11.0

 Attachments: HIVE-4494.D10653.1.patch, HIVE-4494.D10653.2.patch


 Setting up the test case like:
 {quote}
 create table map_text (
   name string,
   m map<string,string>
 ) row format delimited
 fields terminated by '|'
 collection items terminated by ','
 map keys terminated by ':';
 create table map_orc (
   name string,
   m map<string,string>
 ) stored as orc;
 cat map.txt
 name1|key11:value11,key12:value12,key13:value13
 name2|key21:value21,key22:value22,key23:value23
 name3|key31:value31,key32:value32,key33:value33
 load data local inpath 'map.txt' into table map_text;
 insert overwrite table map_orc select * from map_text;
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-3726) History file closed in finalize method

2013-05-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-3726.
--

Resolution: Invalid

 History file closed in finalize method
 --

 Key: HIVE-3726
 URL: https://issues.apache.org/jira/browse/HIVE-3726
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0, 0.10.0
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-3726.2-r1411423.patch, HIVE-3736.1-r1411423.patch


 TestCliNegative fails intermittently because it's up to the garbage collector 
 to close History files. This is only a problem if you deal with a lot of 
 SessionState objects.



support for timestamps before 1970 or after 2038

2013-05-08 Thread Mikhail Bautin
Hello,

Are there plans to support timestamps that cannot be represented by a
signed 32-bit integer number of seconds since the UNIX epoch? (i.e. those
before 1970 or after a certain point in 2038). Currently Hive's behavior
regarding these timestamps is inconsistent, because it is possible to
insert them into a table, but Hive does not handle them properly. Trying to
serialize and deserialize the 1969-12-31 23:59:59 timestamp using
TimestampWritable results in a 2038-01-19 03:14:07 timestamp.
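One plausible mechanism for the observed wrap-around (an assumption about the cause, not a reading of TimestampWritable's actual code) is loss of the sign bit of the 32-bit seconds field:

```java
import java.time.Instant;

// Illustration only: dropping the sign bit of a 32-bit
// seconds-since-epoch value maps -1 (1969-12-31 23:59:59 UTC)
// onto Integer.MAX_VALUE (2038-01-19 03:14:07 UTC).
class EpochWrap {
    // Drop the sign bit of a 32-bit seconds-since-epoch value.
    static int maskSignBit(int seconds) {
        return seconds & 0x7FFFFFFF;
    }

    public static void main(String[] args) {
        int seconds = -1;                   // 1969-12-31 23:59:59 UTC
        int decoded = maskSignBit(seconds); // 2147483647
        System.out.println(Instant.ofEpochSecond(decoded)); // 2038-01-19T03:14:07Z
    }
}
```

Masking the sign bit maps every pre-1970 second count onto the far end of the 32-bit range, which matches the 1969-12-31 23:59:59 to 2038-01-19 03:14:07 behavior described above.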

Thanks,
Mikhail


[jira] [Commented] (HIVE-4466) Fix continue.on.failure in unit tests to -well- continue on failure in unit tests

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652450#comment-13652450
 ] 

Hudson commented on HIVE-4466:
--

Integrated in Hive-trunk-h0.21 #2092 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2092/])
HIVE-4466 : Fix continue.on.failure in unit tests to -well- continue on 
failure in unit tests (Gunther Hagleitner via Ashutosh Chauhan) (Revision 
1480164)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480164
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/build.xml


 Fix continue.on.failure in unit tests to -well- continue on failure in unit 
 tests
 -

 Key: HIVE-4466
 URL: https://issues.apache.org/jira/browse/HIVE-4466
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.12.0

 Attachments: HIVE-4466.1.patch


 continue.on.failure is no longer hooked up to anything in the build scripts. 
 more importantly, the only choice right now is to continue through a module 
 and then fail.



[jira] [Commented] (HIVE-4471) Build fails with hcatalog checkstyle error

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652451#comment-13652451
 ] 

Hudson commented on HIVE-4471:
--

Integrated in Hive-trunk-h0.21 #2092 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2092/])
HIVE-4471 : Build fails with hcatalog checkstyle error (Gunther Hagleitner 
via Ashutosh Chauhan) (Revision 1480162)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480162
Files : 
* /hive/trunk/hcatalog/build-support/ant/checkstyle.xml
* /hive/trunk/hcatalog/src/test/.gitignore


 Build fails with hcatalog checkstyle error
 --

 Key: HIVE-4471
 URL: https://issues.apache.org/jira/browse/HIVE-4471
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.12.0

 Attachments: HIVE-4471.1.patch, HIVE-4471.2.patch


 This is the output:
 checkstyle:
  [echo] hcatalog
 [checkstyle] Running Checkstyle 5.5 on 412 files
 [checkstyle] 
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/hcatalog/src/test/.gitignore:1:
  Missing a header - not enough lines in file.
 BUILD FAILED
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/build.xml:296: 
 The following error occurred while executing this line:
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/build.xml:298: 
 The following error occurred while executing this line:
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/hcatalog/build.xml:109:
  The following error occurred while executing this line:
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/hcatalog/build-support/ant/checkstyle.xml:32:
  Got 1 errors and 0 warnings.



Hive-trunk-h0.21 - Build # 2092 - Still Failing

2013-05-08 Thread Apache Jenkins Server
Changes for Build #2069
[hashutosh] HIVE-4280 : TestRetryingHMSHandler is failing on trunk. (Teddy Choi 
via Ashutosh Chauhan)


Changes for Build #2070
[hashutosh] HIVE-4278 : HCat needs to get current Hive jars instead of pulling 
them from maven repo (Sushanth Sowmyan via Ashutosh Chauhan)


Changes for Build #2071
[khorgath] HCATALOG-621 bin/hcat should include hbase jar and dependencies in 
the classpath (Nick Dimiduk via Sushanth Sowmyan)

[omalley] HIVE-4178 : ORC fails with files with different numbers of columns


Changes for Build #2072
[hashutosh] HIVE-4304 : Remove unused builtins and pdk submodules (Travis 
Crawford via Ashutosh Chauhan)

[namit] HIVE-4310 optimize count(distinct) with hive.map.groupby.sorted
(Namit Jain via Gang Tim Liu)

[hashutosh] HIVE-4356 :  remove duplicate impersonation parameters for 
hiveserver2 (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4318 : OperatorHooks hit performance even when not used 
(Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4378 : Counters hit performance even when not used (Gunther 
Hagleitner via Ashutosh Chauhan)

[omalley] HIVE-4189 : ORC fails with String column that ends in lots of nulls 
(Kevin
Wilfong)


Changes for Build #2073
[hashutosh] HIVE-4248 : Implement a memory manager for ORC. Missed one test 
file. (Owen O'Malley via Ashutosh Chauhan)

[hashutosh] HIVE-4248 : Implement a memory manager for ORC (Owen O'Malley via 
Ashutosh Chauhan)

[hashutosh] HIVE-4103 : Remove System.gc() call from the map-join local-task 
loop (Gopal V via Ashutosh Chauhan)


Changes for Build #2074
[namit] HIVE-4371 some issue with merging join trees
(Navis via namit)

[hashutosh] HIVE-4333 : most windowing tests fail on hadoop 2 (Harish Butani 
via Ashutosh Chauhan)

[namit] HIVE-4342 NPE for query involving UNION ALL with nested JOIN and UNION 
ALL
(Navis via namit)

[hashutosh] HIVE-4364 : beeline always exits with 0 status, should exit with 
non-zero status on error (Rob Weltman via Ashutosh Chauhan)

[hashutosh] HIVE-4130 : Bring the Lead/Lag UDFs interface in line with Lead/Lag 
UDAFs (Harish Butani via Ashutosh Chauhan)


Changes for Build #2075
[hashutosh] HIVE-2379 : Hive/HBase integration could be improved (Navis via 
Ashutosh Chauhan)

[hashutosh] HIVE-4295 : Lateral view makes invalid result if CP is disabled 
(Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4365 : wrong result in left semi join (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3861 : Upgrade hbase dependency to 0.94 (Gunther Hagleitner 
via Ashutosh Chauhan)


Changes for Build #2076
[hashutosh] HIVE-3891 : physical optimizer changes for auto sort-merge join 
(Namit Jain via Ashutosh Chauhan)

[namit] HIVE-4393 Make the deleteData flag accessable from DropTable/Partition 
events
(Morgan Philips via namit)

[hashutosh] HIVE-4394 : test leadlag.q fails (Ashutosh Chauhan)

[namit] HIVE-4018 MapJoin failing with Distributed Cache error
(Amareshwari Sriramadasu via Namit Jain)


Changes for Build #2077
[namit] HIVE-4300 ant thriftif generated code that is checkedin is not 
up-to-date
(Roshan Naik via namit)


Changes for Build #2078
[namit] HIVE-4409 Prevent incompatible column type changes
(Dilip Joseph via namit)

[namit] HIVE-4095 Add exchange partition in Hive
(Dheeraj Kumar Singh via namit)

[namit] HIVE-4005 Column truncation
(Kevin Wilfong via namit)

[namit] HIVE-3952 merge map-job followed by map-reduce job
(Vinod Kumar Vavilapalli via namit)

[hashutosh] HIVE-4412 : PTFDesc tries serialize transient fields like OIs, etc. 
(Navis via Ashutosh Chauhan)

[khorgath] HIVE-4419 : webhcat - support ${WEBHCAT_PREFIX}/conf/ as config 
directory (Thejas M Nair via Sushanth Sowmyan)

[namit] HIVE-4181 Star argument without table alias for UDTF is not working
(Navis via namit)

[hashutosh] HIVE-4407 : TestHCatStorer.testStoreFuncAllSimpleTypes fails 
because of null case difference (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4369 : Many new failures on hadoop 2 (Vikram Dixit via 
Ashutosh Chauhan)


Changes for Build #2079
[namit] HIVE-4424 MetaStoreUtils.java.orig checked in mistakenly by HIVE-4409
(Namit Jain)

[hashutosh] HIVE-4358 : Check for Map side processing in PTFOp is no longer 
valid (Harish Butani via Ashutosh Chauhan)


Changes for Build #2080
[navis] HIVE-4068 Size of aggregation buffer which uses non-primitive type is 
not estimated correctly (Navis)

[khorgath] HIVE-4420 : HCatalog unit tests stop after a failure (Alan Gates via 
Sushanth Sowmyan)

[hashutosh] HIVE-3708 : Add mapreduce workflow information to job configuration 
(Billie Rinaldi via Ashutosh Chauhan)


Changes for Build #2081

Changes for Build #2082
[hashutosh] HIVE-4423 : Improve RCFile::sync(long) 10x (Gopal V via Ashutosh 
Chauhan)

[hashutosh] HIVE-4398 : HS2 Resource leak: operation handles not cleaned when 
originating session is closed (Ashish Vaidya via Ashutosh Chauhan)

[hashutosh] HIVE-4019 : Ability to create and drop temporary partition function 
(Brock Noland via 

[jira] [Commented] (HIVE-4508) Fix various release issues in 0.11.0rc1

2013-05-08 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652478#comment-13652478
 ] 

Owen O'Malley commented on HIVE-4508:
-

{quote}
Not true. You need an explicit +1 from another committer, and we do usually 
file JIRAs for stuff like this.
{quote}

The HowToRelease wiki for Hive 
(https://cwiki.apache.org/Hive/howtorelease.html) says about editing the 
release notes: It's OK to do this with a direct commit rather than a patch.

{quote}
Also, most of the changes in this patch need to get committed to trunk as well.
{quote}

You're right about that.

 Fix various release issues in 0.11.0rc1
 ---

 Key: HIVE-4508
 URL: https://issues.apache.org/jira/browse/HIVE-4508
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.11.0

 Attachments: h-4508.patch, h-4508.patch


 Carl described some non-code issues in the 0.11.0rc1 and I want to fix them.



[jira] [Commented] (HIVE-4508) Fix various release issues in 0.11.0rc1

2013-05-08 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652484#comment-13652484
 ] 

Carl Steinbach commented on HIVE-4508:
--

bq. The HowToRelease wiki for Hive 
(https://cwiki.apache.org/Hive/howtorelease.html) says about editing the 
release notes: It's OK to do this with a direct commit rather than a patch.

Owen, which files did you edit in this patch?



[jira] [Updated] (HIVE-3176) implement returning values for SQLException getSQLState()

2013-05-08 Thread Xiu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiu updated HIVE-3176:
--

Attachment: (was: HIVE-3176.patch.txt)

 implement returning values for SQLException getSQLState()
 -

 Key: HIVE-3176
 URL: https://issues.apache.org/jira/browse/HIVE-3176
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.8.1
Reporter: N Campbell
  Labels: newbie, patch

 A dynamic SQL application should be able to check the values returned by 
 getSQLState() on a SQLException object. Currently the Hive driver does not 
 support this (it throws exceptions, etc.).
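
What that check could look like on the client side can be sketched as follows; the helper and its SQLSTATE classification are hypothetical illustrations, not Hive code:

```java
import java.sql.SQLException;

// Hypothetical illustration of a client branching on SQLState values.
// SQLSTATE class "42" is the standard class for syntax errors and
// access rule violations.
class SqlStateCheck {
    static String describe(SQLException e) {
        String state = e.getSQLState();
        if (state == null) {
            return "driver did not set SQLState";
        }
        if (state.startsWith("42")) {
            return "syntax error or access rule violation";
        }
        return "SQLState " + state;
    }
}
```

A driver that always leaves getSQLState() null (or throws instead of returning an exception with a state set) makes only the first branch reachable, which is the situation this report describes.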



[jira] [Commented] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652530#comment-13652530
 ] 

Gunther Hagleitner commented on HIVE-4498:
--

Now that the build no longer fails earlier, this problem is what causes the 
build to fail. Since we've disabled the BeeLine tests, I guess we can disable 
this one too?

 TestBeeLineWithArgs.testPositiveScriptFile fails
 

 Key: HIVE-4498
 URL: https://issues.apache.org/jira/browse/HIVE-4498
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2, JDBC
Reporter: Thejas M Nair

 TestBeeLineWithArgs.testPositiveScriptFile fails -
 {code}
[junit] 0: jdbc:hive2://localhost:1  STARTED 
 testBreakOnErrorScriptFile
 [junit] Output: Connecting to jdbc:hive2://localhost:1
 [junit] Connected to: Hive (version 0.12.0-SNAPSHOT)
 [junit] Driver: Hive (version 0.12.0-SNAPSHOT)
 [junit] Transaction isolation: TRANSACTION_REPEATABLE_READ
 [junit] Beeline version 0.12.0-SNAPSHOT by Apache Hive
 [junit] ++
 [junit] | database_name  |
 [junit] ++
 [junit] ++
 [junit] No rows selected (0.899 seconds)
 [junit] Closing: org.apache.hive.jdbc.HiveConnection
 [junit]
 [junit]  FAILED testPositiveScriptFile (ERROR) (2s)
 {code}



[jira] [Commented] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652539#comment-13652539
 ] 

Gunther Hagleitner commented on HIVE-4498:
--

[~robw] do you want to take a look?



[jira] [Commented] (HIVE-4508) Fix various release issues in 0.11.0rc1

2013-05-08 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652549#comment-13652549
 ] 

Owen O'Malley commented on HIVE-4508:
-

{quote}
Owen, which files did you edit in this patch?
{quote}

The change is in the attached patch, obviously. If you want to generate it 
from git:

{code}
% git show --numstat 5364dd83cd6a0adb5b31e3275888e2cbd7608ed4
{code}

The result shows that:

{code}
commit 5364dd83cd6a0adb5b31e3275888e2cbd7608ed4
Author: Owen O'Malley omal...@apache.org
Date:   Tue May 7 15:09:18 2013 +

HIVE-4508 Update release notes for Hive 0.11.0.


git-svn-id: 
https://svn.apache.org/repos/asf/hive/branches/branch-0.11@1479935 
13f79535-47bb-03

1   1   NOTICE
4   8   README.txt
18  61  RELEASE_NOTES.txt
1   1   build.properties
1   1   docs/xdocs/index.xml
1   1   eclipse-templates/.classpath
{code}

which are precisely the set of changes that you asked for in the email thread.



[jira] [Commented] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652550#comment-13652550
 ] 

Carl Steinbach commented on HIVE-4498:
--

bq. Since we've disabled BeeLine tests I guess we can disable this one too?

No, I think it would be better to fix TestBeeLineWithArgs (which isn't actually 
broken), and re-enable a comprehensive subset of the other HiveServer2 tests, 
unless of course the plan is to let HiveServer2 rot until it's completely 
broken.

TestBeeLineWithArgs was committed in HIVE-4268, but it wasn't enabled until I 
committed HIVE-4497 the other day. HIVE-4356 was committed in between, and 
that's the actual source of the bug.

Gunther, can you please take a look at this? Thanks.



[jira] [Updated] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-4498:
-

Assignee: Gunther Hagleitner



[jira] [Updated] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-4498:
-

 Priority: Blocker  (was: Major)
Fix Version/s: 0.11.0

This is a blocker for 0.11 since HIVE-4356 is included on that branch.



[jira] [Resolved] (HIVE-4517) Fix build broken in HIVE-4497

2013-05-08 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach resolved HIVE-4517.
--

Resolution: Duplicate

 Fix build broken in HIVE-4497
 -

 Key: HIVE-4517
 URL: https://issues.apache.org/jira/browse/HIVE-4517
 Project: Hive
  Issue Type: Bug
Reporter: Carl Steinbach
Assignee: Carl Steinbach





[jira] [Commented] (HIVE-4495) Implement vectorized string substr

2013-05-08 Thread Teddy Choi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652583#comment-13652583
 ] 

Teddy Choi commented on HIVE-4495:
--

Hello [~ehans]. I thought that nobody was assigned to it. Please excuse my 
misunderstanding. I'm interested in working on vectorization, and any chance 
to work on it would be great for me. Thank you [~ehans].

 Implement vectorized string substr
 --

 Key: HIVE-4495
 URL: https://issues.apache.org/jira/browse/HIVE-4495
 Project: Hive
  Issue Type: Sub-task
Reporter: Timothy Chen
Assignee: Eric Hanson



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652594#comment-13652594
 ] 

Gunther Hagleitner commented on HIVE-4498:
--

I haven't actually touched HIVE-4356. That's Thejas' work. I believe Ashutosh 
mistakenly put my name in the commit log. [~thejas] do you have any ideas?

 TestBeeLineWithArgs.testPositiveScriptFile fails
 

 Key: HIVE-4498
 URL: https://issues.apache.org/jira/browse/HIVE-4498
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2, JDBC
Reporter: Thejas M Nair
Assignee: Gunther Hagleitner
Priority: Blocker
 Fix For: 0.11.0


 TestBeeLineWithArgs.testPositiveScriptFile fails -
 {code}
[junit] 0: jdbc:hive2://localhost:1  STARTED 
 testBreakOnErrorScriptFile
 [junit] Output: Connecting to jdbc:hive2://localhost:1
 [junit] Connected to: Hive (version 0.12.0-SNAPSHOT)
 [junit] Driver: Hive (version 0.12.0-SNAPSHOT)
 [junit] Transaction isolation: TRANSACTION_REPEATABLE_READ
 [junit] Beeline version 0.12.0-SNAPSHOT by Apache Hive
 [junit] ++
 [junit] | database_name  |
 [junit] ++
 [junit] ++
 [junit] No rows selected (0.899 seconds)
 [junit] Closing: org.apache.hive.jdbc.HiveConnection
 [junit]
 [junit]  FAILED testPositiveScriptFile (ERROR) (2s)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner reassigned HIVE-4498:


Assignee: Thejas M Nair  (was: Gunther Hagleitner)

 TestBeeLineWithArgs.testPositiveScriptFile fails
 

 Key: HIVE-4498
 URL: https://issues.apache.org/jira/browse/HIVE-4498
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2, JDBC
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.11.0


 TestBeeLineWithArgs.testPositiveScriptFile fails -
 {code}
[junit] 0: jdbc:hive2://localhost:1  STARTED 
 testBreakOnErrorScriptFile
 [junit] Output: Connecting to jdbc:hive2://localhost:1
 [junit] Connected to: Hive (version 0.12.0-SNAPSHOT)
 [junit] Driver: Hive (version 0.12.0-SNAPSHOT)
 [junit] Transaction isolation: TRANSACTION_REPEATABLE_READ
 [junit] Beeline version 0.12.0-SNAPSHOT by Apache Hive
 [junit] ++
 [junit] | database_name  |
 [junit] ++
 [junit] ++
 [junit] No rows selected (0.899 seconds)
 [junit] Closing: org.apache.hive.jdbc.HiveConnection
 [junit]
 [junit]  FAILED testPositiveScriptFile (ERROR) (2s)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4508) Fix various release issues in 0.11.0rc1

2013-05-08 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652599#comment-13652599
 ] 

Carl Steinbach commented on HIVE-4508:
--

Here's what the HowToRelease page says:

{quote}
# You probably also want to commit a patch (on both trunk and branch) which 
updates README.txt to bring it up to date (at a minimum, search+replacing 
references to the version number). Also check NOTICE to see if anything needs 
to be updated for recent library dependency changes or additions.
## Select all of the JIRA's for the current release that aren't FIXED and do 
bulk update to clear the 'Fixed Version' field.
## Likewise, use JIRA's Release Notes link to generate content for the 
RELEASE_NOTES.txt file. Be sure to select 'Text' format. (It's OK to do this 
with a direct commit rather than a patch.)
{quote}

The narrow interpretation of this is that you are only allowed to commit 
changes to RELEASE_NOTES.txt without a +1, and I believe that is the actual 
intent since those changes are purely mechanical. I can understand how someone 
could read this and conclude that the policy also extends to README.txt, 
NOTICE, and build.properties, but it's easy to verify that this isn't the case 
by looking at the comments in the commit log. That still leaves two files that 
were modified in the patch which aren't mentioned anywhere on the HowToRelease 
page.

Can you please post a review request for this patch? I noticed a couple other 
issues in the affected files that should be fixed before this patch is 
forward-ported to trunk. Thanks.

 Fix various release issues in 0.11.0rc1
 ---

 Key: HIVE-4508
 URL: https://issues.apache.org/jira/browse/HIVE-4508
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.11.0

 Attachments: h-4508.patch, h-4508.patch


 Carl described some non-code issues in the 0.11.0rc1 and I want to fix them.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652616#comment-13652616
 ] 

Thejas M Nair commented on HIVE-4498:
-

Sure, looking into it.


 TestBeeLineWithArgs.testPositiveScriptFile fails
 

 Key: HIVE-4498
 URL: https://issues.apache.org/jira/browse/HIVE-4498
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2, JDBC
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.11.0


 TestBeeLineWithArgs.testPositiveScriptFile fails -
 {code}
[junit] 0: jdbc:hive2://localhost:1  STARTED 
 testBreakOnErrorScriptFile
 [junit] Output: Connecting to jdbc:hive2://localhost:1
 [junit] Connected to: Hive (version 0.12.0-SNAPSHOT)
 [junit] Driver: Hive (version 0.12.0-SNAPSHOT)
 [junit] Transaction isolation: TRANSACTION_REPEATABLE_READ
 [junit] Beeline version 0.12.0-SNAPSHOT by Apache Hive
 [junit] ++
 [junit] | database_name  |
 [junit] ++
 [junit] ++
 [junit] No rows selected (0.899 seconds)
 [junit] Closing: org.apache.hive.jdbc.HiveConnection
 [junit]
 [junit]  FAILED testPositiveScriptFile (ERROR) (2s)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4525) Support timestamps earlier than 1970 and later than 2038

2013-05-08 Thread Mikhail Bautin (JIRA)
Mikhail Bautin created HIVE-4525:


 Summary: Support timestamps earlier than 1970 and later than 2038
 Key: HIVE-4525
 URL: https://issues.apache.org/jira/browse/HIVE-4525
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin


TimestampWritable currently serializes timestamps using the lower 31 bits of an 
int. This does not allow storing timestamps earlier than 1970 or later than a 
certain point in 2038.
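
A quick illustration of that limit, assuming plain Unix epoch seconds (class and method names here are hypothetical, for illustration only): 2^31 - 1 seconds after the epoch falls in January 2038.

```java
import java.time.Instant;

// Why 31 bits of seconds runs out in 2038: the largest representable value
// is 2^31 - 1 seconds after the Unix epoch. Illustrative sketch only.
public class TimestampRangeSketch {
    public static Instant maxOldFormatInstant() {
        return Instant.ofEpochSecond((1L << 31) - 1);  // 2038-01-19T03:14:07Z
    }

    public static Instant minOldFormatInstant() {
        return Instant.ofEpochSecond(0);  // 1970-01-01T00:00:00Z
    }
}
```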


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-05-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4501:


Assignee: Thejas M Nair

 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair

 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in unsecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and 
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration. 
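
As a sketch of that workaround (assumed semantics: a value of true disables the cache), the two properties could be set programmatically like this; a plain java.util.Properties object stands in for a real Hadoop Configuration:

```java
import java.util.Properties;

// The two Hadoop properties named in this report, set to the values that
// disable the FileSystem cache. A Properties object stands in for a real
// Hadoop Configuration; this is an illustrative sketch, not Hive code.
public class FsCacheWorkaroundSketch {
    public static Properties disableFsCache(Properties conf) {
        conf.setProperty("fs.hdfs.impl.disable.cache", "true");
        conf.setProperty("fs.file.impl.disable.cache", "true");
        return conf;
    }
}
```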

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #366

2013-05-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/366/

--
[...truncated 36595 lines...]
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/jenkins/hive_2013-05-08_18-36-06_730_6693095345281778471/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/366/artifact/hive/build/service/tmp/hive_job_log_jenkins_201305081836_2122444961.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] Copying file: 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 
'https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] Copying data from 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt
[junit] Loading data to table default.testhivedrivertable
[junit] POSTHOOK: query: load data local inpath 
'https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/ws/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: 
file:/tmp/jenkins/hive_2013-05-08_18-36-11_294_940548574889982764/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/tmp/jenkins/hive_2013-05-08_18-36-11_294_940548574889982764/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/366/artifact/hive/build/service/tmp/hive_job_log_jenkins_201305081836_1271284143.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/366/artifact/hive/build/service/tmp/hive_job_log_jenkins_201305081836_1021425625.txt
[junit] Hive history 
file=https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/366/artifact/hive/build/service/tmp/hive_job_log_jenkins_201305081836_726158442.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (key int, value 

[jira] [Updated] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4498:


Attachment: HIVE-4498.1.patch

HIVE-4498.1.patch - TUGIContainingProcessor calls shim.closeAllForUGI() to 
free up entries from FileSystem.CACHE. But that ends up resulting in an empty 
result being fetched in the next call.

The best way to fix the memory leak, in my opinion, is to just disable the 
FileSystem cache. See HIVE-4501.



 TestBeeLineWithArgs.testPositiveScriptFile fails
 

 Key: HIVE-4498
 URL: https://issues.apache.org/jira/browse/HIVE-4498
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2, JDBC
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4498.1.patch


 TestBeeLineWithArgs.testPositiveScriptFile fails -
 {code}
[junit] 0: jdbc:hive2://localhost:1  STARTED 
 testBreakOnErrorScriptFile
 [junit] Output: Connecting to jdbc:hive2://localhost:1
 [junit] Connected to: Hive (version 0.12.0-SNAPSHOT)
 [junit] Driver: Hive (version 0.12.0-SNAPSHOT)
 [junit] Transaction isolation: TRANSACTION_REPEATABLE_READ
 [junit] Beeline version 0.12.0-SNAPSHOT by Apache Hive
 [junit] ++
 [junit] | database_name  |
 [junit] ++
 [junit] ++
 [junit] No rows selected (0.899 seconds)
 [junit] Closing: org.apache.hive.jdbc.HiveConnection
 [junit]
 [junit]  FAILED testPositiveScriptFile (ERROR) (2s)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Review Request: HIVE-4498

2013-05-08 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11017/
---

Review request for hive and Carl Steinbach.


Description
---

HIVE-4498


Diffs
-

  service/src/java/org/apache/hive/service/auth/TUGIContainingProcessor.java 
12250ec 

Diff: https://reviews.apache.org/r/11017/diff/


Testing
---

Verified that TestBeeLineWithArgs works.


Thanks,

Thejas Nair



[jira] [Updated] (HIVE-4498) TestBeeLineWithArgs.testPositiveScriptFile fails

2013-05-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4498:


Status: Patch Available  (was: Open)

Review board link - https://reviews.apache.org/r/11017/ 
I think we should also get HIVE-4501 into Hive 0.11, i.e. disable the cache by 
default, so that there is no memory leak with the default configs.



 TestBeeLineWithArgs.testPositiveScriptFile fails
 

 Key: HIVE-4498
 URL: https://issues.apache.org/jira/browse/HIVE-4498
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2, JDBC
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4498.1.patch


 TestBeeLineWithArgs.testPositiveScriptFile fails -
 {code}
[junit] 0: jdbc:hive2://localhost:1  STARTED 
 testBreakOnErrorScriptFile
 [junit] Output: Connecting to jdbc:hive2://localhost:1
 [junit] Connected to: Hive (version 0.12.0-SNAPSHOT)
 [junit] Driver: Hive (version 0.12.0-SNAPSHOT)
 [junit] Transaction isolation: TRANSACTION_REPEATABLE_READ
 [junit] Beeline version 0.12.0-SNAPSHOT by Apache Hive
 [junit] ++
 [junit] | database_name  |
 [junit] ++
 [junit] ++
 [junit] No rows selected (0.899 seconds)
 [junit] Closing: org.apache.hive.jdbc.HiveConnection
 [junit]
 [junit]  FAILED testPositiveScriptFile (ERROR) (2s)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4194) JDBC2: HiveDriver should not throw RuntimeException when passed an invalid URL

2013-05-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4194:


Status: Patch Available  (was: Open)

Marking as patch available so that it gets the attention of other 
committers.


 JDBC2: HiveDriver should not throw RuntimeException when passed an invalid URL
 --

 Key: HIVE-4194
 URL: https://issues.apache.org/jira/browse/HIVE-4194
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.11.0
Reporter: Richard Ding
Assignee: Richard Ding
 Attachments: HIVE-4194.patch


 As per the JDBC 3.0 Spec (section 9.2): If the Driver implementation 
 understands the URL, it will return a Connection object; otherwise it 
 returns null.
 Currently the HiveConnection constructor throws IllegalArgumentException if the 
 URL string doesn't start with jdbc:hive2. This exception should be caught 
 by HiveDriver.connect, which should return null instead.
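
The contract described above can be sketched as follows. This is a simplified, hypothetical class for illustration; the real HiveDriver implements java.sql.Driver and builds a real Connection:

```java
// Illustrative sketch of the JDBC contract described above: a driver returns
// null for URLs it does not understand instead of throwing an exception.
public class HiveUrlSketch {
    static final String PREFIX = "jdbc:hive2://";

    public static boolean acceptsURL(String url) {
        return url != null && url.startsWith(PREFIX);
    }

    // Per the spec: unrecognized URL -> null, never a RuntimeException.
    public static Object connect(String url) {
        if (!acceptsURL(url)) {
            return null;
        }
        return new Object();  // stand-in for building a real Connection
    }
}
```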

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4392) Illogical InvalidObjectException throwed when use mulit aggregate functions with star columns

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652669#comment-13652669
 ] 

Hudson commented on HIVE-4392:
--

Integrated in Hive-trunk-hadoop2 #188 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/188/])
HIVE-4392 : Illogical InvalidObjectException throwed when use mulit 
aggregate functions with star columns  (Navis via Ashutosh Chauhan) (Revision 
1480161)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480161
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/PTFTranslator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/ctas_colname.q
* /hive/trunk/ql/src/test/results/clientpositive/ctas_colname.q.out


 Illogical InvalidObjectException throwed when use mulit aggregate functions 
 with star columns 
 --

 Key: HIVE-4392
 URL: https://issues.apache.org/jira/browse/HIVE-4392
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
 Environment: Apache Hadoop 0.20.1
 Apache Hive Trunk
Reporter: caofangkun
Assignee: Navis
Priority: Minor
 Fix For: 0.12.0

 Attachments: HIVE-4392.D10431.1.patch, HIVE-4392.D10431.2.patch, 
 HIVE-4392.D10431.3.patch, HIVE-4392.D10431.4.patch, HIVE-4392.D10431.5.patch


 For Example:
 hive (default)> create table liza_1 as 
select *, sum(key), sum(value) 
from new_src;
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks determined at compile time: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
 Starting Job = job_201304191025_0003, Tracking URL = 
 http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0003
 Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
 job_201304191025_0003
 Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 
 1
 2013-04-22 11:09:28,017 Stage-1 map = 0%,  reduce = 0%
 2013-04-22 11:09:34,054 Stage-1 map = 0%,  reduce = 100%
 2013-04-22 11:09:37,074 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201304191025_0003
 Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
 FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
 valid object name)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 MapReduce Jobs Launched: 
 Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 12 SUCCESS
 Total MapReduce CPU Time Spent: 0 msec
 hive (default)> create table liza_1 as 
select *, sum(key), sum(value) 
from new_src   
group by key, value;
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks not specified. Estimated from input data size: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
 Starting Job = job_201304191025_0004, Tracking URL = 
 http://hd17-vm5:51030/jobdetails.jsp?jobid=job_201304191025_0004
 Kill Command = /home/zongren/hadoop-current/bin/../bin/hadoop job  -kill 
 job_201304191025_0004
 Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 
 1
 2013-04-22 11:11:58,945 Stage-1 map = 0%,  reduce = 0%
 2013-04-22 11:12:01,964 Stage-1 map = 0%,  reduce = 100%
 2013-04-22 11:12:04,982 Stage-1 map = 100%,  reduce = 100%
 Ended Job = job_201304191025_0004
 Moving data to: hdfs://hd17-vm5:9101/user/zongren/hive/liza_1
 FAILED: Error in metadata: InvalidObjectException(message:liza_1 is not a 
 valid object name)
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.DDLTask
 MapReduce Jobs Launched: 
 Job 0: Reduce: 1   HDFS Read: 0 HDFS Write: 0 SUCCESS
 Total MapReduce CPU Time Spent: 0 msec
 But the following two queries work:
 hive (default)> create table liza_1 as select * from new_src;
 Total MapReduce jobs = 3
 Launching Job 1 out of 3
 Number of reduce tasks is set to 0 since there's no reduce operator
 

[jira] [Commented] (HIVE-4466) Fix continue.on.failure in unit tests to -well- continue on failure in unit tests

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652670#comment-13652670
 ] 

Hudson commented on HIVE-4466:
--

Integrated in Hive-trunk-hadoop2 #188 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/188/])
HIVE-4466 : Fix continue.on.failure in unit tests to -well- continue on 
failure in unit tests (Gunther Hagleitner via Ashutosh Chauhan) (Revision 
1480164)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480164
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/build.xml


 Fix continue.on.failure in unit tests to -well- continue on failure in unit 
 tests
 -

 Key: HIVE-4466
 URL: https://issues.apache.org/jira/browse/HIVE-4466
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.12.0

 Attachments: HIVE-4466.1.patch


 continue.on.failure is no longer hooked up to anything in the build scripts. 
 More importantly, the only choice right now is to continue through a module 
 and then fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4471) Build fails with hcatalog checkstyle error

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652672#comment-13652672
 ] 

Hudson commented on HIVE-4471:
--

Integrated in Hive-trunk-hadoop2 #188 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/188/])
HIVE-4471 : Build fails with hcatalog checkstyle error (Gunther Hagleitner 
via Ashutosh Chauhan) (Revision 1480162)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480162
Files : 
* /hive/trunk/hcatalog/build-support/ant/checkstyle.xml
* /hive/trunk/hcatalog/src/test/.gitignore


 Build fails with hcatalog checkstyle error
 --

 Key: HIVE-4471
 URL: https://issues.apache.org/jira/browse/HIVE-4471
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.12.0

 Attachments: HIVE-4471.1.patch, HIVE-4471.2.patch


 This is the output:
 checkstyle:
  [echo] hcatalog
 [checkstyle] Running Checkstyle 5.5 on 412 files
 [checkstyle] 
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/hcatalog/src/test/.gitignore:1:
  Missing a header - not enough lines in file.
 BUILD FAILED
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/build.xml:296: 
 The following error occurred while executing this line:
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/build.xml:298: 
 The following error occurred while executing this line:
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/hcatalog/build.xml:109:
  The following error occurred while executing this line:
 /home/jenkins/jenkins-slave/workspace/Hive-trunk-h0.21/hive/hcatalog/build-support/ant/checkstyle.xml:32:
  Got 1 errors and 0 warnings.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4421) Improve memory usage by ORC dictionaries

2013-05-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652673#comment-13652673
 ] 

Hudson commented on HIVE-4421:
--

Integrated in Hive-trunk-hadoop2 #188 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/188/])
HIVE-4421 : Improve memory usage by ORC dictionaries (Owen O'Malley via 
Ashutosh Chauhan) (Revision 1480159)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1480159
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicByteArray.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicIntArray.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/MemoryManager.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OutStream.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/PositionedOutputStream.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RedBlackTree.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/StringRedBlackTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestMemoryManager.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestStringRedBlackTree.java
* /hive/trunk/ql/src/test/resources/orc-file-dump.out


 Improve memory usage by ORC dictionaries
 

 Key: HIVE-4421
 URL: https://issues.apache.org/jira/browse/HIVE-4421
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.11.0, 0.12.0

 Attachments: HIVE-4421.D10545.1.patch, HIVE-4421.D10545.2.patch, 
 HIVE-4421.D10545.3.patch, HIVE-4421.D10545.4.patch


 Currently, for tables with many string columns, it is possible to 
 significantly underestimate the memory used by the ORC dictionaries and cause 
 the query to run out of memory in the task. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4525) Support timestamps earlier than 1970 and later than 2038

2013-05-08 Thread Mikhail Bautin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652676#comment-13652676
 ] 

Mikhail Bautin commented on HIVE-4525:
--

h4. Design proposal

We have to be able to read the current {{TimestampWritable}}-serializable 
format for backward-compatibility, and write the format recognizable by the 
current {{TimestampWritable}} implementation for timestamps within the 
currently supported range. We can use the negative range of the {{VInt}} in the 
binary representation of the timestamp that normally represents the reversed 
decimal part to indicate the presence of an additional {{VInt}} field that 
stores the remaining bits of the {{seconds}} number (i.e. {{seconds >> 31}}). 
The meaning of the 7th bit of the first byte then changes from "has decimal" to 
"has decimal or >31 bits of seconds".

The following table summarizes the four logical cases of timestamp 
serialization. The first two are backward-compatible. The last two cases are 
unsupported by the current format, so they will not be recognized by the 
current version.

|| Seconds need >31 bits || Has decimal || 7th bit of the first byte || First VInt || Second VInt ||
| No | No | {{0}} | N/A | N/A |
| No | Yes | {{1}} | {{reversedDecimal}} | N/A |
| Yes | No | {{1}} | {{-1}} | {{seconds >> 31}} |
| Yes | Yes | {{1}} | {{-2 - reversedDecimal}} | {{seconds >> 31}} |
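
The four cases in the table above can be sketched as follows. All class and method names here are illustrative stand-ins, not the actual {{TimestampWritable}} code:

```java
// Sketch of the proposed backward-compatible timestamp tail encoding.
// All names are illustrative, not the actual Hive implementation.
public class TimestampEncodingSketch {

    /** The variable-length tail written after the low 31 bits of seconds. */
    public static final class Tail {
        public final boolean highBitSet;  // 7th bit of the first byte
        public final Integer firstVInt;   // null when absent
        public final Integer secondVInt;  // null when absent

        Tail(boolean highBitSet, Integer firstVInt, Integer secondVInt) {
            this.highBitSet = highBitSet;
            this.firstVInt = firstVInt;
            this.secondVInt = secondVInt;
        }
    }

    /**
     * @param seconds         full seconds value; may need more than 31 bits
     * @param reversedDecimal reversed decimal (fraction) part, or -1 if none
     */
    public static Tail encode(long seconds, int reversedDecimal) {
        boolean needsExtraBits = seconds < 0 || seconds >= (1L << 31);
        boolean hasDecimal = reversedDecimal >= 0;
        if (!needsExtraBits) {
            // Backward-compatible cases: current readers understand both.
            return hasDecimal ? new Tail(true, reversedDecimal, null)
                              : new Tail(false, null, null);
        }
        int high = (int) (seconds >> 31);  // remaining bits of the seconds value
        // The negative range of the first VInt signals that a second VInt follows.
        return hasDecimal ? new Tail(true, -2 - reversedDecimal, high)
                          : new Tail(true, -1, high);
    }
}
```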




 Support timestamps earlier than 1970 and later than 2038
 

 Key: HIVE-4525
 URL: https://issues.apache.org/jira/browse/HIVE-4525
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 TimestampWritable currently serializes timestamps using the lower 31 bits of 
 an int. This does not allow storing timestamps earlier than 1970 or later 
 than a certain point in 2038.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-05-08 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4501:


Attachment: HIVE-4501.1.patch

HIVE-4501.1.patch - disables the FileSystem cache by default in non-kerberos 
mode with impersonation turned on, which makes Hive work with default settings. 


 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Attachments: HIVE-4501.1.patch


 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in unsecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and 
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration. 
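For reference, the workaround corresponds to configuration entries like the 
following sketch (note that *disabling* the cache means setting these 
properties to true; the property names are the standard Hadoop ones mentioned 
above):

```xml
<!-- Workaround sketch: disable the FileSystem object cache -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
<property>
  <name>fs.file.impl.disable.cache</name>
  <value>true</value>
</property>
```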



[jira] [Commented] (HIVE-4508) Fix various release issues in 0.11.0rc1

2013-05-08 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652683#comment-13652683
 ] 

Owen O'Malley commented on HIVE-4508:
-

{quote}
Can you please post a review request for this patch?
{quote}

No, there is no need for me to push it into arc. Just review either the patch 
or browse it from git:
https://github.com/apache/hive/commit/5364dd83cd6a0adb5b31e3275888e2cbd7608ed4

Fixing the release notes is the only important part of the patch. You requested 
a bunch of changes and I did the work. I don't understand why you are being so 
argumentative about work that you requested.

{quote}
I noticed a couple other issues in the affected files that should be fixed 
before this patch is forward-ported to trunk
{quote}

Please file a new jira and include a patch. I'll review it promptly.

 Fix various release issues in 0.11.0rc1
 ---

 Key: HIVE-4508
 URL: https://issues.apache.org/jira/browse/HIVE-4508
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.11.0

 Attachments: h-4508.patch, h-4508.patch


 Carl described some non-code issues in the 0.11.0rc1 and I want to fix them.



[jira] [Commented] (HIVE-4525) Support timestamps earlier than 1970 and later than 2038

2013-05-08 Thread Mikhail Bautin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13652709#comment-13652709
 ] 

Mikhail Bautin commented on HIVE-4525:
--

Also, the binary-sortable representation of timestamps would have to change to 
accommodate additional high-order bits. If a 4-byte second-precision timestamp 
covers 68 years (or 136 if signed), by adding one most-significant byte we can 
cover 17408 (or 34816) years, which is good enough for all practical purposes.
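The arithmetic behind those figures can be sketched in a few lines (illustrative 
only): 2^31 seconds is roughly 68 years, the full 32-bit range roughly 136 
years, and one extra most-significant byte multiplies the range by 2^8 = 256, 
giving 68 * 256 = 17408 and 136 * 256 = 34816 years.

```java
public class TimestampRange {
    static final long SECONDS_PER_YEAR = 31_557_600L;  // 365.25 days * 86400 s

    // Approximate whole years covered by an n-bit unsigned seconds counter.
    static long yearsCovered(int bits) {
        return (1L << bits) / SECONDS_PER_YEAR;
    }

    public static void main(String[] args) {
        long base = yearsCovered(31);    // 68 years (31-bit seconds)
        long full = yearsCovered(32);    // 136 years (full 32-bit range)
        // One extra most-significant byte multiplies the range by 256:
        System.out.println(base + " " + full + " " + base * 256 + " " + full * 256);
    }
}
```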

 Support timestamps earlier than 1970 and later than 2038
 

 Key: HIVE-4525
 URL: https://issues.apache.org/jira/browse/HIVE-4525
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin

 TimestampWritable currently serializes timestamps using the lower 31 bits of 
 an int. This does not allow storing timestamps earlier than 1970 or later 
 than a certain point in 2038.



Re: support for timestamps before 1970 or after 2038

2013-05-08 Thread Mikhail Bautin
Since this does seem to be a problem in Hive, I went ahead and created a
JIRA with a design proposal of a backward-compatible solution.

https://issues.apache.org/jira/browse/HIVE-4525

Years earlier than 1970 in particular are an important case for supporting
historical data in many business applications.

Thanks,
Mikhail

On Wed, May 8, 2013 at 3:15 PM, Mikhail Bautin 
bautin.mailing.li...@gmail.com wrote:

 Hello,

 Are there plans to support timestamps that cannot be represented by a
 signed 32-bit integer number of seconds since the UNIX epoch? (i.e. those
 before 1970 or after a certain point in 2038). Currently Hive's behavior
 regarding these timestamps is inconsistent, because it is possible to
 insert them into a table, but Hive does not handle them properly. Trying to
 serialize and deserialize the 1969-12-31 23:59:59 timestamp using
 TimestampWritable results in a 2038-01-19 03:14:07 timestamp.

 Thanks,
 Mikhail
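The wraparound described above is a direct consequence of keeping only the 
lower 31 bits of the epoch-seconds value. A minimal illustration (not the 
actual TimestampWritable code; uses UTC for simplicity):

```java
import java.time.Instant;

public class WraparoundDemo {
    // Keep only the lower 31 bits of the epoch seconds, as the legacy format does.
    static long truncateTo31Bits(long epochSeconds) {
        return epochSeconds & 0x7FFFFFFFL;
    }

    public static void main(String[] args) {
        // 1969-12-31T23:59:59Z is epoch second -1; truncation maps it to 2^31 - 1.
        long before1970 = Instant.parse("1969-12-31T23:59:59Z").getEpochSecond();
        long wrapped = truncateTo31Bits(before1970);   // 2147483647
        System.out.println(Instant.ofEpochSecond(wrapped));  // 2038-01-19T03:14:07Z
    }
}
```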




[jira] [Updated] (HIVE-4495) Implement vectorized string substr

2013-05-08 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4495:
---

Assignee: Timothy Chen  (was: Eric Hanson)

 Implement vectorized string substr
 --

 Key: HIVE-4495
 URL: https://issues.apache.org/jira/browse/HIVE-4495
 Project: Hive
  Issue Type: Sub-task
Reporter: Timothy Chen
Assignee: Timothy Chen





[jira] [Updated] (HIVE-4493) Implement vectorized filter for string column compared to string column

2013-05-08 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4493:
---

   Resolution: Fixed
Fix Version/s: vectorization-branch
   Status: Resolved  (was: Patch Available)

Committed to branch. Thanks, Eric!

 Implement vectorized filter for string column compared to string column
 ---

 Key: HIVE-4493
 URL: https://issues.apache.org/jira/browse/HIVE-4493
 Project: Hive
  Issue Type: Sub-task
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: vectorization-branch

 Attachments: HIVE-4493.1.patch






[jira] [Created] (HIVE-4526) auto_sortmerge_join_9.q throws NPE but test is succeeded

2013-05-08 Thread Navis (JIRA)
Navis created HIVE-4526:
---

 Summary: auto_sortmerge_join_9.q throws NPE but test is succeeded
 Key: HIVE-4526
 URL: https://issues.apache.org/jira/browse/HIVE-4526
 Project: Hive
  Issue Type: Test
  Components: Tests
Reporter: Navis
Priority: Minor


auto_sortmerge_join_9.q

{noformat}
[junit] Running org.apache.hadoop.hive.cli.TestCliDriver
[junit] Begin query: auto_sortmerge_join_9.q
[junit] Deleted 
file:/home/navis/apache/oss-hive/build/ql/test/data/warehouse/tbl1
[junit] Deleted 
file:/home/navis/apache/oss-hive/build/ql/test/data/warehouse/tbl2
[junit] org.apache.hadoop.hive.ql.metadata.HiveException: Failed with 
exception nulljava.lang.NullPointerException
[junit] at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getRowInspectorFromPartitionedTable(FetchOperator.java:252)
[junit] at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getOutputObjectInspector(FetchOperator.java:605)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.initializeOperators(MapredLocalTask.java:393)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:277)
[junit] at 
org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:676)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
[junit] 
[junit] at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getOutputObjectInspector(FetchOperator.java:631)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.initializeOperators(MapredLocalTask.java:393)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:277)
[junit] at 
org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:676)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
[junit] org.apache.hadoop.hive.ql.metadata.HiveException: Failed with 
exception nulljava.lang.NullPointerException
[junit] at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getRowInspectorFromPartitionedTable(FetchOperator.java:252)
[junit] at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getOutputObjectInspector(FetchOperator.java:605)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.initializeOperators(MapredLocalTask.java:393)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:277)
[junit] at 
org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:676)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
[junit] 
[junit] at 
org.apache.hadoop.hive.ql.exec.FetchOperator.getOutputObjectInspector(FetchOperator.java:631)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.initializeOperators(MapredLocalTask.java:393)
[junit] at 
org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:277)
[junit] at 
org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:676)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
[junit] Running: diff -a 
/home/navis/apache/oss-hive/build/ql/test/logs/clientpositive/auto_sortmerge_join_9.q.out
 
/home/navis/apache/oss-hive/ql/src/test/results/clientpositive/auto_sortmerge_join_9.q.out
[junit] Done query: auto_sortmerge_join_9.q elapsedTime=178s
[junit] Cleaning up TestCliDriver
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 

[jira] [Created] (HIVE-4527) Fix eclipse project template

2013-05-08 Thread Carl Steinbach (JIRA)
Carl Steinbach created HIVE-4527:


 Summary: Fix eclipse project template
 Key: HIVE-4527
 URL: https://issues.apache.org/jira/browse/HIVE-4527
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Carl Steinbach
Priority: Blocker
 Fix For: 0.11.0


The eclipse template is broken on trunk and branch-0.11. 
