[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checked in is not up-to-date

2013-04-16 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632604#comment-13632604
 ] 

Navis commented on HIVE-4300:
-

[~ashutoshc] Could you commit this? I tried but I couldn't.

 ant thriftif generated code that is checked in is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local' on a freshly checked-out 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated, or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm <file>... to update what will be committed)
 #   (use git checkout -- <file>... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add <file>... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.h
 # serde/src/gen/thrift/gen-cpp/megastruct_types.cpp
 # 

[jira] [Updated] (HIVE-3996) Correctly enforce the memory limit on the multi-table map-join

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3996:
-

   Resolution: Fixed
Fix Version/s: 0.11.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed. Thanks Vikram

 Correctly enforce the memory limit on the multi-table map-join
 --

 Key: HIVE-3996
 URL: https://issues.apache.org/jira/browse/HIVE-3996
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.11.0

 Attachments: HIVE-3996_2.patch, HIVE-3996_3.patch, HIVE-3996_4.patch, 
 HIVE-3996_5.patch, HIVE-3996_6.patch, HIVE-3996_7.patch, HIVE-3996_8.patch, 
 HIVE-3996_9.patch, hive.3996.9.patch-nohcat, HIVE-3996.patch


 Currently with HIVE-3784, the joins are converted to map-joins based on 
 checks of the table size against the config variable: 
 hive.auto.convert.join.noconditionaltask.size. 
 However, the current implementation will also merge multiple mapjoin 
 operators into a single task regardless of whether the sum of the table sizes 
 will exceed the configured value.
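A rough HiveQL sketch of the scenario described above; the table names and the threshold value are illustrative, and the two auto-convert settings are assumed from HIVE-3784 rather than spelled out in this description:

set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=10000000; -- illustrative threshold in bytes

-- dim1 and dim2 may each fit under the threshold on their own, yet the two
-- resulting map-join operators can be merged into a single task whose combined
-- hash tables exceed the configured value.
select f.id, d1.name, d2.name
from fact f
join dim1 d1 on f.d1_key = d1.id
join dim2 d2 on f.d2_key = d2.id;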

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3682) when outputting a hive table to a file, users should be able to have a separator of their own choice

2013-04-16 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-3682:
--

Attachment: HIVE-3682.D10275.1.patch

khorgath requested code review of HIVE-3682 [jira] when outputting a hive table to 
a file, users should be able to have a separator of their own choice.

Reviewers: JIRA

HIVE-3682 Supporting custom INSERT OVERWRITE LOCAL DIRECTORY syntax with SerDe 
and Outputformat support

By default, when outputting a Hive table to a file, the columns of the table are 
separated by the ^A character (that is, \001).
But users should have the right to set a separator of their own choice.

In addition, we need to be able to support a custom SerDe specification for the 
output (such as an available JSON SerDe),
or we need to be able to specify an output format, like a 'stored as rcfile' 
specification, to allow cases
where we want to export data that is meant to be copied into dfs elsewhere and 
directly read as an external table.

Usage Example:
create table for_test (key string, value string);
load data local inpath './in1.txt' into table for_test;
select * from for_test;
UT-01:default separator is \001 line separator is \n
insert overwrite local directory './test-01'
select * from src ;

create table array_table (a array<string>, b array<string>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ',';

load data local inpath '../hive/examples/files/arraytest.txt' overwrite into 
table table2;

CREATE TABLE map_table (foo STRING, bar MAP<STRING, STRING>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ','
MAP KEYS TERMINATED BY ':'
STORED AS TEXTFILE;

UT-02:defined field separator as ':'
insert overwrite local directory './test-02'
row format delimited
FIELDS TERMINATED BY ':'
select * from src ;

UT-03: the line separator is NOT allowed to be defined as another separator
insert overwrite local directory './test-03'
row format delimited
FIELDS TERMINATED BY ':'
select * from src ;

UT-04: define map separators
insert overwrite local directory './test-04'
row format delimited
FIELDS TERMINATED BY '\t'
COLLECTION ITEMS TERMINATED BY ','
MAP KEYS TERMINATED BY ':'
select * from src;

UT-05: STORED-AS specification
insert overwrite local directory './test-05'
stored as rcfile
select * from src;

UT-06: custom SerDe specification for output
insert overwrite local directory './test-06'
row format 'org.apache.hadoop.hive.serde2.DelimitedJSONSerDe'
stored as textfile
select * from src;

TEST PLAN
  Included .q files

REVISION DETAIL
  https://reviews.facebook.net/D10275

AFFECTED FILES
  data/files/array_table.txt
  data/files/map_table.txt
  ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
  ql/src/java/org/apache/hadoop/hive/ql/parse/QB.java
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/LocalDirectoryDesc.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
  ql/src/test/queries/clientpositive/insert_overwrite_local_directory_1.q
  ql/src/test/results/clientpositive/insert_overwrite_local_directory_1.q.out

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/24573/

To: JIRA, khorgath


 when outputting a hive table to a file, users should be able to have a separator 
 of their own choice
 --

 Key: HIVE-3682
 URL: https://issues.apache.org/jira/browse/HIVE-3682
 Project: Hive
  Issue Type: New Feature
  Components: CLI
Affects Versions: 0.8.1
 Environment: Linux 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 
 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
 java version 1.6.0_25
 hadoop-0.20.2-cdh3u0
 hive-0.8.1
Reporter: caofangkun
Assignee: Gang Tim Liu
 Attachments: HIVE-3682-1.patch, HIVE-3682.D10275.1.patch, 
 HIVE-3682.with.serde.patch


 By default, when outputting a Hive table to a file, the columns of the table are 
 separated by the ^A character (that is, \001).
 But users should have the right to set a separator of their own choice.
 Usage Example:
 create table for_test (key string, value string);
 load data local inpath './in1.txt' into table for_test;
 select * from for_test;
 UT-01:default separator is \001 line separator is \n
 insert overwrite local directory './test-01' 
 select * from src ;
 create table array_table (a array<string>, b array<string>)
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY '\t'
 COLLECTION ITEMS TERMINATED BY ',';
 load data local inpath '../hive/examples/files/arraytest.txt' overwrite into 
 table table2;
 CREATE TABLE map_table (foo STRING, bar MAP<STRING, STRING>)
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY '\t'
 COLLECTION ITEMS TERMINATED BY ','
 MAP KEYS TERMINATED BY ':'
 STORED AS TEXTFILE;

[jira] [Commented] (HIVE-3682) when outputting a hive table to a file, users should be able to have a separator of their own choice

2013-04-16 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632647#comment-13632647
 ] 

Phabricator commented on HIVE-3682:
---

khorgath has added reviewers to the revision HIVE-3682 [jira] when outputting a hive 
table to a file, users should be able to have a separator of their own choice.
Added Reviewers: ashutoshc, omalley

  Updated patch based on [~caofangkun]'s initial patch to support STORED-AS and 
SerDe specification.

REVISION DETAIL
  https://reviews.facebook.net/D10275

To: JIRA, ashutoshc, omalley, khorgath


 when outputting a hive table to a file, users should be able to have a separator 
 of their own choice
 --

 Key: HIVE-3682
 URL: https://issues.apache.org/jira/browse/HIVE-3682
 Project: Hive
  Issue Type: New Feature
  Components: CLI
Affects Versions: 0.8.1
 Environment: Linux 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 
 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
 java version 1.6.0_25
 hadoop-0.20.2-cdh3u0
 hive-0.8.1
Reporter: caofangkun
Assignee: Gang Tim Liu
 Attachments: HIVE-3682-1.patch, HIVE-3682.D10275.1.patch, 
 HIVE-3682.with.serde.patch


 By default, when outputting a Hive table to a file, the columns of the table are 
 separated by the ^A character (that is, \001).
 But users should have the right to set a separator of their own choice.
 Usage Example:
 create table for_test (key string, value string);
 load data local inpath './in1.txt' into table for_test;
 select * from for_test;
 UT-01:default separator is \001 line separator is \n
 insert overwrite local directory './test-01' 
 select * from src ;
 create table array_table (a array<string>, b array<string>)
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY '\t'
 COLLECTION ITEMS TERMINATED BY ',';
 load data local inpath '../hive/examples/files/arraytest.txt' overwrite into 
 table table2;
 CREATE TABLE map_table (foo STRING, bar MAP<STRING, STRING>)
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY '\t'
 COLLECTION ITEMS TERMINATED BY ','
 MAP KEYS TERMINATED BY ':'
 STORED AS TEXTFILE;
 UT-02:defined field separator as ':'
 insert overwrite local directory './test-02' 
 row format delimited 
 FIELDS TERMINATED BY ':' 
 select * from src ;
 UT-03: the line separator is NOT allowed to be defined as another separator 
 insert overwrite local directory './test-03' 
 row format delimited 
 FIELDS TERMINATED BY ':' 
 select * from src ;
 UT-04: define map separators 
 insert overwrite local directory './test-04' 
 row format delimited 
 FIELDS TERMINATED BY '\t'
 COLLECTION ITEMS TERMINATED BY ','
 MAP KEYS TERMINATED BY ':'
 select * from src;

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4167) Hive converts bucket map join to SMB join even when tables are not sorted

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4167:
-

Attachment: hive.4167.4.patch-nohcat

 Hive converts bucket map join to SMB join even when tables are not sorted
 -

 Key: HIVE-4167
 URL: https://issues.apache.org/jira/browse/HIVE-4167
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: hive.4167.1.patch, hive.4167.2.patch, hive.4167.3.patch, 
 hive.4167.4.patch-nohcat, HIVE-4167.patch


 If tables are just bucketed but not sorted, we are generating smb join 
 operator. This results in loss of rows in queries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4167) Hive converts bucket map join to SMB join even when tables are not sorted

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4167:
-

Attachment: hive.4167.4.patch

 Hive converts bucket map join to SMB join even when tables are not sorted
 -

 Key: HIVE-4167
 URL: https://issues.apache.org/jira/browse/HIVE-4167
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: hive.4167.1.patch, hive.4167.2.patch, hive.4167.3.patch, 
 hive.4167.4.patch, hive.4167.4.patch-nohcat, HIVE-4167.patch


 If tables are just bucketed but not sorted, we are generating smb join 
 operator. This results in loss of rows in queries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4106) SMB joins fail in multi-way joins

2013-04-16 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632672#comment-13632672
 ] 

Namit Jain commented on HIVE-4106:
--

[~vikram.dixit], can you confirm ?

 SMB joins fail in multi-way joins
 -

 Key: HIVE-4106
 URL: https://issues.apache.org/jira/browse/HIVE-4106
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Blocker
 Attachments: auto_sortmerge_join_12.q, HIVE-4106.patch


 I see array out of bounds exception in case of multi way smb joins. This is 
 related to changes that went in as part of HIVE-3403. This issue has been 
 discussed in HIVE-3891.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4308) Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk

2013-04-16 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632677#comment-13632677
 ] 

Namit Jain commented on HIVE-4308:
--

+1

 Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk
 --

 Key: HIVE-4308
 URL: https://issues.apache.org/jira/browse/HIVE-4308
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Ashutosh Chauhan
 Attachments: HIVE-4308.D10269.1.patch


 This only happens while running whole test suite. Failure doesn't manifest if 
 this test is run alone.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4284) Implement class for vectorized row batch

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4284:
-

Status: Open  (was: Patch Available)

Comments from Jitendra.

For a big patch like this, it would be very useful to have a phabricator entry

 Implement class for vectorized row batch
 

 Key: HIVE-4284
 URL: https://issues.apache.org/jira/browse/HIVE-4284
 Project: Hive
  Issue Type: Sub-task
Reporter: Jitendra Nath Pandey
Assignee: Eric Hanson
 Attachments: HIVE-4284.1.patch


 Vectorized row batch object will represent the row batch that vectorized 
 operators will work on. Refer to design spec attached to HIVE-4160 for 
 details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4365) wrong result in left semi join

2013-04-16 Thread ransom.hezhiqiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ransom.hezhiqiang updated HIVE-4365:


Description: 
wrong result in left semi join while hive.optimize.ppd=true
for example:
1、create table
   create table t1(c1 int,c2 int, c3 int, c4 int, c5 double,c6 int,c7 string)   
row format DELIMITED FIELDS TERMINATED BY '|';
   create table t2(c1 int) ;
2、load data
load data local inpath '/home/test/t1.txt' OVERWRITE into table t1;
load data local inpath '/home/test/t2.txt' OVERWRITE into table t2;
t1 data:
1|3|10003|52|781.96|555|201203
1|3|10003|39|782.96|555|201203
1|3|10003|87|783.96|555|201203
2|5|10004|24|789.96|555|201203
2|5|10004|58|788.96|555|201203
t2 data:
555
3、execute Query
select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7  from t1 left semi join t2 on 
t1.c6 = t2.c1 and  t1.c1 =  '1' and t1.c7 = '201203' ;   
This returns results.
select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7  from t1 left semi join t2 on 
t1.c6 = t2.c1 where t1.c1 =  '1' and t1.c7 = '201203' ;   
This returns no results.



  was:
wrong result in left semi join while hive.optimize.ppd=true
for example:
1、create table
   create table t1(c1 int,c2 int, c3 int, c4 int, c5 double,c6 int,c7 string)   
row format DELIMITED FIELDS TERMINATED BY '|';
   create table t2(c1 int) ;
2、load data
load data local inpath '/home/omm/t1.txt' OVERWRITE into table t1;
load data local inpath '/home/omm/t2.txt' OVERWRITE into table t2;
t1 data:
1|3|10003|52|781.96|555|201203
1|3|10003|39|782.96|555|201203
1|3|10003|87|783.96|555|201203
2|5|10004|24|789.96|555|201203
2|5|10004|58|788.96|555|201203
t2 data:
555
3、execute Query
select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7  from t1 left semi join t2 on 
t1.c6 = t2.c1 and  t1.c1 =  '1' and t1.c7 = '201203' ;   
This returns results.
select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7  from t1 left semi join t2 on 
t1.c6 = t2.c1 where t1.c1 =  '1' and t1.c7 = '201203' ;   
This returns no results.




 wrong result in left semi join
 --

 Key: HIVE-4365
 URL: https://issues.apache.org/jira/browse/HIVE-4365
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0, 0.10.0
Reporter: ransom.hezhiqiang

 wrong result in left semi join while hive.optimize.ppd=true
 for example:
 1、create table
create table t1(c1 int,c2 int, c3 int, c4 int, c5 double,c6 int,c7 string) 
   row format DELIMITED FIELDS TERMINATED BY '|';
create table t2(c1 int) ;
 2、load data
 load data local inpath '/home/test/t1.txt' OVERWRITE into table t1;
 load data local inpath '/home/test/t2.txt' OVERWRITE into table t2;
 t1 data:
 1|3|10003|52|781.96|555|201203
 1|3|10003|39|782.96|555|201203
 1|3|10003|87|783.96|555|201203
 2|5|10004|24|789.96|555|201203
 2|5|10004|58|788.96|555|201203
 t2 data:
 555
 3、execute Query
 select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7  from t1 left semi join t2 
 on t1.c6 = t2.c1 and  t1.c1 =  '1' and t1.c7 = '201203' ;   
 This returns results.
 select t1.c1,t1.c2,t1.c3,t1.c4,t1.c5,t1.c6,t1.c7  from t1 left semi join t2 
 on t1.c6 = t2.c1 where t1.c1 =  '1' and t1.c7 = '201203' ;   
 This returns no results.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4308) Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4308:
-

   Resolution: Fixed
Fix Version/s: 0.11.0
 Assignee: Navis
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed. Thanks Navis

 Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk
 --

 Key: HIVE-4308
 URL: https://issues.apache.org/jira/browse/HIVE-4308
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.11.0

 Attachments: HIVE-4308.D10269.1.patch


 This only happens while running whole test suite. Failure doesn't manifest if 
 this test is run alone.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4366) wrong result in show role grant user

2013-04-16 Thread ransom.hezhiqiang (JIRA)
ransom.hezhiqiang created HIVE-4366:
---

 Summary: wrong result in show role grant user
 Key: HIVE-4366
 URL: https://issues.apache.org/jira/browse/HIVE-4366
 Project: Hive
  Issue Type: Bug
  Components: Authentication
Affects Versions: 0.10.0, 0.9.0
Reporter: ransom.hezhiqiang


in test case authorization_1.q

show role grant user hive_test_user
the result is :
role name:src_role
role name:src_role

the same result is printed twice.
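
A hedged HiveQL sketch of the kind of setup that exposes this; the statements below are assumed from the test case name and are not quoted from authorization_1.q:

create role src_role;
grant role src_role to user hive_test_user;
show role grant user hive_test_user;
-- expected: one row for src_role; observed: the same row printed twice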

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4324) ORC Turn off dictionary encoding when number of distinct keys is greater than threshold

2013-04-16 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632701#comment-13632701
 ] 

Namit Jain commented on HIVE-4324:
--

+1

 ORC Turn off dictionary encoding when number of distinct keys is greater than 
 threshold
 ---

 Key: HIVE-4324
 URL: https://issues.apache.org/jira/browse/HIVE-4324
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Affects Versions: 0.11.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-4324.1.patch.txt


 Add a configurable threshold so that if the number of distinct values in a 
 string column is greater than that fraction of non-null values, dictionary 
 encoding is turned off.
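A minimal HiveQL sketch of how such a threshold could be used; the property name below is an assumption (the description does not spell it out) and the tables are illustrative:

-- Assumed property name; treat it as a placeholder for whatever HIVE-4324 adds.
set hive.exec.orc.dictionary.key.size.threshold=0.5;

-- With a 0.5 threshold, a string column whose distinct-value count exceeds 50%
-- of its non-null values would be written without dictionary encoding.
create table logs_orc (id int, msg string) stored as orc;
insert overwrite table logs_orc select id, msg from logs_text;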

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4367) enhance TRUNCATE syntax to drop data of external table

2013-04-16 Thread caofangkun (JIRA)
caofangkun created HIVE-4367:


 Summary: enhance TRUNCATE syntax to drop data of external table
 Key: HIVE-4367
 URL: https://issues.apache.org/jira/browse/HIVE-4367
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: caofangkun
Priority: Minor


In my use case,
sometimes I have to remove data of external tables to free up storage space of 
the cluster.
So it's necessary to enhance the syntax like 
TRUNCATE TABLE srcpart_truncate PARTITION (dt='201130412') FORCE;
to remove data from an EXTERNAL table.

And I added a configuration property to control whether removed data goes to the Trash:
<property>
  <name>hive.truncate.skiptrash</name>
  <value>false</value>
  <description>
    if true will remove data to trash, else false drop data immediately
  </description>
</property>

For example :
hive (default)> TRUNCATE TABLE external1 partition (ds='11');
FAILED: Error in semantic analysis: Cannot truncate non-managed table external1
hive (default)> TRUNCATE TABLE external1 partition (ds='11') FORCE;
[2013-04-16 17:15:52]: Compile Start 
[2013-04-16 17:15:52]: Compile End
[2013-04-16 17:15:52]: OK
[2013-04-16 17:15:52]: Time taken: 0.413 seconds

hive (default)> set hive.truncate.skiptrash;
hive.truncate.skiptrash=false

hive (default)> set hive.truncate.skiptrash=true;
hive (default)> TRUNCATE TABLE external1 partition (ds='12') FORCE;
[2013-04-16 17:16:21]: Compile Start 
[2013-04-16 17:16:21]: Compile End
[2013-04-16 17:16:21]: OK
[2013-04-16 17:16:21]: Time taken: 0.143 seconds

hive (default)> dfs -ls /user/test/.Trash/Current/;
Found 1 items
drwxr-xr-x - kun.cao supergroup 0 2013-04-16 17:06 
/user/test/.Trash/Current/ds=11

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4367) enhance TRUNCATE syntax to drop data of external table

2013-04-16 Thread caofangkun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caofangkun updated HIVE-4367:
-

Description: 
In my use case,
sometimes I have to remove data of external tables to free up storage space of 
the cluster.
So it's necessary to enhance the syntax like 
TRUNCATE TABLE srcpart_truncate PARTITION (dt='201130412') FORCE;
to remove data from an EXTERNAL table.

And I added a configuration property to control whether removed data goes to the Trash:
<property>
  <name>hive.truncate.skiptrash</name>
  <value>false</value>
  <description>
    if true will remove data to trash, else false drop data immediately
  </description>
</property>

For example :
hive (default)> TRUNCATE TABLE external1 partition (ds='11');
FAILED: Error in semantic analysis: Cannot truncate non-managed table external1
hive (default)> TRUNCATE TABLE external1 partition (ds='11') FORCE;
[2013-04-16 17:15:52]: Compile Start 
[2013-04-16 17:15:52]: Compile End
[2013-04-16 17:15:52]: OK
[2013-04-16 17:15:52]: Time taken: 0.413 seconds

hive (default)> set hive.truncate.skiptrash;
hive.truncate.skiptrash=false

hive (default)> set hive.truncate.skiptrash=true;
hive (default)> TRUNCATE TABLE external1 partition (ds='12') FORCE;
[2013-04-16 17:16:21]: Compile Start 
[2013-04-16 17:16:21]: Compile End
[2013-04-16 17:16:21]: OK
[2013-04-16 17:16:21]: Time taken: 0.143 seconds

hive (default)> dfs -ls /user/test/.Trash/Current/;
Found 1 items
drwxr-xr-x - test supergroup 0 2013-04-16 17:06 /user/test/.Trash/Current/ds=11

  was:
In my use case,
sometimes I have to remove data of external tables to free up storage space of 
the cluster.
So it's necessary to enhance the syntax like 
TRUNCATE TABLE srcpart_truncate PARTITION (dt='201130412') FORCE;
to remove data from an EXTERNAL table.

And I added a configuration property to control whether removed data goes to the Trash:
<property>
  <name>hive.truncate.skiptrash</name>
  <value>false</value>
  <description>
    if true will remove data to trash, else false drop data immediately
  </description>
</property>

For example :
hive (default)> TRUNCATE TABLE external1 partition (ds='11');
FAILED: Error in semantic analysis: Cannot truncate non-managed table external1
hive (default)> TRUNCATE TABLE external1 partition (ds='11') FORCE;
[2013-04-16 17:15:52]: Compile Start 
[2013-04-16 17:15:52]: Compile End
[2013-04-16 17:15:52]: OK
[2013-04-16 17:15:52]: Time taken: 0.413 seconds

hive (default)> set hive.truncate.skiptrash;
hive.truncate.skiptrash=false

hive (default)> set hive.truncate.skiptrash=true;
hive (default)> TRUNCATE TABLE external1 partition (ds='12') FORCE;
[2013-04-16 17:16:21]: Compile Start 
[2013-04-16 17:16:21]: Compile End
[2013-04-16 17:16:21]: OK
[2013-04-16 17:16:21]: Time taken: 0.143 seconds

hive (default)> dfs -ls /user/test/.Trash/Current/;
Found 1 items
drwxr-xr-x - kun.cao supergroup 0 2013-04-16 17:06 
/user/test/.Trash/Current/ds=11


 enhance TRUNCATE syntax to drop data of external table
 

 Key: HIVE-4367
 URL: https://issues.apache.org/jira/browse/HIVE-4367
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: caofangkun
Priority: Minor

 In my use case,
 sometimes I have to remove data of external tables to free up storage space 
 of the cluster.
 So it's necessary to enhance the syntax like 
 TRUNCATE TABLE srcpart_truncate PARTITION (dt='201130412') FORCE;
 to remove data from an EXTERNAL table.
 And I added a configuration property to control whether removed data goes to the Trash:
 <property>
   <name>hive.truncate.skiptrash</name>
   <value>false</value>
   <description>
     if true will remove data to trash, else false drop data immediately
   </description>
 </property>
 For example :
 hive (default)> TRUNCATE TABLE external1 partition (ds='11');
 FAILED: Error in semantic analysis: Cannot truncate non-managed table 
 external1
 hive (default)> TRUNCATE TABLE external1 partition (ds='11') FORCE;
 [2013-04-16 17:15:52]: Compile Start 
 [2013-04-16 17:15:52]: Compile End
 [2013-04-16 17:15:52]: OK
 [2013-04-16 17:15:52]: Time taken: 0.413 seconds
 hive (default)> set hive.truncate.skiptrash;
 hive.truncate.skiptrash=false
 hive (default)> set hive.truncate.skiptrash=true;
 hive (default)> TRUNCATE TABLE external1 partition (ds='12') FORCE;
 [2013-04-16 17:16:21]: Compile Start 
 [2013-04-16 17:16:21]: Compile End
 [2013-04-16 17:16:21]: OK
 [2013-04-16 17:16:21]: Time taken: 0.143 seconds
 hive (default)> dfs -ls /user/test/.Trash/Current/;
 Found 1 items
 drwxr-xr-x - test supergroup 0 2013-04-16 17:06 /user/test/.Trash/Current/ds=11

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-446) Implement TRUNCATE

2013-04-16 Thread caofangkun (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632717#comment-13632717
 ] 

caofangkun commented on HIVE-446:
-

Thank you [~gangtimliu].
Many times I have to remove data from an external table to free up storage space 
of the cluster, so it's necessary for me to have a statement like truncate 
... force to remove data.
I submitted an issue: https://issues.apache.org/jira/browse/HIVE-4367 
Just in case it may be helpful for people with a similar need.


 Implement TRUNCATE
 --

 Key: HIVE-446
 URL: https://issues.apache.org/jira/browse/HIVE-446
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Prasad Chakka
Assignee: Navis
 Fix For: 0.11.0

 Attachments: HIVE-446.D7371.1.patch, HIVE-446.D7371.2.patch, 
 HIVE-446.D7371.3.patch, HIVE-446.D7371.4.patch


 truncate the data but leave the table and metadata intact.
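For reference, a minimal sketch of the syntax this issue implements; the table and partition names are illustrative:

TRUNCATE TABLE managed_tbl;
TRUNCATE TABLE managed_tbl PARTITION (ds='2013-04-16');
-- The table definition and its metadata are kept; only the data is removed.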

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4167) Hive converts bucket map join to SMB join even when tables are not sorted

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4167:
-

Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 Hive converts bucket map join to SMB join even when tables are not sorted
 -

 Key: HIVE-4167
 URL: https://issues.apache.org/jira/browse/HIVE-4167
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: hive.4167.1.patch, hive.4167.2.patch, hive.4167.3.patch, 
 hive.4167.4.patch, hive.4167.4.patch-nohcat, HIVE-4167.patch


 If tables are just bucketed but not sorted, we are generating smb join 
 operator. This results in loss of rows in queries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4167) Hive converts bucket map join to SMB join even when tables are not sorted

2013-04-16 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632727#comment-13632727
 ] 

Namit Jain commented on HIVE-4167:
--

added comments

 Hive converts bucket map join to SMB join even when tables are not sorted
 -

 Key: HIVE-4167
 URL: https://issues.apache.org/jira/browse/HIVE-4167
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: hive.4167.1.patch, hive.4167.2.patch, hive.4167.3.patch, 
 hive.4167.4.patch, hive.4167.4.patch-nohcat, HIVE-4167.patch


 If tables are just bucketed but not sorted, we are generating smb join 
 operator. This results in loss of rows in queries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4167) Hive converts bucket map join to SMB join even when tables are not sorted

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4167:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed. Thanks Ashutosh

 Hive converts bucket map join to SMB join even when tables are not sorted
 -

 Key: HIVE-4167
 URL: https://issues.apache.org/jira/browse/HIVE-4167
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: hive.4167.1.patch, hive.4167.2.patch, hive.4167.3.patch, 
 hive.4167.4.patch, hive.4167.4.patch-nohcat, HIVE-4167.patch


 If tables are just bucketed but not sorted, we are generating smb join 
 operator. This results in loss of rows in queries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3891) physical optimizer changes for auto sort-merge join

2013-04-16 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632730#comment-13632730
 ] 

Namit Jain commented on HIVE-3891:
--

refreshing again and resolving conflicts

 physical optimizer changes for auto sort-merge join
 ---

 Key: HIVE-3891
 URL: https://issues.apache.org/jira/browse/HIVE-3891
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: auto_sortmerge_join_1.q, auto_sortmerge_join_1.q.out, 
 hive.3891.10.patch, hive.3891.11.patch, hive.3891.1.patch, hive.3891.2.patch, 
 hive.3891.3.patch, hive.3891.4.patch, hive.3891.5.patch, hive.3891.6.patch, 
 hive.3891.7.patch, HIVE-3891_8.patch, hive.3891.9.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3996) Correctly enforce the memory limit on the multi-table map-join

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3996:
-

Fix Version/s: (was: 0.11.0)

 Correctly enforce the memory limit on the multi-table map-join
 --

 Key: HIVE-3996
 URL: https://issues.apache.org/jira/browse/HIVE-3996
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-3996_2.patch, HIVE-3996_3.patch, HIVE-3996_4.patch, 
 HIVE-3996_5.patch, HIVE-3996_6.patch, HIVE-3996_7.patch, HIVE-3996_8.patch, 
 HIVE-3996_9.patch, hive.3996.9.patch-nohcat, HIVE-3996.patch


 Currently with HIVE-3784, the joins are converted to map-joins based on 
 checks of the table size against the config variable: 
 hive.auto.convert.join.noconditionaltask.size. 
 However, the current implementation will also merge multiple mapjoin 
 operators into a single task regardless of whether the sum of the table sizes 
 will exceed the configured value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4308) Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4308:
-

Fix Version/s: (was: 0.11.0)
   0.12.0

 Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk
 --

 Key: HIVE-4308
 URL: https://issues.apache.org/jira/browse/HIVE-4308
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.12.0

 Attachments: HIVE-4308.D10269.1.patch


 This only happens while running whole test suite. Failure doesn't manifest if 
 this test is run alone.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3891) physical optimizer changes for auto sort-merge join

2013-04-16 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3891:
-

Attachment: hive.3891.12.patch

 physical optimizer changes for auto sort-merge join
 ---

 Key: HIVE-3891
 URL: https://issues.apache.org/jira/browse/HIVE-3891
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: auto_sortmerge_join_1.q, auto_sortmerge_join_1.q.out, 
 hive.3891.10.patch, hive.3891.11.patch, hive.3891.12.patch, 
 hive.3891.1.patch, hive.3891.2.patch, hive.3891.3.patch, hive.3891.4.patch, 
 hive.3891.5.patch, hive.3891.6.patch, hive.3891.7.patch, HIVE-3891_8.patch, 
 hive.3891.9.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4268) Beeline should support the -f option

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632760#comment-13632760
 ] 

Hudson commented on HIVE-4268:
--

Integrated in Hive-trunk-hadoop2 #161 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/161/])
HIVE-4268. Beeline should support the -f option (Rob Weltman via cws) 
(Revision 1467920)

 Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467920
Files : 
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLine.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java
* /hive/trunk/beeline/src/test/org
* /hive/trunk/beeline/src/test/org/apache
* /hive/trunk/beeline/src/test/org/apache/hive
* /hive/trunk/beeline/src/test/org/apache/hive/beeline
* /hive/trunk/beeline/src/test/org/apache/hive/beeline/src
* /hive/trunk/beeline/src/test/org/apache/hive/beeline/src/test
* 
/hive/trunk/beeline/src/test/org/apache/hive/beeline/src/test/TestBeeLineWithArgs.java
* /hive/trunk/build.xml


 Beeline should support the -f option
 

 Key: HIVE-4268
 URL: https://issues.apache.org/jira/browse/HIVE-4268
 Project: Hive
  Issue Type: Improvement
  Components: CLI, HiveServer2
Affects Versions: 0.10.0
Reporter: Rob Weltman
Assignee: Rob Weltman
  Labels: HiveServer2
 Fix For: 0.12.0

 Attachments: HIVE-4268.1.patch.txt, HIVE-4268.2.patch.txt, 
 HIVE-4268.3.patch.txt


 Beeline should support the -f option (pass in a script to execute) for 
 compatibility with the Hive CLI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-hadoop2 - Build # 161 - Still Failing

2013-04-16 Thread Apache Jenkins Server
Changes for Build #138
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)

[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver 
(Thejas Nair via Ashutosh Chauhan)

[namit] HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit)

[namit] HIVE-4240 optimize hive.enforce.bucketing and hive.enforce sorting 
insert
(Gang Tim Liu via namit)

[navis] HIVE-4288 Add IntelliJ project files files to .gitignore (Roshan Naik 
via Navis)

[namit] HIVE-4272 partition wise metadata does not work for text files

[hashutosh] HIVE-896 : Add LEAD/LAG/FIRST/LAST analytical windowing functions 
to Hive. (Harish Butani via Ashutosh Chauhan)

[namit] HIVE-4260 union_remove_12, union_remove_13 are failing on hadoop2
(Gunther Hagleitner via namit)

[hashutosh] HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover 
via Ashutosh Chauhan)

[namit] HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple 
input partitions
(Namit via Gang Tim Liu)

[hashutosh] HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan)

[hashutosh] HIVE-4122 : Queries fail if timestamp data not in expected format 
(Prasad Mujumdar via Ashutosh Chauhan)

[hashutosh] HIVE-4170 : [REGRESSION] FsShell.close closes filesystem, removing 
temporary directories (Navis via Ashutosh Chauhan)

[gates] HIVE-4264 Moved hcatalog trunk code up to hive/trunk/hcatalog

[hashutosh] HIVE-4263 : Adjust build.xml package command to move all hcat jars 
and binaries into build (Alan Gates via Ashutosh Chauhan)

[namit] HIVE-4258 Log logical plan tree for debugging
(Navis via namit)

[navis] HIVE-2264 Hive server is SHUTTING DOWN when invalid queries beeing 
executed

[kevinwilfong] HIVE-4235. CREATE TABLE IF NOT EXISTS uses inefficient way to 
check if table exists. (Gang Tim Liu via kevinwilfong)

[gangtimliu] HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via 
Gang Tim Liu)

[gangtimliu] HIVE-4155: Expose ORC's FileDump as a service

[gangtimliu] HIVE-4159: RetryingHMSHandler doesn't retry in enough cases (Kevin 
Wilfong via Gang Tim Liu)

[namit] HIVE-4149 wrong results big outer joins with array of ints
(Navis via namit)

[namit] HIVE-3958 support partial scan for analyze command - RCFile
(Gang Tim Liu via namit)

[gates] Removing old branches to limit size of Hive downloads.

[gates] Removing tags directory as we no longer need them and they're in the 
history.

[gates] Moving HCatalog into Hive.

[gates] Test that perms work for hcatalog

[hashutosh] HIVE-4007 : Create abstract classes for serializer and deserializer 
(Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-3381 : Result of outer join is not valid (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3980 : Cleanup after 3403 (Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-4042 : ignore mapjoin hint (Namit Jain via Ashutosh Chauhan)

[namit] HIVE-3348 semi-colon in comments in .q file does not work
(Nick Collins via namit)

[namit] HIVE-4212 sort merge join should work for outer joins for more than 8 
inputs
(Namit via Gang Tim Liu)

[namit] HIVE-4219 explain dependency does not capture the input table
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4092. Store complete names of tables in column access 
analyzer (Samuel Yuan via kevinwilfong)

[namit] HIVE-4208 Clientpositive test parenthesis_star_by is non-deteministic
(Mark Grover via namit)

[cws] HIVE-4217. Fix show_create_table_*.q test failures (Carl Steinbach via 
cws)

[namit] HIVE-4206 Sort merge join does not work for outer joins for 7 inputs
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4188. TestJdbcDriver2.testDescribeTable failing 
consistently. (Prasad Mujumdar via kevinwilfong)

[hashutosh] HIVE-3820 Consider creating a literal like D or BD for representing 
Decimal type constants (Gunther Hagleitner 

[jira] [Commented] (HIVE-4106) SMB joins fail in multi-way joins

2013-04-16 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632791#comment-13632791
 ] 

Vikram Dixit K commented on HIVE-4106:
--

I am going to try this and provide an update in a bit.

Thanks
Vikram.



 SMB joins fail in multi-way joins
 -

 Key: HIVE-4106
 URL: https://issues.apache.org/jira/browse/HIVE-4106
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Blocker
 Attachments: auto_sortmerge_join_12.q, HIVE-4106.patch


 I see array out of bounds exception in case of multi way smb joins. This is 
 related to changes that went in as part of HIVE-3403. This issue has been 
 discussed in HIVE-3891.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


can hive handle concurrent JDBC statements?

2013-04-16 Thread Bing Li
Hi All,


I am writing a java program to run concurrent JDBC statements. But it
failed with:
org.apache.thrift.TApplicationException: execute failed: out of sequence
response


The steps are:
1. open a connection to jdbc:derby://hiveHost:port/commonDb
2. run two select statements at the same time:
// note: both queries below go through the same Statement on a single connection
String sql = "select * from " + tableName;
ResultSet rs1 = stmt.executeQuery(sql);
ResultSet rs2 = stmt.executeQuery(sql);
while (rs1.next() && rs2.next())
{
    String s1 = rs1.getString(1);
    String s2 = rs2.getString(1);
    System.out.println(s1 + " | " + s2);
}


My question is can hive handle concurrent JDBC statements?

Thanks,
- Bing


[jira] [Updated] (HIVE-4327) NPE in constant folding with decimal

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4327:
---

   Resolution: Fixed
Fix Version/s: 0.11.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and 0.11. Thanks, Gunther!

 NPE in constant folding with decimal
 

 Key: HIVE-4327
 URL: https://issues.apache.org/jira/browse/HIVE-4327
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
Priority: Minor
 Fix For: 0.11.0

 Attachments: HIVE-4327.1.q, HIVE-4327.2.patch, HIVE-4327.3.patch


 The query:
 SELECT dec * cast('123456789012345678901234567890.1234567' as decimal) FROM 
 DECIMAL_PRECISION LIMIT 1
 fails with an NPE while constant folding. This only happens when the decimal 
 is out of range of max precision.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4105) Hive MapJoinOperator unnecessarily deserializes values for all join-keys

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4105:
---

   Resolution: Fixed
Fix Version/s: 0.12.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Vinod!

 Hive MapJoinOperator unnecessarily deserializes values for all join-keys
 

 Key: HIVE-4105
 URL: https://issues.apache.org/jira/browse/HIVE-4105
 Project: Hive
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 0.12.0

 Attachments: HIVE-4105-20130301.1.txt, HIVE-4105-20130301.txt, 
 HIVE-4105-20130415.txt, HIVE-4105.patch


 We can avoid this for inner joins. Hive does an explicit value 
 de-serialization up front, even for rows which won't emit any output. In 
 these cases, we can get by with just key de-serialization.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4315) enable doAs in unsecure mode for hive server2, when MR job runs locally

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4315:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.11. Thanks, Thejas!

 enable doAs in unsecure mode for hive server2, when MR job runs locally
 ---

 Key: HIVE-4315
 URL: https://issues.apache.org/jira/browse/HIVE-4315
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.11.0

 Attachments: HIVE-4315.1.patch, HIVE-4315.2.patch


 When the MR job is run locally by hive (instead of on the hadoop cluster), the MR job 
 ends up running as the hiveserver user instead of the user submitting the query, 
 even if the doAs configuration is enabled.
 In case of a map-side join (see [map join 
 optimization|https://cwiki.apache.org/confluence/display/Hive/MapJoinOptimization]),
 a MapredLocalTask is spawned in a child process to process the map-side file 
 before adding it to the distributed cache. When hive.server2.enable.doAs is 
 enabled, MapredLocalTask should run as the user submitting the query. But by 
 default, in unsecure mode (i.e. without kerberos security), hadoop 
 considers the user the process runs as to be the effective user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4320) Consider extending max limit for precision to 38

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4320:
---

Status: Open  (was: Patch Available)

Test {{decimal_precision.q}} is now failing, after commit of HIVE-4327

 Consider extending max limit for precision to 38
 

 Key: HIVE-4320
 URL: https://issues.apache.org/jira/browse/HIVE-4320
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-4320.1.patch, HIVE-4320.2.patch


 Max precision of 38 still fits in 128 bits. It changes the way you do math on 
 these numbers, though. Need to see if there will be perf implications, but 
 there's a strong case to support 38 (instead of 36) to comply with other DBs 
 (Oracle, SQL Server, Teradata).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4352) Guava not getting included in build package

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4352:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.11 Thanks, Gunther!

 Guava not getting included in build package
 ---

 Key: HIVE-4352
 URL: https://issues.apache.org/jira/browse/HIVE-4352
 Project: Hive
  Issue Type: Bug
Reporter: Mark Wagner
Assignee: Gunther Hagleitner
 Fix For: 0.11.0

 Attachments: HIVE-4352.1.patch


 Since HIVE-4148, Guava is not getting included in the appropriate packages. 
 This manifests as a ClassNotFoundException.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4275) Hive does not differentiate scheme and authority in file uris

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4275:
---

   Resolution: Fixed
Fix Version/s: 0.12.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Vikram!

 Hive does not differentiate scheme and authority in file uris
 -

 Key: HIVE-4275
 URL: https://issues.apache.org/jira/browse/HIVE-4275
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.12.0

 Attachments: HIVE-4275.2.patch, HIVE-4275.3.patch, HIVE-4275.4.patch, 
 HIVE-4275.patch


 Consider the following set of queries:
 ALTER TABLE abc ADD PARTITION (x='0') LOCATION 'file:///foo';
 ALTER TABLE abc ADD PARTITION (x='1') LOCATION '/foo';
 select count(*) from abc;
 Even though there are different files under these directories, depending on the 
 number of mappers, the count produces a value equal to num of mappers * num of 
 files in the 2 directories. This is incorrect.
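
A small standalone illustration (not the Hive code itself) of why the two partition locations above are distinct and must not be conflated:

  import org.apache.hadoop.fs.Path;

  public class SchemeAuthorityCheck {
    public static void main(String[] args) {
      Path withScheme = new Path("file:///foo"); // scheme given explicitly
      Path withoutScheme = new Path("/foo");     // scheme left to the default filesystem
      System.out.println(withScheme.toUri());    // carries the file scheme
      System.out.println(withoutScheme.toUri()); // carries no scheme at all
      System.out.println(withScheme.equals(withoutScheme)); // false: two different locations
    }
  }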

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4268) Beeline should support the -f option

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632903#comment-13632903
 ] 

Hudson commented on HIVE-4268:
--

Integrated in Hive-trunk-h0.21 #2066 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2066/])
HIVE-4268. Beeline should support the -f option (Rob Weltman via cws) 
(Revision 1467920)

 Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1467920
Files : 
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLine.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java
* /hive/trunk/beeline/src/test/org
* /hive/trunk/beeline/src/test/org/apache
* /hive/trunk/beeline/src/test/org/apache/hive
* /hive/trunk/beeline/src/test/org/apache/hive/beeline
* /hive/trunk/beeline/src/test/org/apache/hive/beeline/src
* /hive/trunk/beeline/src/test/org/apache/hive/beeline/src/test
* 
/hive/trunk/beeline/src/test/org/apache/hive/beeline/src/test/TestBeeLineWithArgs.java
* /hive/trunk/build.xml


 Beeline should support the -f option
 

 Key: HIVE-4268
 URL: https://issues.apache.org/jira/browse/HIVE-4268
 Project: Hive
  Issue Type: Improvement
  Components: CLI, HiveServer2
Affects Versions: 0.10.0
Reporter: Rob Weltman
Assignee: Rob Weltman
  Labels: HiveServer2
 Fix For: 0.12.0

 Attachments: HIVE-4268.1.patch.txt, HIVE-4268.2.patch.txt, 
 HIVE-4268.3.patch.txt


 Beeline should support the -f option (pass in a script to execute) for 
 compatibility with the Hive CLI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 2066 - Still Failing

2013-04-16 Thread Apache Jenkins Server
Changes for Build #2032
[namit] HIVE-4219 explain dependency does not capture the input table
(Namit via Gang Tim Liu)


Changes for Build #2033
[gates] Removing old branches to limit size of Hive downloads.

[gates] Removing tags directory as we no longer need them and they're in the 
history.

[gates] Moving HCatalog into Hive.

[gates] Test that perms work for hcatalog

[hashutosh] HIVE-4007 : Create abstract classes for serializer and deserializer 
(Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-3381 : Result of outer join is not valid (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3980 : Cleanup after 3403 (Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-4042 : ignore mapjoin hint (Namit Jain via Ashutosh Chauhan)

[namit] HIVE-3348 semi-colon in comments in .q file does not work
(Nick Collins via namit)

[namit] HIVE-4212 sort merge join should work for outer joins for more than 8 
inputs
(Namit via Gang Tim Liu)


Changes for Build #2034
[namit] HIVE-3958 support partial scan for analyze command - RCFile
(Gang Tim Liu via namit)


Changes for Build #2035
[kevinwilfong] HIVE-4235. CREATE TABLE IF NOT EXISTS uses inefficient way to 
check if table exists. (Gang Tim Liu via kevinwilfong)

[gangtimliu] HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via 
Gang Tim Liu)

[gangtimliu] HIVE-4155: Expose ORC's FileDump as a service

[gangtimliu] HIVE-4159: RetryingHMSHandler doesn't retry in enough cases (Kevin 
Wilfong via Gang Tim Liu)

[namit] HIVE-4149 wrong results big outer joins with array of ints
(Navis via namit)


Changes for Build #2036
[gates] HIVE-4264 Moved hcatalog trunk code up to hive/trunk/hcatalog

[hashutosh] HIVE-4263 : Adjust build.xml package command to move all hcat jars 
and binaries into build (Alan Gates via Ashutosh Chauhan)

[namit] HIVE-4258 Log logical plan tree for debugging
(Navis via namit)

[navis] HIVE-2264 Hive server is SHUTTING DOWN when invalid queries are being 
executed


Changes for Build #2037

Changes for Build #2038
[hashutosh] HIVE-4122 : Queries fail if timestamp data not in expected format 
(Prasad Mujumdar via Ashutosh Chauhan)

[hashutosh] HIVE-4170 : [REGRESSION] FsShell.close closes filesystem, removing 
temporary directories (Navis via Ashutosh Chauhan)


Changes for Build #2039
[hashutosh] HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan)


Changes for Build #2040
[hashutosh] HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover 
via Ashutosh Chauhan)

[namit] HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple 
input partitions
(Namit via Gang Tim Liu)


Changes for Build #2041

Changes for Build #2042

Changes for Build #2043
[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver 
(Thejas Nair via Ashutosh Chauhan)

[namit] HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit)

[namit] HIVE-4240 optimize hive.enforce.bucketing and hive.enforce sorting 
insert
(Gang Tim Liu via namit)

[navis] HIVE-4288 Add IntelliJ project files files to .gitignore (Roshan Naik 
via Navis)


Changes for Build #2044
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)


Changes for Build #2045

Changes for Build #2046
[hashutosh] HIVE-4067 : Followup to HIVE-701: reduce ambiguity in grammar 
(Samuel Yuan via Ashutosh Chauhan)


Changes for Build #2047

Changes for Build #2048
[gangtimliu] HIVE-4298: add tests for distincts for hive.map.groupby.sorted. 
(Namit via Gang Tim Liu)

[hashutosh] HIVE-4128 : Support avg(decimal) (Brock Noland via Ashutosh Chauhan)

[kevinwilfong] HIVE-4151. HiveProfiler NPE with ScriptOperator. (Pamela Vagata 
via kevinwilfong)


Changes for Build #2049
[hashutosh] HIVE-3985 : Update new UDAFs introduced for Windowing to work with 
new Decimal Type (Brock Noland via Ashutosh Chauhan)

[jira] [Commented] (HIVE-4304) Remove unused builtins and pdk submodules

2013-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632917#comment-13632917
 ] 

Ashutosh Chauhan commented on HIVE-4304:


[~traviscrawford] All tests passed except {{show_functions.q}}, which just needs 
to be run with -Doverwrite=true to update the .q.out

 Remove unused builtins and pdk submodules
 -

 Key: HIVE-4304
 URL: https://issues.apache.org/jira/browse/HIVE-4304
 Project: Hive
  Issue Type: Improvement
Reporter: Travis Crawford
Assignee: Travis Crawford
 Attachments: HIVE-4304.1.patch


 Moving from email. The 
 [builtins|http://svn.apache.org/repos/asf/hive/trunk/builtins/] and 
 [pdk|http://svn.apache.org/repos/asf/hive/trunk/pdk/] submodules are not 
 believed to be in use and should be removed. The main benefits are 
 simplification and maintainability of the Hive code base.
 Forwarded conversation
 Subject: builtins submodule - is it still needed?
 
 From: Travis Crawford traviscrawf...@gmail.com
 Date: Thu, Apr 4, 2013 at 2:01 PM
 To: u...@hive.apache.org, dev@hive.apache.org
 Hey hive gurus -
 Is the builtins hive submodule in use? The submodule was added in
 HIVE-2523 as a location for builtin-UDFs, but it appears to not have
 taken off. Any objections to removing it?
 DETAILS
 For HIVE-4278 I'm making some build changes for the HCatalog
 integration. The builtins submodule causes issues because it delays
 building until the packaging phase - so HCatalog can't depend on
 builtins, which it does transitively.
 While investigating a path forward I discovered the builtins
 submodule contains very little code, and likely could either go away
 entirely or merge into ql, simplifying things both for users and
 developers.
 Thoughts? Can anyone with context help me understand builtins, both
 in general and around its non-standard build? For your trouble I'll
 either make the submodule go away/merge into another submodule, or
 update the docs with what we learn.
 Thanks!
 Travis
 --
 From: Ashutosh Chauhan ashutosh.chau...@gmail.com
 Date: Fri, Apr 5, 2013 at 3:10 PM
 To: dev@hive.apache.org
 Cc: u...@hive.apache.org u...@hive.apache.org
 I haven't used it myself at any point until now, nor have I met anyone who has 
 used it or plans to use it.
 Ashutosh
 On Thu, Apr 4, 2013 at 2:01 PM, Travis Crawford 
 traviscrawf...@gmail.comwrote:
 --
 From: Gunther Hagleitner ghagleit...@hortonworks.com
 Date: Fri, Apr 5, 2013 at 3:11 PM
 To: dev@hive.apache.org
 Cc: u...@hive.apache.org
 +1
 I would actually go a step further and propose to remove both PDK and
 builtins. I've gone through the code for both, and here is what I found:
 Builtins:
 - BuiltInUtils.java: Empty file
 - UDAFUnionMap: Merges maps. Doesn't seem to be useful by itself, but was
 intended as a building block for PDK
 PDK:
 - some helper build.xml/test setup + teardown scripts
 - Classes/annotations to help run unit tests
 - rot13 as an example
 From what I can tell, it's a fair assessment that it hasn't taken off; the last 
 commits to it seem to have happened more than 1.5 years ago.
 Thanks,
 Gunther.
 On Thu, Apr 4, 2013 at 2:01 PM, Travis Crawford 
 traviscrawf...@gmail.comwrote:
 --
 From: Owen O'Malley omal...@apache.org
 Date: Fri, Apr 5, 2013 at 4:45 PM
 To: u...@hive.apache.org
 +1 to removing them. 
 We have a Rot13 example in 
 ql/src/test/org/apache/hadoop/hive/ql/io/udf/Rot13{In,Out}putFormat.java 
 anyways. *smile*
 -- Owen

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-4359) Remove old versions of the javadoc

2013-04-16 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley resolved HIVE-4359.
-

Resolution: Fixed

I just committed this.

 Remove old versions of the javadoc
 --

 Key: HIVE-4359
 URL: https://issues.apache.org/jira/browse/HIVE-4359
 Project: Hive
  Issue Type: Task
  Components: Website
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: h-4359.patch


 Delete the old versions of the javadoc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4130) Bring the Lead/Lag UDFs interface in line with Lead/Lag UDAFs

2013-04-16 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632928#comment-13632928
 ] 

Phabricator commented on HIVE-4130:
---

ashutoshc has requested changes to the revision HIVE-4130 [jira] Bring the 
Lead/Lag UDFs interface in line with Lead/Lag UDAFs.

  Couple more comments.

INLINE COMMENTS
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLeadLag.java:98 
Shouldn't the default value of amt be 1 ?
  ql/src/test/queries/clientpositive/windowing_expressions.q:38 It will be good 
to add a testcase which exercises non-default values like 
sum(lag(p_retailprice,3,29.43))

REVISION DETAIL
  https://reviews.facebook.net/D10233

BRANCH
  HIVE-4130

ARCANIST PROJECT
  hive

To: JIRA, ashutoshc, hbutani


 Bring the Lead/Lag UDFs interface in line with Lead/Lag UDAFs
 -

 Key: HIVE-4130
 URL: https://issues.apache.org/jira/browse/HIVE-4130
 Project: Hive
  Issue Type: Bug
  Components: PTF-Windowing
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-4130.D10233.1.patch, HIVE-4130.D10233.2.patch


 - support a default value arg
 - both amt and defaultValue args can be optional

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4341) Make Hive table external by default

2013-04-16 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632972#comment-13632972
 ] 

Alan Gates commented on HIVE-4341:
--

Are you thinking of this as a configuration option?  I don't think most users 
would want this.

 Make Hive table external by default
 ---

 Key: HIVE-4341
 URL: https://issues.apache.org/jira/browse/HIVE-4341
 Project: Hive
  Issue Type: New Feature
Reporter: Romit Singhai

 Make the default table definition in Hive as external to prevent users from 
 accidentally deleting data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4341) Make Hive table external by default

2013-04-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632986#comment-13632986
 ] 

Brock Noland commented on HIVE-4341:


Agreed. Perhaps a configuration setting like hive.do.not.drop.data would be a 
better approach to providing a safety net.

 Make Hive table external by default
 ---

 Key: HIVE-4341
 URL: https://issues.apache.org/jira/browse/HIVE-4341
 Project: Hive
  Issue Type: New Feature
Reporter: Romit Singhai

 Make the default table definition in Hive as external to prevent users from 
 accidentally deleting data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4315) enable doAs in unsecure mode for hive server2, when MR job runs locally

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633054#comment-13633054
 ] 

Hudson commented on HIVE-4315:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4315 : enable doAs in unsecure mode for hive server2, when MR job runs 
locally (Thejas Nair via Ashutosh Chauhan) (Revision 1468438)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468438
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapredLocalTask.java


 enable doAs in unsecure mode for hive server2, when MR job runs locally
 ---

 Key: HIVE-4315
 URL: https://issues.apache.org/jira/browse/HIVE-4315
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.11.0

 Attachments: HIVE-4315.1.patch, HIVE-4315.2.patch


 When the MR job is run locally by Hive (instead of on the Hadoop cluster), the 
 MR job ends up running as the hiveserver user instead of the user submitting 
 the query, even if the doAs configuration is enabled.
 In the case of a map-side join (see [map join 
 optimization|https://cwiki.apache.org/confluence/display/Hive/MapJoinOptimization]), 
 MapredLocalTask is spawned in a child process to process the map-side file 
 before adding it to the distributed cache. When hive.server2.enable.doAs is 
 enabled, MapredLocalTask should run as the user submitting the query. But by 
 default, in unsecure mode (i.e., without Kerberos security), Hadoop takes the 
 user that the process runs as to be the effective user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4261) union_remove_10 is failing on hadoop2 with assertion (root task with non-empty set of parents)

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633055#comment-13633055
 ] 

Hudson commented on HIVE-4261:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4261 union_remove_10 is failing on hadoop2 with assertion (root task 
with non-empty set of parents) (Revision 1468292)

 Result = FAILURE
navis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468292
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinFactory.java
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_10.q.out


 union_remove_10 is failing on hadoop2 with assertion (root task with 
 non-empty set of parents)
 --

 Key: HIVE-4261
 URL: https://issues.apache.org/jira/browse/HIVE-4261
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
Priority: Critical
 Fix For: 0.12.0

 Attachments: HIVE-4261.1.patch, HIVE-4261.2.patch


 Output seems to indicate that the stage plan is broken.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4352) Guava not getting included in build package

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633056#comment-13633056
 ] 

Hudson commented on HIVE-4352:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4352 : Guava not getting included in build package (Gunther Hagleitner 
via Ashutosh Chauhan) (Revision 1468442)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468442
Files : 
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/shims/ivy.xml


 Guava not getting included in build package
 ---

 Key: HIVE-4352
 URL: https://issues.apache.org/jira/browse/HIVE-4352
 Project: Hive
  Issue Type: Bug
Reporter: Mark Wagner
Assignee: Gunther Hagleitner
 Fix For: 0.11.0

 Attachments: HIVE-4352.1.patch


 Since HIVE-4148, Guava is not getting included in the appropriate packages. 
 This manifests as a ClassNotFoundException.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4275) Hive does not differentiate scheme and authority in file uris

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633057#comment-13633057
 ] 

Hudson commented on HIVE-4275:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4275 : Hive does not differentiate scheme and authority in file uris 
(Vikram Dixit via Ashutosh Chauhan) (Revision 1468445)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468445
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestExecDriver.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestOperators.java
* /hive/trunk/ql/src/test/queries/clientpositive/schemeAuthority.q
* /hive/trunk/ql/src/test/results/clientpositive/schemeAuthority.q.out


 Hive does not differentiate scheme and authority in file uris
 -

 Key: HIVE-4275
 URL: https://issues.apache.org/jira/browse/HIVE-4275
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Fix For: 0.12.0

 Attachments: HIVE-4275.2.patch, HIVE-4275.3.patch, HIVE-4275.4.patch, 
 HIVE-4275.patch


 Consider the following set of queries:
 ALTER TABLE abc ADD PARTITION (x='0') LOCATION 'file:///foo';
 ALTER TABLE abc ADD PARTITION (x='1') LOCATION '/foo';
 select count(*) from abc;
 Even though there are different files under these directories, depending on the 
 number of mappers, the count produces a value equal to num of mappers * num of 
 files in the 2 directories. This is incorrect.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3996) Correctly enforce the memory limit on the multi-table map-join

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633058#comment-13633058
 ] 

Hudson commented on HIVE-3996:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-3996 Correctly enforce the memory limit on the multi-table map-join
(Vikram Dixit via namit) (Revision 1468321)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468321
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinResolver.java
* /hive/trunk/ql/src/test/queries/clientpositive/join32_lessSize.q
* /hive/trunk/ql/src/test/results/clientpositive/join32_lessSize.q.out


 Correctly enforce the memory limit on the multi-table map-join
 --

 Key: HIVE-3996
 URL: https://issues.apache.org/jira/browse/HIVE-3996
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-3996_2.patch, HIVE-3996_3.patch, HIVE-3996_4.patch, 
 HIVE-3996_5.patch, HIVE-3996_6.patch, HIVE-3996_7.patch, HIVE-3996_8.patch, 
 HIVE-3996_9.patch, hive.3996.9.patch-nohcat, HIVE-3996.patch


 Currently with HIVE-3784, the joins are converted to map-joins based on 
 checks of the table size against the config variable: 
 hive.auto.convert.join.noconditionaltask.size. 
 However, the current implementation will also merge multiple mapjoin 
 operators into a single task regardless of whether the sum of the table sizes 
 will exceed the configured value.
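
A minimal sketch of the kind of size check this issue asks for (hypothetical inputs, not the CommonJoinResolver code): keep merging map-join work into one task only while the running total of small-table sizes stays under the configured threshold.

  import java.util.ArrayList;
  import java.util.Arrays;
  import java.util.List;

  public class MapJoinMergeCheck {
    // Groups small-table sizes into tasks without letting any group exceed the threshold,
    // e.g. threshold = hive.auto.convert.join.noconditionaltask.size.
    static List<List<Long>> planTasks(List<Long> tableSizes, long threshold) {
      List<List<Long>> tasks = new ArrayList<List<Long>>();
      List<Long> current = new ArrayList<Long>();
      long runningTotal = 0;
      for (long size : tableSizes) {
        if (!current.isEmpty() && runningTotal + size > threshold) {
          tasks.add(current);              // close the task before the limit would be exceeded
          current = new ArrayList<Long>();
          runningTotal = 0;
        }
        current.add(size);
        runningTotal += size;
      }
      if (!current.isEmpty()) {
        tasks.add(current);
      }
      return tasks;
    }

    public static void main(String[] args) {
      List<Long> sizes = Arrays.asList(4000000L, 3000000L, 6000000L);
      System.out.println(planTasks(sizes, 10000000L)); // [[4000000, 3000000], [6000000]]
    }
  }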

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4327) NPE in constant folding with decimal

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633060#comment-13633060
 ] 

Hudson commented on HIVE-4327:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4327 : NPE in constant folding with decimal (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1468423)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1468423
Files : 
* /hive/trunk/ql/src/test/queries/clientnegative/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientnegative/decimal_precision_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_precision.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_udf.q
* /hive/trunk/ql/src/test/results/clientnegative/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientnegative/decimal_precision_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_precision.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_udf.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorConverter.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils.java


 NPE in constant folding with decimal
 

 Key: HIVE-4327
 URL: https://issues.apache.org/jira/browse/HIVE-4327
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
Priority: Minor
 Fix For: 0.11.0

 Attachments: HIVE-4327.1.q, HIVE-4327.2.patch, HIVE-4327.3.patch


 The query:
 SELECT dec * cast('123456789012345678901234567890.1234567' as decimal) FROM 
 DECIMAL_PRECISION LIMIT 1
 fails with an NPE during constant folding. This only happens when the decimal 
 is out of range of the max precision.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4308) Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633059#comment-13633059
 ] 

Hudson commented on HIVE-4308:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4308 Newly added test TestCliDriver.hiveprofiler_union0 is failing on 
trunk
(Navis via namit) (Revision 1468329)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468329
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/profiler/HiveProfilerStatsAggregator.java


 Newly added test TestCliDriver.hiveprofiler_union0 is failing on trunk
 --

 Key: HIVE-4308
 URL: https://issues.apache.org/jira/browse/HIVE-4308
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.12.0

 Attachments: HIVE-4308.D10269.1.patch


 This only happens while running the whole test suite. The failure doesn't 
 manifest if this test is run alone.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4105) Hive MapJoinOperator unnecessarily deserializes values for all join-keys

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633061#comment-13633061
 ] 

Hudson commented on HIVE-4105:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4105 : Hive MapJoinOperator unnecessarily deserializes values for all 
join-keys (Vinod KV via Ashutosh Chauhan) (Revision 1468433)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468433
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java


 Hive MapJoinOperator unnecessarily deserializes values for all join-keys
 

 Key: HIVE-4105
 URL: https://issues.apache.org/jira/browse/HIVE-4105
 Project: Hive
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 0.12.0

 Attachments: HIVE-4105-20130301.1.txt, HIVE-4105-20130301.txt, 
 HIVE-4105-20130415.txt, HIVE-4105.patch


 We can avoid this for inner joins. Hive does an explicit value 
 de-serialization up front, even for rows which won't emit any output. In 
 those cases, we can make do with key de-serialization alone.
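
An illustrative sketch of that idea (all names hypothetical, not Hive's MapJoinOperator code): deserialize the big-table value only once the key is known to have a match.

  import java.util.List;
  import java.util.Map;

  public class LazyValueJoinSketch {
    // "hashTable" stands in for the small-table side keyed by join key; the
    // deserialize* helpers stand in for the SerDe calls Hive would make.
    static void processRow(byte[] keyBytes, byte[] valueBytes,
                           Map<String, List<String>> hashTable) {
      String key = deserializeKey(keyBytes);       // keys always need deserializing
      List<String> matches = hashTable.get(key);
      if (matches == null) {
        return;                                    // inner join: no match, so never touch the value
      }
      String value = deserializeValue(valueBytes); // only paid for rows that will emit output
      for (String other : matches) {
        System.out.println(key + "," + value + "," + other);
      }
    }

    static String deserializeKey(byte[] b) { return new String(b); }
    static String deserializeValue(byte[] b) { return new String(b); }
  }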

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4167) Hive converts bucket map join to SMB join even when tables are not sorted

2013-04-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633062#comment-13633062
 ] 

Hudson commented on HIVE-4167:
--

Integrated in Hive-trunk-hadoop2 #162 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/162/])
HIVE-4167 Hive converts bucket map join to SMB join even when tables are 
not sorted
(Namit Jain via Ashutosh) (Revision 1468349)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1468349
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractSMBJoinProc.java
* /hive/trunk/ql/src/test/queries/clientpositive/auto_sortmerge_join_11.q
* /hive/trunk/ql/src/test/results/clientpositive/auto_sortmerge_join_11.q.out


 Hive converts bucket map join to SMB join even when tables are not sorted
 -

 Key: HIVE-4167
 URL: https://issues.apache.org/jira/browse/HIVE-4167
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Namit Jain
Priority: Blocker
 Attachments: hive.4167.1.patch, hive.4167.2.patch, hive.4167.3.patch, 
 hive.4167.4.patch, hive.4167.4.patch-nohcat, HIVE-4167.patch


 If tables are just bucketed but not sorted, we still generate an SMB join 
 operator. This results in a loss of rows in queries.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4341) Make Hive table external by default

2013-04-16 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633063#comment-13633063
 ] 

Prasad Mujumdar commented on HIVE-4341:
---

The metastore interface already supports no-drop options for dropping tables and 
databases. It should be fairly straightforward to implement the configuration 
option on the Hive side.

 Make Hive table external by default
 ---

 Key: HIVE-4341
 URL: https://issues.apache.org/jira/browse/HIVE-4341
 Project: Hive
  Issue Type: New Feature
Reporter: Romit Singhai

 Make the default table definition in Hive as external to prevent users from 
 accidentally deleting data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4364) beeline always exits with 0 status, should exit with non-zero status on error

2013-04-16 Thread Rob Weltman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633088#comment-13633088
 ] 

Rob Weltman commented on HIVE-4364:
---

Patch up for review at https://reviews.apache.org/r/10551/


 beeline always exits with 0 status, should exit with non-zero status on error
 -

 Key: HIVE-4364
 URL: https://issues.apache.org/jira/browse/HIVE-4364
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
Reporter: Rob Weltman
Assignee: Rob Weltman
 Fix For: 0.11.0

 Attachments: HIVE-4364.1.patch.txt


 beeline should exit with non-zero status on error so that executors such as a 
 shell script or Oozie can detect failure.
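
A minimal sketch of the pattern being requested (not BeeLine's actual code): remember failures and exit non-zero so a calling shell script or Oozie can check the exit status.

  public class ExitStatusSketch {
    public static void main(String[] args) {
      int errors = 0;
      for (String command : args) {
        if (!execute(command)) {
          errors++;                     // record the failure instead of swallowing it
        }
      }
      System.exit(errors == 0 ? 0 : 1); // non-zero on any error
    }

    // stand-in for running a statement against the server
    static boolean execute(String command) {
      return !command.startsWith("bad");
    }
  }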

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4364) beeline always exits with 0 status, should exit with non-zero status on error

2013-04-16 Thread Rob Weltman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Weltman updated HIVE-4364:
--

Attachment: HIVE-4364.1.patch.txt

BeeLine patch to set exit status to non-zero on error


 beeline always exits with 0 status, should exit with non-zero status on error
 -

 Key: HIVE-4364
 URL: https://issues.apache.org/jira/browse/HIVE-4364
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
Reporter: Rob Weltman
Assignee: Rob Weltman
 Fix For: 0.11.0

 Attachments: HIVE-4364.1.patch.txt


 beeline should exit with non-zero status on error so that executors such as a 
 shell script or Oozie can detect failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4095) Add exchange partition in Hive

2013-04-16 Thread Dheeraj Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dheeraj Kumar Singh updated HIVE-4095:
--

Attachment: (was: HIVE-4095.part2.patch.txt)

 Add exchange partition in Hive
 --

 Key: HIVE-4095
 URL: https://issues.apache.org/jira/browse/HIVE-4095
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Dheeraj Kumar Singh
 Attachments: HIVE-4095.D10155.1.patch, HIVE-4095.D10155.2.patch, 
 HIVE-4095.part11.patch.txt, HIVE-4095.part12.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4095) Add exchange partition in Hive

2013-04-16 Thread Dheeraj Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dheeraj Kumar Singh updated HIVE-4095:
--

Attachment: (was: HIVE-4095.1.patch.txt)

 Add exchange partition in Hive
 --

 Key: HIVE-4095
 URL: https://issues.apache.org/jira/browse/HIVE-4095
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Dheeraj Kumar Singh
 Attachments: HIVE-4095.D10155.1.patch, HIVE-4095.D10155.2.patch, 
 HIVE-4095.part11.patch.txt, HIVE-4095.part12.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4095) Add exchange partition in Hive

2013-04-16 Thread Dheeraj Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dheeraj Kumar Singh updated HIVE-4095:
--

Status: Patch Available  (was: Open)

 Add exchange partition in Hive
 --

 Key: HIVE-4095
 URL: https://issues.apache.org/jira/browse/HIVE-4095
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Dheeraj Kumar Singh
 Attachments: HIVE-4095.D10155.1.patch, HIVE-4095.D10155.2.patch, 
 HIVE-4095.part11.patch.txt, HIVE-4095.part12.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4095) Add exchange partition in Hive

2013-04-16 Thread Dheeraj Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dheeraj Kumar Singh updated HIVE-4095:
--

Attachment: (was: HIVE-4095.part1.patch.txt)

 Add exchange partition in Hive
 --

 Key: HIVE-4095
 URL: https://issues.apache.org/jira/browse/HIVE-4095
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Dheeraj Kumar Singh
 Attachments: HIVE-4095.D10155.1.patch, HIVE-4095.D10155.2.patch, 
 HIVE-4095.part11.patch.txt, HIVE-4095.part12.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4095) Add exchange partition in Hive

2013-04-16 Thread Dheeraj Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dheeraj Kumar Singh updated HIVE-4095:
--

Attachment: HIVE-4095.part11.patch.txt
HIVE-4095.part12.patch.txt

 Add exchange partition in Hive
 --

 Key: HIVE-4095
 URL: https://issues.apache.org/jira/browse/HIVE-4095
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Dheeraj Kumar Singh
 Attachments: HIVE-4095.D10155.1.patch, HIVE-4095.D10155.2.patch, 
 HIVE-4095.part11.patch.txt, HIVE-4095.part12.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4364) beeline always exits with 0 status, should exit with non-zero status on error

2013-04-16 Thread Rob Weltman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Weltman updated HIVE-4364:
--

Status: Patch Available  (was: Open)

 beeline always exits with 0 status, should exit with non-zero status on error
 -

 Key: HIVE-4364
 URL: https://issues.apache.org/jira/browse/HIVE-4364
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
Reporter: Rob Weltman
Assignee: Rob Weltman
 Fix For: 0.11.0

 Attachments: HIVE-4364.1.patch.txt


 beeline should exit with non-zero status on error so that executors such as a 
 shell script or Oozie can detect failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-16 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633119#comment-13633119
 ] 

Roshan Naik commented on HIVE-4300:
---

FYI: HIVE-4322 makes manual changes to auto-generated code. This will be a 
maintenance headache. I have incorporated those changes into this patch as part 
of the rebasing in v2.


 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However this is not the case. Some of files seem to be have been relocated or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add file... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # 

[jira] [Commented] (HIVE-4341) Make Hive table external by default

2013-04-16 Thread Romit Singhai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633128#comment-13633128
 ] 

Romit Singhai commented on HIVE-4341:
-

Agreed. A configuration option is fine, with a default value of false to 
maintain the existing behavior by default. 

 Make Hive table external by default
 ---

 Key: HIVE-4341
 URL: https://issues.apache.org/jira/browse/HIVE-4341
 Project: Hive
  Issue Type: New Feature
Reporter: Romit Singhai

 Make the default table definition in Hive as external to prevent users from 
 accidentally deleting data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4364) beeline always exits with 0 status, should exit with non-zero status on error

2013-04-16 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633133#comment-13633133
 ] 

Prasad Mujumdar commented on HIVE-4364:
---

+1 (non-binding)

[~robw] Thanks for the patch. After it's committed, we should update the beeline 
[documentation|https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineNewCommandLineshell]
 to list the exit codes.

 beeline always exits with 0 status, should exit with non-zero status on error
 -

 Key: HIVE-4364
 URL: https://issues.apache.org/jira/browse/HIVE-4364
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.10.0
Reporter: Rob Weltman
Assignee: Rob Weltman
 Fix For: 0.11.0

 Attachments: HIVE-4364.1.patch.txt


 beeline should exit with non-zero status on error so that executors such as a 
 shell script or Oozie can detect failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4368) Upgrade JavaEWAH dependency to version 0.6.11

2013-04-16 Thread Daniel Lemire (JIRA)
Daniel Lemire created HIVE-4368:
---

 Summary: Upgrade JavaEWAH dependency to version 0.6.11
 Key: HIVE-4368
 URL: https://issues.apache.org/jira/browse/HIVE-4368
 Project: Hive
  Issue Type: Improvement
  Components: Indexing
Affects Versions: 0.11.0
Reporter: Daniel Lemire
Priority: Minor
 Fix For: 0.10.1


Apache Hive currently depends on JavaEWAH version 0.3.2. It is nearly trivial to 
update to version 0.6.11 from a source code perspective: only 6 lines need to be 
changed, and only in a small way.

I include a subversion diff. (I tested that the result builds after this 
change.)

 $ svn diff
Index: ivy/libraries.properties
===
--- ivy/libraries.properties(revision 1468396)
+++ ivy/libraries.properties(working copy)
@@ -47,7 +47,7 @@
 guava-hadoop23.version=11.0.2
 hbase.version=0.92.0
 jackson.version=1.8.8
-javaewah.version=0.3.2
+javaewah.version=0.6.11
 jdo-api.version=2.3-ec
 jdom.version=1.1
 jetty.version=6.1.26
Index: 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapAnd.java
===
--- 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapAnd.java  
(revision 1468396)
+++ 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapAnd.java  
(working copy)
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.hive.ql.udf.generic;
 
-import javaewah.EWAHCompressedBitmap;
+import com.googlecode.javaewah.EWAHCompressedBitmap;
 
 import org.apache.hadoop.hive.ql.exec.Description;
 
Index: 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/AbstractGenericUDFEWAHBitmapBop.java
===
--- 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/AbstractGenericUDFEWAHBitmapBop.java
  (revision 1468396)
+++ 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/AbstractGenericUDFEWAHBitmapBop.java
  (working copy)
@@ -22,7 +22,7 @@
 import java.util.ArrayList;
 import java.util.List;
 
-import javaewah.EWAHCompressedBitmap;
+import com.googlecode.javaewah.EWAHCompressedBitmap;
 
 import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
 import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
Index: 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFEWAHBitmap.java
===
--- 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFEWAHBitmap.java
(revision 1468396)
+++ 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFEWAHBitmap.java
(working copy)
@@ -21,7 +21,7 @@
 import java.util.ArrayList;
 import java.util.List;
 
-import javaewah.EWAHCompressedBitmap;
+import com.googlecode.javaewah.EWAHCompressedBitmap;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
Index: 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapEmpty.java
===
--- 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapEmpty.java
(revision 1468396)
+++ 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapEmpty.java
(working copy)
@@ -21,7 +21,7 @@
 import java.io.IOException;
 import java.util.ArrayList;
 
-import javaewah.EWAHCompressedBitmap;
+import com.googlecode.javaewah.EWAHCompressedBitmap;
 
 import org.apache.hadoop.hive.ql.exec.Description;
 import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
Index: 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapOr.java
===
--- 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapOr.java   
(revision 1468396)
+++ 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFEWAHBitmapOr.java   
(working copy)
@@ -18,7 +18,7 @@
 
 package org.apache.hadoop.hive.ql.udf.generic;
 
-import javaewah.EWAHCompressedBitmap;
+import com.googlecode.javaewah.EWAHCompressedBitmap;
 
 import org.apache.hadoop.hive.ql.exec.Description;
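
For completeness, a tiny standalone usage sketch (not Hive code) suggesting that, for callers like the ones patched above, only the import changes; the bitmap calls themselves stay the same:

  import com.googlecode.javaewah.EWAHCompressedBitmap; // was: import javaewah.EWAHCompressedBitmap;

  public class EwahImportCheck {
    public static void main(String[] args) {
      EWAHCompressedBitmap a = new EWAHCompressedBitmap();
      EWAHCompressedBitmap b = new EWAHCompressedBitmap();
      a.set(3);   // positions must be set in increasing order
      a.set(100);
      b.set(100);
      System.out.println(a.and(b)); // the positions present in both bitmaps
    }
  }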

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4191) describe table output always prints as if formatted keyword is specified

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4191:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.11. Thanks, Thejas!

 describe table output always prints as if formatted keyword is specified
 

 Key: HIVE-4191
 URL: https://issues.apache.org/jira/browse/HIVE-4191
 Project: Hive
  Issue Type: Bug
  Components: CLI, HiveServer2
Affects Versions: 0.10.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.11.0

 Attachments: HIVE-4191.1.patch, HIVE-4191.2.patch, HIVE-4191.3.patch


 With the change in HIVE-3140, describe table output prints in the format 
 expected from describe *formatted* table, i.e., the headers are included and 
 the fields are padded with spaces. 
 This is a non-backward-compatible change; we should discuss whether this change 
 in the output formatting should remain. 
 This has an impact on HiveServer2, which has been relying on the old format; 
 with this change it prints additional headers and fields with space padding.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4369) Many new failures on hadoop 2

2013-04-16 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-4369:


 Summary: Many new failures on hadoop 2
 Key: HIVE-4369
 URL: https://issues.apache.org/jira/browse/HIVE-4369
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Vikram Dixit K


Roughly half the tests are failing; this seems to be the exception:

[junit] org.apache.hadoop.hive.ql.metadata.HiveException: 
org.apache.hadoop.hive.ql.metadata.HiveException: Configuration and input path 
are inconsistent
[junit] at 
org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:522)
[junit] at 
org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:91)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:103)
[junit] at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:72)
[junit] at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:130)
[junit] at 
org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] at java.lang.reflect.Method.invoke(Method.java:597)
[junit] at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:103)
[junit] at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:72)
[junit] at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:130)
[junit] at 
org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:395)
[junit] at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
[junit] at 
org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:232)
[junit] at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
[junit] at 
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
[junit] at java.util.concurrent.FutureTask.run(FutureTask.java:138)
[junit] at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
[junit] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
[junit] at java.lang.Thread.run(Thread.java:680)
[junit] Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
Configuration and input path are inconsistent
[junit] at 
org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:516)
[junit] ... 25 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4369) Many new failures on hadoop 2

2013-04-16 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13633175#comment-13633175
 ] 

Gunther Hagleitner commented on HIVE-4369:
--

Test result is here: 
https://builds.apache.org/job/Hive-trunk-hadoop2/162/testReport/

 Many new failures on hadoop 2
 -

 Key: HIVE-4369
 URL: https://issues.apache.org/jira/browse/HIVE-4369
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Vikram Dixit K

 Roughly half the tests are failing; this seems to be the exception:
 [junit] org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Configuration and input 
 path are inconsistent
 [junit]   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:522)
 [junit]   at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:91)
 [junit]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [junit]   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 [junit]   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 [junit]   at java.lang.reflect.Method.invoke(Method.java:597)
 [junit]   at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:103)
 [junit]   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:72)
 [junit]   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:130)
 [junit]   at 
 org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
 [junit]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [junit]   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 [junit]   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 [junit]   at java.lang.reflect.Method.invoke(Method.java:597)
 [junit]   at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:103)
 [junit]   at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:72)
 [junit]   at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:130)
 [junit]   at 
 org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:395)
 [junit]   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
 [junit]   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:232)
 [junit]   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
 [junit]   at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 [junit]   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 [junit]   at java.lang.Thread.run(Thread.java:680)
 [junit] Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Configuration and input path are inconsistent
 [junit]   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:516)
 [junit]   ... 25 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-16 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633182#comment-13633182
 ] 

Carl Steinbach commented on HIVE-4300:
--

@Roshan: Which autogenerated files were manually changed in HIVE-4322?

 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add file... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.h
 # 

[jira] [Updated] (HIVE-4357) BeeLine tests are not getting executed

2013-04-16 Thread Rob Weltman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Weltman updated HIVE-4357:
--

Attachment: HIVE-4357.1.patch.txt

Added beeline to the test list in build.properties


 BeeLine tests are not getting executed
 --

 Key: HIVE-4357
 URL: https://issues.apache.org/jira/browse/HIVE-4357
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Carl Steinbach
Assignee: Rob Weltman
 Attachments: HIVE-4357.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4357) BeeLine tests are not getting executed

2013-04-16 Thread Rob Weltman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rob Weltman updated HIVE-4357:
--

Fix Version/s: 0.11.0
Affects Version/s: 0.10.0
   Status: Patch Available  (was: Open)

 BeeLine tests are not getting executed
 --

 Key: HIVE-4357
 URL: https://issues.apache.org/jira/browse/HIVE-4357
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Rob Weltman
 Fix For: 0.11.0

 Attachments: HIVE-4357.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4357) BeeLine tests are not getting executed

2013-04-16 Thread Rob Weltman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633207#comment-13633207
 ] 

Rob Weltman commented on HIVE-4357:
---

Patch for review at https://reviews.apache.org/r/10553/


 BeeLine tests are not getting executed
 --

 Key: HIVE-4357
 URL: https://issues.apache.org/jira/browse/HIVE-4357
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Rob Weltman
 Fix For: 0.11.0

 Attachments: HIVE-4357.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4347) Hcatalog build fail on Windows because javadoc command exceed length limit

2013-04-16 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-4347:
-

   Resolution: Fixed
Fix Version/s: (was: 0.11.0)
   0.12.0
   Status: Resolved  (was: Patch Available)

Patch checked in. Thanks, Shuaishuai, for the patch. Could one of the Hive 
committers with JIRA admin privileges please add Shuaishuai to the contributor 
list and assign the JIRA to her? 

 Hcatalog build fail on Windows because javadoc command exceed length limit
 --

 Key: HIVE-4347
 URL: https://issues.apache.org/jira/browse/HIVE-4347
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure, HCatalog, Windows
Affects Versions: 0.11.0
 Environment: Windows 8
Reporter: Shuaishuai Nie
  Labels: build, patch
 Fix For: 0.12.0

 Attachments: HIVE-4347.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 When building Hcatalog on Windows 8, the build fails because 
 HIVE_DIR\hcatalog\build.xml:213: Javadoc failed: java.io.IOException: Cannot 
 run program JAVA_HOME\bin\javadoc.exe: CreateProcess error=206, The filename 
 or extension is too long

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4357) BeeLine tests are not getting executed

2013-04-16 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633222#comment-13633222
 ] 

Carl Steinbach commented on HIVE-4357:
--

@Rob: please make the review request public. Thanks.

 BeeLine tests are not getting executed
 --

 Key: HIVE-4357
 URL: https://issues.apache.org/jira/browse/HIVE-4357
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Rob Weltman
 Fix For: 0.11.0

 Attachments: HIVE-4357.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4019) Ability to create and drop temporary partition function

2013-04-16 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633237#comment-13633237
 ] 

Brock Noland commented on HIVE-4019:


Hi,

Yes I am taking a look at this at present. I'll update this later today with my 
progress.

 Ability to create and drop temporary partition function
 ---

 Key: HIVE-4019
 URL: https://issues.apache.org/jira/browse/HIVE-4019
 Project: Hive
  Issue Type: New Feature
  Components: PTF-Windowing
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: hive-4019.q


 Just like udf/udaf/udtf functions, user should be able to add and drop custom 
 partitioning functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633261#comment-13633261
 ] 

Alan Gates commented on HIVE-4278:
--

+1 to doing this as a stop-gap to unblock 0.11 until we reach consensus on how 
to converge the build tools between hcat and hive.

 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Travis Crawford
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the maven repo instead 
 of using the ones built as part of the current build.  Now that it is part of 
 Hive it should use the jars being built instead of pulling them from maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4347) Hcatalog build fail on Windows because javadoc command exceed length limit

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4347:
---

Assignee: Shuaishuai Nie

 Hcatalog build fail on Windows because javadoc command exceed length limit
 --

 Key: HIVE-4347
 URL: https://issues.apache.org/jira/browse/HIVE-4347
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure, HCatalog, Windows
Affects Versions: 0.11.0
 Environment: Windows 8
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
  Labels: build, patch
 Fix For: 0.12.0

 Attachments: HIVE-4347.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 When building Hcatalog on Windows 8, the build fails because 
 HIVE_DIR\hcatalog\build.xml:213: Javadoc failed: java.io.IOException: Cannot 
 run program JAVA_HOME\bin\javadoc.exe: CreateProcess error=206, The filename 
 or extension is too long

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-4320) Consider extending max limit for precision to 38

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4320.


   Resolution: Fixed
Fix Version/s: 0.11.0

Committed to trunk and 0.11. Thanks, Gunther!

 Consider extending max limit for precision to 38
 

 Key: HIVE-4320
 URL: https://issues.apache.org/jira/browse/HIVE-4320
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.11.0

 Attachments: HIVE-4320.1.patch, HIVE-4320.2.patch


 Max precision of 38 still fits in 128 bits. It changes the way you do math on 
 these numbers though. Need to see if there will be perf implications, but 
 there's a strong case to support 38 (instead of 36) to comply with other DBs 
 (Oracle, SQL Server, Teradata).
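
As a quick arithmetic check of the claim above: the largest 38-digit value, 
10^38 - 1, needs 127 magnitude bits, so it fits in a signed 128-bit integer, 
while one more digit would not. A minimal Java illustration (not Hive code, 
just the arithmetic):

{noformat}
import java.math.BigInteger;

public class Precision38Check {
    public static void main(String[] args) {
        // Largest 38-digit decimal value: 10^38 - 1.
        BigInteger max38 = BigInteger.TEN.pow(38).subtract(BigInteger.ONE);
        // bitLength() excludes the sign bit, so 127 means the value fits
        // in a signed 128-bit integer.
        System.out.println(max38.bitLength());   // 127
        // One more digit no longer fits: 10^39 - 1 needs 130 bits.
        BigInteger max39 = BigInteger.TEN.pow(39).subtract(BigInteger.ONE);
        System.out.println(max39.bitLength());   // 130
    }
}
{noformat}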

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HIVE-4320) Consider extending max limit for precision to 38

2013-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633289#comment-13633289
 ] 

Ashutosh Chauhan edited comment on HIVE-4320 at 4/16/13 8:15 PM:
-

My bad. The test passed when I ran it again. I must have been doing something 
wrong earlier.
Committed to trunk and 0.11. Thanks, Gunther!

  was (Author: ashutoshc):
Committed to trunk and 0.11. Thanks, Gunther!
  
 Consider extending max limit for precision to 38
 

 Key: HIVE-4320
 URL: https://issues.apache.org/jira/browse/HIVE-4320
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.11.0

 Attachments: HIVE-4320.1.patch, HIVE-4320.2.patch


 Max precision of 38 still fits in 128 bits. It changes the way you do math on 
 these numbers though. Need to see if there will be perf implications, but 
 there's a strong case to support 38 (instead of 36) to comply with other DBs 
 (Oracle, SQL Server, Teradata).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4344) CREATE VIEW fails when redundant casts are rewritten

2013-04-16 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633290#comment-13633290
 ] 

Kevin Wilfong commented on HIVE-4344:
-

+1

 CREATE VIEW fails when redundant casts are rewritten
 

 Key: HIVE-4344
 URL: https://issues.apache.org/jira/browse/HIVE-4344
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Samuel Yuan
Assignee: Samuel Yuan
 Attachments: HIVE-4344.HIVE-4344.HIVE-4344.HIVE-4344.D10221.1.patch


 e.g. create view v as select cast(key as string) from src;
 The rewriter tries to replace both cast(key as string) and key as 
 `src`.`key`, because cast(key as string) is a no-op.
 There may be other cases like this one.
 See HIVE-2439 for context.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633323#comment-13633323
 ] 

Phabricator commented on HIVE-4278:
---

travis has accepted the revision HIVE-4278 [jira] HCat needs to get current 
Hive jars instead of pulling them from maven repo.

  +1

INLINE COMMENTS
  hcatalog/build-support/ant/deploy.xml:43 I like this approach - good idea. It 
duplicates publishing, but is a less invasive change. This provides a path 
forward until the dependency management converges.

REVISION DETAIL
  https://reviews.facebook.net/D10257

BRANCH
  HIVE-4278

ARCANIST PROJECT
  hive

To: JIRA, cwsteinbach, travis, ashutoshc, khorgath


 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Travis Crawford
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the maven repo instead 
 of using the ones built as part of the current build.  Now that it is part of 
 Hive it should use the jars being built instead of pulling them from maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4304) Remove unused builtins and pdk submodules

2013-04-16 Thread Travis Crawford (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633329#comment-13633329
 ] 

Travis Crawford commented on HIVE-4304:
---

Thanks for helping resolve the build issues [~hagleitn] and [~ashutoshc]. I'll 
rebase, update that test and start the tests again.

 Remove unused builtins and pdk submodules
 -

 Key: HIVE-4304
 URL: https://issues.apache.org/jira/browse/HIVE-4304
 Project: Hive
  Issue Type: Improvement
Reporter: Travis Crawford
Assignee: Travis Crawford
 Attachments: HIVE-4304.1.patch


 Moving from email. The 
 [builtins|http://svn.apache.org/repos/asf/hive/trunk/builtins/] and 
 [pdk|http://svn.apache.org/repos/asf/hive/trunk/pdk/] submodules are not 
 believed to be in use and should be removed. The main benefits are 
 simplification and maintainability of the Hive code base.
 Forwarded conversation
 Subject: builtins submodule - is it still needed?
 
 From: Travis Crawford traviscrawf...@gmail.com
 Date: Thu, Apr 4, 2013 at 2:01 PM
 To: u...@hive.apache.org, dev@hive.apache.org
 Hey hive gurus -
 Is the builtins hive submodule in use? The submodule was added in
 HIVE-2523 as a location for builtin-UDFs, but it appears to not have
 taken off. Any objections to removing it?
 DETAILS
 For HIVE-4278 I'm making some build changes for the HCatalog
 integration. The builtins submodule causes issues because it delays
 building until the packaging phase - so HCatalog can't depend on
 builtins, which it does transitively.
 While investigating a path forward I discovered the builtins
 submodule contains very little code, and likely could either go away
 entirely or merge into ql, simplifying things both for users and
 developers.
 Thoughts? Can anyone with context help me understand builtins, both
 in general and around its non-standard build? For your trouble I'll
 either make the submodule go away/merge into another submodule, or
 update the docs with what we learn.
 Thanks!
 Travis
 --
 From: Ashutosh Chauhan ashutosh.chau...@gmail.com
 Date: Fri, Apr 5, 2013 at 3:10 PM
 To: dev@hive.apache.org
 Cc: u...@hive.apache.org u...@hive.apache.org
 I haven't used it myself anytime till now. Neither have met anyone who used
 it or plan to use it.
 Ashutosh
 On Thu, Apr 4, 2013 at 2:01 PM, Travis Crawford 
 traviscrawf...@gmail.comwrote:
 --
 From: Gunther Hagleitner ghagleit...@hortonworks.com
 Date: Fri, Apr 5, 2013 at 3:11 PM
 To: dev@hive.apache.org
 Cc: u...@hive.apache.org
 +1
 I would actually go a step further and propose to remove both PDK and
 builtins. I've gone through the code for both and here is what I found:
 Builtins:
 - BuiltInUtils.java: Empty file
 - UDAFUnionMap: Merges maps. Doesn't seem to be useful by itself, but was
 intended as a building block for PDK
 PDK:
 - some helper build.xml/test setup + teardown scripts
 - Classes/annotations to help run unit tests
 - rot13 as an example
 From what I can tell it's a fair assessment that it hasn't taken off; the last
 commits to it seem to have happened more than 1.5 years ago.
 Thanks,
 Gunther.
 On Thu, Apr 4, 2013 at 2:01 PM, Travis Crawford 
 traviscrawf...@gmail.comwrote:
 --
 From: Owen O'Malley omal...@apache.org
 Date: Fri, Apr 5, 2013 at 4:45 PM
 To: u...@hive.apache.org
 +1 to removing them. 
 We have a Rot13 example in 
 ql/src/test/org/apache/hadoop/hive/ql/io/udf/Rot13{In,Out}putFormat.java 
 anyways. *smile*
 -- Owen

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-16 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1364#comment-1364
 ] 

Roshan Naik commented on HIVE-4300:
---

I take that back: it looked like a manual change in the python file, but I 
just double-checked and it seems OK. 


 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add file... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.h
 

[jira] [Assigned] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-4278:
--

Assignee: Ashutosh Chauhan  (was: Travis Crawford)

 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Ashutosh Chauhan
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the maven repo instead 
 of using the ones built as part of the current build.  Now that it is part of 
 Hive it should use the jars being built instead of pulling them from maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4019) Ability to create and drop temporary partition function

2013-04-16 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-4019:
---

Attachment: HIVE-4019-1.patch

Hi guys,

I've rebased the original patch, attached. However, I am seeing a problem 
during compilation because the output column names are null on 
TableFunctionResolver. It seems this is on purpose (see WindowTableFunction), 
but I don't see any location to obtain the column names. RawInputColumnNames 
is also null. Any hints?


I have time to work on this tomorrow, but I don't want to hold up anything that 
is important for the release, so if this is blocking anything, please feel free 
to take it over.

{noformat}
2013-04-16 13:52:58,383 ERROR ql.Driver (SessionState.java:printError(401)) - 
FAILED: NullPointerException null
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.parse.PTFTranslator.buildRowResolverForPTF(PTFTranslator.java:1016)
at 
org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:383)
at 
org.apache.hadoop.hive.ql.parse.PTFTranslator.translatePTFChain(PTFTranslator.java:297)
at 
org.apache.hadoop.hive.ql.parse.PTFTranslator.translate(PTFTranslator.java:134)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.translatePTFInvocationSpec(SemanticAnalyzer.java:10491)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPTFPlanForComponentQuery(SemanticAnalyzer.java:10623)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPTFPlan(SemanticAnalyzer.java:10498)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:7974)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8701)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:279)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:433)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:790)
at 
org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:124)
at 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ptf_register_tblfn(TestCliDriver.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:154)
at junit.framework.TestCase.runBare(TestCase.java:127)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at junit.framework.TestSuite.runTest(TestSuite.java:208)
at junit.framework.TestSuite.run(TestSuite.java:203)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
{noformat}
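
For what it's worth, a self-contained illustration of the failure mode described 
above, using hypothetical names rather than Hive's actual PTF classes: a resolver 
that hands back null column names only fails much later and far from the real 
cause unless the caller checks for it explicitly.

{noformat}
import java.util.List;

public class NullColumnNamesDemo {

    // Hypothetical stand-in for a table function resolver whose output
    // column names were never populated.
    interface Resolver {
        List<String> getOutputColumnNames();
    }

    // Checking up front turns a bare NullPointerException deep in the
    // planner into an immediate, descriptive error.
    static void buildRowResolver(Resolver resolver) {
        List<String> names = resolver.getOutputColumnNames();
        if (names == null) {
            throw new IllegalStateException(
                "resolver returned no output column names");
        }
        for (String name : names) {
            System.out.println("column: " + name);
        }
    }

    public static void main(String[] args) {
        buildRowResolver(() -> null); // reproduces the null case described above
    }
}
{noformat}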


 Ability to create and drop temporary partition function
 ---

 Key: HIVE-4019
 URL: https://issues.apache.org/jira/browse/HIVE-4019
 Project: Hive
  Issue Type: New Feature
  Components: PTF-Windowing
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-4019-1.patch, hive-4019.q


 Just like udf/udaf/udtf functions, user should be able to add and drop custom 
 partitioning functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4278:
---

Assignee: Sushanth Sowmyan  (was: Ashutosh Chauhan)

 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Sushanth Sowmyan
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the maven repo instead 
 of using the ones built as part of the current build.  Now that it is part of 
 Hive it should use the jars being built instead of pulling them from maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633368#comment-13633368
 ] 

Phabricator commented on HIVE-4278:
---

ashutoshc has requested changes to the revision HIVE-4278 [jira] HCat needs to 
get current Hive jars instead of pulling them from maven repo.

  Patch is not applying cleanly. Need to rebase the patch.

INLINE COMMENTS
  hcatalog/build-support/ant/deploy.xml:82 Will making ant  .. call be better 
here?

REVISION DETAIL
  https://reviews.facebook.net/D10257

BRANCH
  HIVE-4278

ARCANIST PROJECT
  hive

To: JIRA, cwsteinbach, travis, ashutoshc, khorgath


 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Sushanth Sowmyan
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the maven repo instead 
 of using the ones built as part of the current build.  Now that it is part of 
 Hive it should use the jars being built instead of pulling them from maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-16 Thread Roshan Naik (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633383#comment-13633383
 ] 

Roshan Naik commented on HIVE-4300:
---

FYI: the general inconsistency that led me to file this JIRA was most likely 
because some of the auto-generated files checked into trunk were produced with 
Thrift 0.7

 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add file... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.cpp
 

Review Request: HIVE-4356 - remove duplicate impersonation parameters for hiveserver2

2013-04-16 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/10554/
---

Review request for hive.


Description
---

remove duplicate impersonation parameters for hiveserver2


This addresses bug HIVE-4356.
https://issues.apache.org/jira/browse/HIVE-4356


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 78d9cc9 
  conf/hive-default.xml.template e266ce7 
  service/src/java/org/apache/hive/service/auth/PlainSaslHelper.java 18d4aae 
  service/src/java/org/apache/hive/service/cli/CLIService.java b53599b 
  service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 
43d79aa 
  service/src/test/org/apache/hive/service/auth/TestPlainSaslHelper.java 
PRE-CREATION 
  service/src/test/org/apache/hive/service/cli/thrift/TestThriftCLIService.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/10554/diff/


Testing
---

Unit tests included.
Manually tested on (kerberos) secure and unsecure cluster.


Thanks,

Thejas Nair



[jira] [Updated] (HIVE-4356) remove duplicate impersonation parameters for hiveserver2

2013-04-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4356:


Attachment: HIVE-4356.1.patch

 remove duplicate impersonation parameters for hiveserver2
 -

 Key: HIVE-4356
 URL: https://issues.apache.org/jira/browse/HIVE-4356
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.11.0

 Attachments: HIVE-4356.1.patch


 There are two parameters controlling impersonation in hiveserver2. 
 hive.server2.enable.doAs that controls this in kerberos secure mode, while 
 hive.server2.enable.doAs controls this for unsecure mode.
 We should have just one for both modes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4356) remove duplicate impersonation parameters for hiveserver2

2013-04-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4356:


Status: Patch Available  (was: Open)

 HIVE-4356.1.patch - hive.server2.enable.doAs now controls doAs functionality 
in both secure and unsecure modes. It is set to true by default, since in most 
cases it makes sense to run Hive with the permissions of the user submitting 
the query. This is also more secure.

Review board link - https://reviews.apache.org/r/10554/
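
For illustration only, a minimal sketch of what a single doAs switch amounts to 
on the server side. The property name is the one discussed in this issue; the 
helper class and its wiring are assumptions for the sketch, not the actual 
HiveServer2 code:

{noformat}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {

    // Runs the action as the connecting client user when doAs is enabled,
    // otherwise as the server's own login user.
    public static <T> T runAs(Configuration conf, String clientUser,
                              PrivilegedExceptionAction<T> action) throws Exception {
        boolean doAs = conf.getBoolean("hive.server2.enable.doAs", true);
        if (!doAs) {
            return action.run();
        }
        UserGroupInformation proxy = UserGroupInformation.createProxyUser(
                clientUser, UserGroupInformation.getLoginUser());
        return proxy.doAs(action);
    }
}
{noformat}

With hive.server2.enable.doAs=false the action simply runs as the service's own 
user.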


 remove duplicate impersonation parameters for hiveserver2
 -

 Key: HIVE-4356
 URL: https://issues.apache.org/jira/browse/HIVE-4356
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.11.0

 Attachments: HIVE-4356.1.patch


 There are two parameters controlling impersonation in hiveserver2. 
 hive.server2.enable.doAs that controls this in kerberos secure mode, while 
 hive.server2.enable.doAs controls this for unsecure mode.
 We should have just one for both modes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4356) remove duplicate impersonation parameters for hiveserver2

2013-04-16 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633430#comment-13633430
 ] 

Thejas M Nair commented on HIVE-4356:
-

HIVE-4356.1.patch also refactors the ThriftCLIService.OpenSession code to make 
it easier to test.


 remove duplicate impersonation parameters for hiveserver2
 -

 Key: HIVE-4356
 URL: https://issues.apache.org/jira/browse/HIVE-4356
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.11.0

 Attachments: HIVE-4356.1.patch


 There are two parameters controlling impersonation in hiveserver2. 
 hive.server2.enable.doAs that controls this in kerberos secure mode, while 
 hive.server2.enable.doAs controls this for unsecure mode.
 We should have just one for both modes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-16 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated HIVE-4300:
--

Attachment: (was: HIVE-4300.2.patch)

 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add file... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.h
 # serde/src/gen/thrift/gen-cpp/megastruct_types.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_types.h
 # 

[jira] [Updated] (HIVE-4300) ant thriftif generated code that is checkedin is not up-to-date

2013-04-16 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated HIVE-4300:
--

Attachment: HIVE-4300.2.patch

 ant thriftif  generated code that is checkedin is not up-to-date
 

 Key: HIVE-4300
 URL: https://issues.apache.org/jira/browse/HIVE-4300
 Project: Hive
  Issue Type: Bug
  Components: Thrift API
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4300.2.patch, HIVE-4300.patch


 running 'ant thriftif -Dthrift.home=/usr/local'  on a freshly checkedout 
 trunk should be a no-op as per 
 [instructions|https://cwiki.apache.org/Hive/howtocontribute.html#HowToContribute-GeneratingThriftCode]
 However, this is not the case. Some of the files seem to have been relocated or 
 the classes in them are now in a different file.
 Below is the git status showing the state after the command is run:
 # On branch trunk
 # Changes not staged for commit:
 #   (use git add/rm file... to update what will be committed)
 #   (use git checkout -- file... to discard changes in working directory)
 #
 # modified:   build.properties
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Database.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Index.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Partition.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/PrincipalPrivilegeSet.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SerDeInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StorageDescriptor.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Table.java
 # modified:   
 metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
 # deleted:metastore/src/gen/thrift/gen-php/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_constants.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
 # deleted:
 metastore/src/gen/thrift/gen-php/hive_metastore_constants.php
 # deleted:metastore/src/gen/thrift/gen-php/hive_metastore_types.php
 # modified:   
 metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
 # deleted:ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/InnerStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/test/ThriftTestObj.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/Complex.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/IntString.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
 # modified:   
 serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_constants.php
 # deleted:serde/src/gen/thrift/gen-php/serde/serde_types.php
 # deleted:service/src/gen/thrift/gen-php/hive_service/ThriftHive.php
 # deleted:
 service/src/gen/thrift/gen-php/hive_service/hive_service_types.php
 # modified:   service/src/gen/thrift/gen-py/TCLIService/TCLIService-remote
 # modified:   service/src/gen/thrift/gen-py/hive_service/ThriftHive-remote
 #
 # Untracked files:
 #   (use git add file... to include in what will be committed)
 #
 # serde/src/gen/thrift/gen-cpp/complex_constants.cpp
 # serde/src/gen/thrift/gen-cpp/complex_constants.h
 # serde/src/gen/thrift/gen-cpp/complex_types.cpp
 # serde/src/gen/thrift/gen-cpp/complex_types.h
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_constants.h
 # serde/src/gen/thrift/gen-cpp/megastruct_types.cpp
 # serde/src/gen/thrift/gen-cpp/megastruct_types.h
 # 

[jira] [Commented] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633452#comment-13633452
 ] 

Phabricator commented on HIVE-4278:
---

khorgath has commented on the revision HIVE-4278 [jira] HCat needs to get 
current Hive jars instead of pulling them from maven repo.

INLINE COMMENTS
  hcatalog/build-support/ant/deploy.xml:82 Agreed, we should be using the Ant 
task; changing that and uploading a rebased patch on the JIRA.

REVISION DETAIL
  https://reviews.facebook.net/D10257

BRANCH
  HIVE-4278

ARCANIST PROJECT
  hive

To: JIRA, cwsteinbach, travis, ashutoshc, khorgath


 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Sushanth Sowmyan
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the Maven repo instead 
 of using the ones built as part of the current build. Now that HCatalog is 
 part of Hive, it should use the locally built jars instead of pulling them from Maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4278:
---

Attachment: HIVE-4278.approach2.patch.2.for.branch.12

Uploading a rebased patch for 0.12 that takes Ashutosh's review comment into 
account (not reposting on Phabricator because of merge hell).

 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Sushanth Sowmyan
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, 
 HIVE-4278.approach2.patch.2.for.branch.12, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the Maven repo instead 
 of using the ones built as part of the current build. Now that HCatalog is 
 part of Hive, it should use the locally built jars instead of pulling them from Maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4278) HCat needs to get current Hive jars instead of pulling them from maven repo

2013-04-16 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-4278:
---

Attachment: HIVE-4278.approach2.patch.2.for.branch.11

Uploading a rebased patch for 0.11 that takes Ashutosh's review comment into 
account (not reposting on Phabricator because of merge hell).

 HCat needs to get current Hive jars instead of pulling them from maven repo
 ---

 Key: HIVE-4278
 URL: https://issues.apache.org/jira/browse/HIVE-4278
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure, HCatalog
Affects Versions: 0.11.0
Reporter: Alan Gates
Assignee: Sushanth Sowmyan
Priority: Blocker
 Fix For: 0.11.0

 Attachments: HIVE-4278.approach2.patch, 
 HIVE-4278.approach2.patch.2.for.branch.11, 
 HIVE-4278.approach2.patch.2.for.branch.12, HIVE-4278.D10257.1.patch, 
 HIVE-4278.D9981.1.patch


 The HCatalog build is currently pulling Hive jars from the Maven repo instead 
 of using the ones built as part of the current build. Now that HCatalog is 
 part of Hive, it should use the locally built jars instead of pulling them from Maven.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Is there a set of queries that can be used to benchmark Hive performance?

2013-04-16 Thread ur lops
I am looking to benchmark my database with Hive, but before I do that I want
to run a set of tests to benchmark Hive itself. Is there something in Hive
similar to Pig's GridMix?
Thanks in advance,
Rob.


[jira] [Commented] (HIVE-4106) SMB joins fail in multi-way joins

2013-04-16 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633566#comment-13633566
 ] 

Vikram Dixit K commented on HIVE-4106:
--

Hi [~namit]

I am still able to reproduce this issue. The test case provided in this JIRA 
still reproduces it as well.

Thanks
Vikram.

 SMB joins fail in multi-way joins
 -

 Key: HIVE-4106
 URL: https://issues.apache.org/jira/browse/HIVE-4106
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Blocker
 Attachments: auto_sortmerge_join_12.q, HIVE-4106.patch


 I see an ArrayIndexOutOfBoundsException in the case of multi-way SMB joins. This is 
 related to changes that went in as part of HIVE-3403. The issue has been 
 discussed in HIVE-3891.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4106) SMB joins fail in multi-way joins

2013-04-16 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633568#comment-13633568
 ] 

Vikram Dixit K commented on HIVE-4106:
--

{noformat}
java.lang.ArrayIndexOutOfBoundsException: 1
  at org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertJoinToBucketMapJoin(AbstractSMBJoinProc.java:476)
  at org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertJoinToSMBJoin(AbstractSMBJoinProc.java:431)
  at org.apache.hadoop.hive.ql.optimizer.SortedMergeJoinProc.process(SortedMergeJoinProc.java:46)
  at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
  at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:87)
  at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:124)
  at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:101)
  at org.apache.hadoop.hive.ql.optimizer.SortedMergeBucketMapJoinOptimizer.transform(SortedMergeBucketMapJoinOptimizer.java:109)
...
{noformat}
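
The trace above comes from the optimizer pass that decides whether a multi-way join 
can be converted to a sort-merge-bucket (SMB) join. The sketch below only illustrates 
the kind of setup involved; the JDBC URL, table names, and session settings are 
assumptions, and the actual test case for this issue is the attached 
auto_sortmerge_join_12.q, so this is not a verified reproduction.

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative only: creates three bucketed, sorted tables and runs a three-way
// join with SMB-join conversion enabled. Connection details and table names are
// assumptions, not taken from the attached test case.
public class MultiWaySmbJoinSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    Connection conn =
        DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
    Statement stmt = conn.createStatement();

    // Settings that allow the optimizer to consider a sort-merge-bucket join.
    stmt.execute("SET hive.optimize.bucketmapjoin=true");
    stmt.execute("SET hive.optimize.bucketmapjoin.sortedmerge=true");
    stmt.execute("SET hive.auto.convert.sortmerge.join=true");

    // Three tables bucketed and sorted on the same join key.
    for (String t : new String[] {"smb_a", "smb_b", "smb_c"}) {
      stmt.execute("CREATE TABLE IF NOT EXISTS " + t + " (key INT, value STRING) "
          + "CLUSTERED BY (key) SORTED BY (key) INTO 2 BUCKETS");
    }

    // The multi-way join that exercises the SMB conversion path in the optimizer.
    ResultSet rs = stmt.executeQuery(
        "SELECT count(*) FROM smb_a a JOIN smb_b b ON a.key = b.key "
            + "JOIN smb_c c ON a.key = c.key");
    while (rs.next()) {
      System.out.println("rows: " + rs.getLong(1));
    }
    conn.close();
  }
}
{noformat}

Roughly speaking, the conversion logic has to line up bucketing and sort metadata 
across every input of the join, which is where the multi-way case differs from a 
plain two-table join.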

 SMB joins fail in multi-way joins
 -

 Key: HIVE-4106
 URL: https://issues.apache.org/jira/browse/HIVE-4106
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Blocker
 Attachments: auto_sortmerge_join_12.q, HIVE-4106.patch


 I see an ArrayIndexOutOfBoundsException in the case of multi-way SMB joins. This is 
 related to changes that went in as part of HIVE-3403. The issue has been 
 discussed in HIVE-3891.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4284) Implement class for vectorized row batch

2013-04-16 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-4284:
--

Attachment: HIVE-4284.2.patch

new patch that addresses Jitendra's comments

 Implement class for vectorized row batch
 

 Key: HIVE-4284
 URL: https://issues.apache.org/jira/browse/HIVE-4284
 Project: Hive
  Issue Type: Sub-task
Reporter: Jitendra Nath Pandey
Assignee: Eric Hanson
 Attachments: HIVE-4284.1.patch, HIVE-4284.2.patch


 The vectorized row batch object will represent the row batch that vectorized 
 operators work on. Refer to the design spec attached to HIVE-4160 for details.
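
As a rough illustration of the concept only (not the HIVE-4284 patch itself), a 
vectorized row batch can be pictured as a set of per-column primitive arrays plus a 
selection vector; the class and field names below are hypothetical.

{noformat}
// Minimal sketch of a vectorized row batch; SimpleRowBatch and its fields are
// made-up names and do not correspond to the actual HIVE-4284 classes.
public class SimpleRowBatch {
  public static final int DEFAULT_SIZE = 1024;  // rows carried per batch

  public final long[][] longColumns;   // one primitive array per long column
  public final boolean[][] isNull;     // per-column null flags, parallel arrays
  public final int[] selected;         // indexes of rows still "alive" after filtering
  public boolean selectedInUse;        // if false, rows 0..size-1 are all valid
  public int size;                     // number of valid rows in this batch

  public SimpleRowBatch(int numLongColumns) {
    longColumns = new long[numLongColumns][DEFAULT_SIZE];
    isNull = new boolean[numLongColumns][DEFAULT_SIZE];
    selected = new int[DEFAULT_SIZE];
  }

  /** Example of the tight per-column loop a vectorized operator would run. */
  public long sumColumn(int col) {
    long sum = 0;
    if (selectedInUse) {
      for (int j = 0; j < size; j++) {
        int i = selected[j];
        if (!isNull[col][i]) {
          sum += longColumns[col][i];
        }
      }
    } else {
      for (int i = 0; i < size; i++) {
        if (!isNull[col][i]) {
          sum += longColumns[col][i];
        }
      }
    }
    return sum;
  }
}
{noformat}

The point of the batch layout is that operators iterate over primitive arrays a 
batch of rows at a time instead of interpreting one row object per call.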

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-hadoop2 - Build # 163 - Still Failing

2013-04-16 Thread Apache Jenkins Server
Changes for Build #138
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)

[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver 
(Thejas Nair via Ashutosh Chauhan)

[namit] HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit)

[namit] HIVE-4240 optimize hive.enforce.bucketing and hive.enforce sorting 
insert
(Gang Tim Liu via namit)

[navis] HIVE-4288 Add IntelliJ project files files to .gitignore (Roshan Naik 
via Navis)

[namit] HIVE-4272 partition wise metadata does not work for text files

[hashutosh] HIVE-896 : Add LEAD/LAG/FIRST/LAST analytical windowing functions 
to Hive. (Harish Butani via Ashutosh Chauhan)

[namit] HIVE-4260 union_remove_12, union_remove_13 are failing on hadoop2
(Gunther Hagleitner via namit)

[hashutosh] HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover 
via Ashutosh Chauhan)

[namit] HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple 
input partitions
(Namit via Gang Tim Liu)

[hashutosh] HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan)

[hashutosh] HIVE-4122 : Queries fail if timestamp data not in expected format 
(Prasad Mujumdar via Ashutosh Chauhan)

[hashutosh] HIVE-4170 : [REGRESSION] FsShell.close closes filesystem, removing 
temporary directories (Navis via Ashutosh Chauhan)

[gates] HIVE-4264 Moved hcatalog trunk code up to hive/trunk/hcatalog

[hashutosh] HIVE-4263 : Adjust build.xml package command to move all hcat jars 
and binaries into build (Alan Gates via Ashutosh Chauhan)

[namit] HIVE-4258 Log logical plan tree for debugging
(Navis via namit)

[navis] HIVE-2264 Hive server is SHUTTING DOWN when invalid queries are being 
executed

[kevinwilfong] HIVE-4235. CREATE TABLE IF NOT EXISTS uses inefficient way to 
check if table exists. (Gang Tim Liu via kevinwilfong)

[gangtimliu] HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via 
Gang Tim Liu)

[gangtimliu] HIVE-4155: Expose ORC's FileDump as a service

[gangtimliu] HIVE-4159: RetryingHMSHandler doesn't retry in enough cases (Kevin 
Wilfong via Gang Tim Liu)

[namit] HIVE-4149 wrong results big outer joins with array of ints
(Navis via namit)

[namit] HIVE-3958 support partial scan for analyze command - RCFile
(Gang Tim Liu via namit)

[gates] Removing old branches to limit size of Hive downloads.

[gates] Removing tags directory as we no longer need them and they're in the 
history.

[gates] Moving HCatalog into Hive.

[gates] Test that perms work for hcatalog

[hashutosh] HIVE-4007 : Create abstract classes for serializer and deserializer 
(Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-3381 : Result of outer join is not valid (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3980 : Cleanup after 3403 (Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-4042 : ignore mapjoin hint (Namit Jain via Ashutosh Chauhan)

[namit] HIVE-3348 semi-colon in comments in .q file does not work
(Nick Collins via namit)

[namit] HIVE-4212 sort merge join should work for outer joins for more than 8 
inputs
(Namit via Gang Tim Liu)

[namit] HIVE-4219 explain dependency does not capture the input table
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4092. Store complete names of tables in column access 
analyzer (Samuel Yuan via kevinwilfong)

[namit] HIVE-4208 Clientpositive test parenthesis_star_by is non-deterministic
(Mark Grover via namit)

[cws] HIVE-4217. Fix show_create_table_*.q test failures (Carl Steinbach via 
cws)

[namit] HIVE-4206 Sort merge join does not work for outer joins for 7 inputs
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4188. TestJdbcDriver2.testDescribeTable failing 
consistently. (Prasad Mujumdar via kevinwilfong)

[hashutosh] HIVE-3820 Consider creating a literal like D or BD for representing 
Decimal type constants (Gunther Hagleitner 

[jira] [Commented] (HIVE-4284) Implement class for vectorized row batch

2013-04-16 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633605#comment-13633605
 ] 

Carl Steinbach commented on HIVE-4284:
--

Hive's coding conventions are described here:
https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-CodingConvention

Please review this and correct the formatting issues in this patch.

bq. Please limit the length of a line to 80 chars.

100 characters is acceptable.
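
Purely as an illustration of the line-length point (the names below are made up, and 
nothing here is taken from the patch under review), a long declaration can be wrapped 
so that each line stays within the limit:

{noformat}
import java.util.List;

// Hypothetical example of wrapping a long method signature so that every line
// stays within the 100-character limit mentioned above.
public class LineLengthExample {
  public long countMatchingRows(List<String> projectedColumnNames,
      List<String> filterColumnNames, boolean caseSensitive, int maxRowsToScan) {
    // Body omitted; only the wrapped signature and continuation indentation matter here.
    return 0L;
  }
}
{noformat}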

 Implement class for vectorized row batch
 

 Key: HIVE-4284
 URL: https://issues.apache.org/jira/browse/HIVE-4284
 Project: Hive
  Issue Type: Sub-task
Reporter: Jitendra Nath Pandey
Assignee: Eric Hanson
 Attachments: HIVE-4284.1.patch, HIVE-4284.2.patch


 The vectorized row batch object will represent the row batch that vectorized 
 operators work on. Refer to the design spec attached to HIVE-4160 for details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

