[jira] [Commented] (HIVE-4568) Beeline needs to support resolving variables

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773743#comment-13773743
 ] 

Hudson commented on HIVE-4568:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #446 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/446/])
HIVE-4568 - Beeline needs to support resolving variables (Xuefu Zhang reviewed 
by Thejas M Nair) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525046)
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLine.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLine.properties
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/BeeLineOpts.java
* /hive/trunk/beeline/src/java/org/apache/hive/beeline/DatabaseConnection.java
* 
/hive/trunk/beeline/src/test/org/apache/hive/beeline/src/test/TestBeeLineWithArgs.java


 Beeline needs to support resolving variables
 

 Key: HIVE-4568
 URL: https://issues.apache.org/jira/browse/HIVE-4568
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-4568-1.patch, HIVE-4568-2.patch, HIVE-4568.3.patch, 
 HIVE-4568.4.patch, HIVE-4568.5.patch, HIVE-4568.6.patch, HIVE-4568.7.patch, 
 HIVE-4568.8.patch, HIVE-4568.patch


 The previous Hive CLI allowed users to specify Hive variables on the command 
 line via the --hivevar option. In the user's script, each reference to a Hive 
 variable is substituted with the variable's value. This lets users 
 parameterize a script and invoke it with different variable values. The 
 following invocation is one example:
 {code}
 hive --hivevar
  INPUT=/user/jenkins/oozie.1371538916178/examples/input-data/table
  --hivevar
  OUTPUT=/user/jenkins/oozie.1371538916178/examples/output-data/hive
  -f script.q
 {code}
 script.q makes use of hive variables:
 {code}
 CREATE EXTERNAL TABLE test (a INT) STORED AS TEXTFILE LOCATION '${INPUT}';
 INSERT OVERWRITE DIRECTORY '${OUTPUT}' SELECT * FROM test;
 {code}
 However, after upgrading to HiveServer2 and Beeline, this functionality is 
 missing: Beeline doesn't accept the --hivevar option, and Hive variables 
 aren't passed to the server, so they cannot be used for substitution. This 
 JIRA addresses the issue by providing backward-compatible behavior in 
 Beeline.
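 The substitution itself is plain textual replacement. As a minimal shell 
 sketch (the sed pipeline and the path are invented for illustration; this is 
 not Beeline's actual implementation), the effect of --hivevar on a script 
 statement looks like:
 {code}
```shell
# Illustrative only (path invented): mimic the textual effect of
# `--hivevar INPUT=<path>`, i.e. replacing ${INPUT} in a statement
# with the supplied value before the statement is executed.
INPUT=/tmp/example/input-data/table
echo "CREATE EXTERNAL TABLE test (a INT) LOCATION '\${INPUT}';" \
  | sed "s|\${INPUT}|$INPUT|g"
# prints: CREATE EXTERNAL TABLE test (a INT) LOCATION '/tmp/example/input-data/table';
```
 {code}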

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4113) Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773747#comment-13773747
 ] 

Hive QA commented on HIVE-4113:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12604367/HIVE-4113.8.patch

{color:red}ERROR:{color} -1 due to 272 failed/errored test(s), 3131 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_reordering_values
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_smb_mapjoin_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_binary_output_format
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketmapjoin_negative2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_case_sensitivity
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cast1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cluster
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_column_access_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_colname
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ctas_uses_database_location
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_decimal_udf
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_filter_join_breaktask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby1_limit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby1_map_skew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby3_map_skew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby6_map_skew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby7_map
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby7_map_skew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby7_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby7_noskew_multi_single_reducer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby8_map
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby8_map_skew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby8_noskew
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_complex_types
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_complex_types_multi_single_reducer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_cube1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_distinct_samekey
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_grouping_sets5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_multi_insert_common_distinct
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_multi_single_reducer
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_position
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_rollup1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_groupby_sort_skew_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto
{noformat}

[jira] [Commented] (HIVE-4568) Beeline needs to support resolving variables

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773753#comment-13773753
 ] 

Hudson commented on HIVE-4568:
--

SUCCESS: Integrated in Hive-trunk-h0.21 #2347 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2347/])
HIVE-4568 - Beeline needs to support resolving variables (Xuefu Zhang reviewed 
by Thejas M Nair) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525046)


[jira] [Commented] (HIVE-5327) Potential leak and cleanup in utilities.java

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773754#comment-13773754
 ] 

Hudson commented on HIVE-5327:
--

SUCCESS: Integrated in Hive-trunk-h0.21 #2347 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2347/])
HIVE-5327 - Potential leak and cleanup in utilities.java (Edward Capriolo via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525126)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java


 Potential leak and cleanup in utilities.java
 

 Key: HIVE-5327
 URL: https://issues.apache.org/jira/browse/HIVE-5327
 Project: Hive
  Issue Type: Bug
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5327.patch.txt






[jira] [Commented] (HIVE-4914) filtering via partition name should be done inside metastore server (implementation)

2013-09-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773758#comment-13773758
 ] 

Hive QA commented on HIVE-4914:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12604372/HIVE-4914.05.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 3129 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.io.sarg.TestSearchArgumentImpl.testExpression3
org.apache.hadoop.hive.ql.io.sarg.TestSearchArgumentImpl.testExpression5
org.apache.hadoop.hive.ql.io.sarg.TestSearchArgumentImpl.testExpression9
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/852/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/852/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

 filtering via partition name should be done inside metastore server 
 (implementation)
 

 Key: HIVE-4914
 URL: https://issues.apache.org/jira/browse/HIVE-4914
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: D12561.5.patch, HIVE-4914.01.patch, HIVE-4914.02.patch, 
 HIVE-4914.03.patch, HIVE-4914.04.patch, HIVE-4914.05.patch, 
 HIVE-4914.D12561.1.patch, HIVE-4914.D12561.2.patch, HIVE-4914.D12561.3.patch, 
 HIVE-4914.D12561.4.patch, HIVE-4914.D12645.1.patch, 
 HIVE-4914-only-no-gen.patch, HIVE-4914-only.patch, HIVE-4914.patch, 
 HIVE-4914.patch, HIVE-4914.patch


 Currently, if the filter pushdown is impossible (which is most cases), the 
 client gets all partition names from metastore, filters them, and asks for 
 partitions by names for the filtered set.
 Metastore server code should do that instead; it should check if pushdown is 
 possible and do it if so; otherwise it should do name-based filtering.
 Saves the roundtrip with all partition names from the server to client, and 
 also removes the need to have pushdown viability checking on both sides.
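 A rough shell analogy of today's client-side flow (the partition names and 
 the filter below are invented; the real client does this in Java against the 
 metastore Thrift API): fetch every partition name, filter locally, then 
 request only the matching partitions by name. The proposed fix moves the 
 filtering step into the metastore server.
 {code}
```shell
# Illustrative analogy of the current client-side flow:
# 1. client fetches ALL partition names from the metastore,
# 2. filters them locally (here: keep only ds=2013-09-21),
# 3. asks the server for just the surviving partitions by name.
all_names="ds=2013-09-20/hr=23
ds=2013-09-21/hr=00
ds=2013-09-21/hr=01"
matching=$(printf '%s\n' "$all_names" | grep '^ds=2013-09-21/')
printf '%s\n' "$matching"
```
 {code}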



[jira] [Commented] (HIVE-5327) Potential leak and cleanup in utilities.java

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773763#comment-13773763
 ] 

Hudson commented on HIVE-5327:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #447 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/447/])
HIVE-5327 - Potential leak and cleanup in utilities.java (Edward Capriolo via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525126)


[jira] [Commented] (HIVE-5209) JDBC support for varchar

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773761#comment-13773761
 ] 

Hudson commented on HIVE-5209:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #447 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/447/])
HIVE-5209: JDBC support for varchar (Jason Dere via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525186)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/JdbcColumn.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/JdbcColumnAttributes.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/service/if/TCLIService.thrift
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_constants.cpp
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_constants.h
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_types.cpp
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_types.h
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/service/ThriftHive.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TCLIServiceConstants.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TColumn.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TExecuteStatementReq.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TGetTablesReq.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TOpenSessionReq.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TOpenSessionResp.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TPrimitiveTypeEntry.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TProtocolVersion.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TRow.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TRowSet.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TStatus.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TStructTypeEntry.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTableSchema.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeDesc.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeId.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeQualifierValue.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeQualifiers.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TUnionTypeEntry.java
* /hive/trunk/service/src/gen/thrift/gen-py/TCLIService/constants.py
* /hive/trunk/service/src/gen/thrift/gen-py/TCLIService/ttypes.py
* /hive/trunk/service/src/gen/thrift/gen-rb/t_c_l_i_service_constants.rb
* /hive/trunk/service/src/gen/thrift/gen-rb/t_c_l_i_service_types.rb
* /hive/trunk/service/src/java/org/apache/hive/service/cli/ColumnDescriptor.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/ColumnValue.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/Type.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/TypeDescriptor.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/TypeQualifiers.java


 JDBC support for varchar
 

 Key: HIVE-5209
 URL: https://issues.apache.org/jira/browse/HIVE-5209
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2, JDBC, Types
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.12.0

 Attachments: D12999.1.patch, HIVE-5209.1.patch, HIVE-5209.2.patch, 
 HIVE-5209.4.patch, HIVE-5209.5.patch, HIVE-5209.6.patch, 
 HIVE-5209.D12705.1.patch


 Support returning varchar length in result set metadata



[jira] [Commented] (HIVE-5327) Potential leak and cleanup in utilities.java

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773762#comment-13773762
 ] 

Hudson commented on HIVE-5327:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #176 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/176/])
HIVE-5327 - Potential leak and cleanup in utilities.java (Edward Capriolo via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525126)


[jira] [Commented] (HIVE-5209) JDBC support for varchar

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773760#comment-13773760
 ] 

Hudson commented on HIVE-5209:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #176 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/176/])
HIVE-5209: JDBC support for varchar (Jason Dere via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525186)


[jira] [Commented] (HIVE-5327) Potential leak and cleanup in utilities.java

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773820#comment-13773820
 ] 

Hudson commented on HIVE-5327:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #109 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/109/])
HIVE-5327 - Potential leak and cleanup in utilities.java (Edward Capriolo via 
Brock Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525126)


[jira] [Commented] (HIVE-5209) JDBC support for varchar

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773819#comment-13773819
 ] 

Hudson commented on HIVE-5209:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #109 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/109/])
HIVE-5209: JDBC support for varchar (Jason Dere via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525186)


[jira] [Commented] (HIVE-5333) Milestone 2: Generate tests under maven

2013-09-21 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773827#comment-13773827
 ] 

Edward Capriolo commented on HIVE-5333:
---

A good portion of the tests do not work because they assume properties that are 
set via ant. I would like to remove all or most of these properties and use the 
maven default of $cwd/target instead. Does that seem like the best way to handle 
it? 

 Milestone 2: Generate tests under maven
 ---

 Key: HIVE-5333
 URL: https://issues.apache.org/jira/browse/HIVE-5333
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland





[jira] [Commented] (HIVE-5333) Milestone 2: Generate tests under maven

2013-09-21 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773829#comment-13773829
 ] 

Brock Noland commented on HIVE-5333:


I'd like to remove as many as possible as well. At the same time, I was hoping 
to keep the Hive ant build working for this patch until the final time we 
execute maven-follforward.sh and merge to trunk. My thought was to convert the 
properties to $basedir/target but set them in maven as system properties (I 
think this is how ant does it?). Then once we are fully on maven we can 
remove them. Thoughts?
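The idea above (keeping the ant-style properties but supplying them to tests as system properties under Maven) would typically go through the surefire plugin. A hedged sketch; the property name build.dir is illustrative, not an actual Hive build property:

```xml
<!-- Hypothetical maven-surefire-plugin configuration: exposes the Maven
     build directory to tests as a system property, mirroring what the
     ant build did. The property name "build.dir" is an assumption. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <build.dir>${project.build.directory}</build.dir>
    </systemPropertyVariables>
  </configuration>
</plugin>
```

Tests would then read the location with `System.getProperty("build.dir")` regardless of which build system launched them.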

 Milestone 2: Generate tests under maven
 ---

 Key: HIVE-5333
 URL: https://issues.apache.org/jira/browse/HIVE-5333
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland





[jira] [Commented] (HIVE-5333) Milestone 2: Generate tests under maven

2013-09-21 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773835#comment-13773835
 ] 

Edward Capriolo commented on HIVE-5333:
---

That makes sense. I was hoping to deal with this during the transition, but as 
long as we are committed to removing the properties after the transition, this 
is a better plan. I see the props as a big barrier to making the components 
testable and re-usable, so I want to see them gone. I like your plan. Keep them 
for now.

 Milestone 2: Generate tests under maven
 ---

 Key: HIVE-5333
 URL: https://issues.apache.org/jira/browse/HIVE-5333
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Brock Noland





[jira] [Updated] (HIVE-4624) Integrate Vectorized Substr into Vectorized QE

2013-09-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4624:
---

   Resolution: Fixed
Fix Version/s: vectorization-branch
   Status: Resolved  (was: Patch Available)

Committed to branch. Thanks, Eric!

 Integrate Vectorized Substr into Vectorized QE
 --

 Key: HIVE-4624
 URL: https://issues.apache.org/jira/browse/HIVE-4624
 Project: Hive
  Issue Type: Sub-task
Affects Versions: vectorization-branch
Reporter: Timothy Chen
Assignee: Eric Hanson
 Fix For: vectorization-branch

 Attachments: HIVE-4624.1-vectorization.patch, 
 HIVE-4624.2-vectorization.patch


 Need to hook up the Vectorized Substr directly into Hive Vectorized QE so it 
 can be leveraged.



[jira] [Issue Comment Deleted] (HIVE-5340) TestJdbcDriver2 is failing on trunk.

2013-09-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5340:
---

Comment: was deleted

(was: I am seeing the following trace in the log of all my tests:
{code}
java.lang.NoSuchFieldError: HIVE_CLI_SERVICE_PROTOCOL_V2
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:106)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:111)
at java.sql.DriverManager.getConnection(DriverManager.java:582)
at java.sql.DriverManager.getConnection(DriverManager.java:185)
at org.apache.hive.jdbc.TestJdbcDriver2.setUp(TestJdbcDriver2.java:96)

{code})

 TestJdbcDriver2 is failing on trunk.
 

 Key: HIVE-5340
 URL: https://issues.apache.org/jira/browse/HIVE-5340
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan

 Seems to be related to yesterday's HIVE-5209 commit



[jira] [Assigned] (HIVE-5318) Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10

2013-09-21 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang reassigned HIVE-5318:
-

Assignee: Xuefu Zhang

 Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10
 

 Key: HIVE-5318
 URL: https://issues.apache.org/jira/browse/HIVE-5318
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.9.0, 0.10.0
Reporter: Brad Ruderman
Assignee: Xuefu Zhang
Priority: Critical

 When Exporting hive tables using the hive command in Hive 0.9 EXPORT table 
 TO 'hdfs_path' then importing to another hive 0.10 instance using IMPORT 
 FROM 'hdfs_path', hive throws this error:
 13/09/18 13:14:02 ERROR ql.Driver: FAILED: SemanticException Exception while 
 processing
 org.apache.hadoop.hive.ql.parse.SemanticException: Exception while processing
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: java.lang.NullPointerException
   at java.util.ArrayList.<init>(ArrayList.java:131)
   at 
 org.apache.hadoop.hive.ql.plan.CreateTableDesc.<init>(CreateTableDesc.java:128)
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:99)
   ... 16 more
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=compile 
 start=1379535241411 end=1379535242332 duration=921>
 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks 
 start=1379535242332 end=1379535242332 duration=0>
 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks 
 start=1379535242333 end=1379535242333 duration=0>
 This is probably a critical blocker for people who are trying to test Hive 
 0.10 in their staging environments prior to the upgrade from 0.9



[jira] [Updated] (HIVE-5318) Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10

2013-09-21 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5318:
--

Attachment: HIVE-5318.patch

 Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10
 

 Key: HIVE-5318
 URL: https://issues.apache.org/jira/browse/HIVE-5318
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.9.0, 0.10.0
Reporter: Brad Ruderman
Assignee: Xuefu Zhang
Priority: Critical
 Attachments: HIVE-5318.patch


 When Exporting hive tables using the hive command in Hive 0.9 EXPORT table 
 TO 'hdfs_path' then importing to another hive 0.10 instance using IMPORT 
 FROM 'hdfs_path', hive throws this error:
 13/09/18 13:14:02 ERROR ql.Driver: FAILED: SemanticException Exception while 
 processing
 org.apache.hadoop.hive.ql.parse.SemanticException: Exception while processing
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: java.lang.NullPointerException
   at java.util.ArrayList.<init>(ArrayList.java:131)
   at 
 org.apache.hadoop.hive.ql.plan.CreateTableDesc.<init>(CreateTableDesc.java:128)
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:99)
   ... 16 more
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=compile 
 start=1379535241411 end=1379535242332 duration=921>
 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks 
 start=1379535242332 end=1379535242332 duration=0>
 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks 
 start=1379535242333 end=1379535242333 duration=0>
 This is probably a critical blocker for people who are trying to test Hive 
 0.10 in their staging environments prior to the upgrade from 0.9



[jira] [Commented] (HIVE-5209) JDBC support for varchar

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773875#comment-13773875
 ] 

Hudson commented on HIVE-5209:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2348 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2348/])
HIVE-5209: JDBC support for varchar (Jason Dere via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525186)
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/JdbcColumn.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/JdbcColumnAttributes.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* /hive/trunk/service/if/TCLIService.thrift
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_constants.cpp
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_constants.h
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_types.cpp
* /hive/trunk/service/src/gen/thrift/gen-cpp/TCLIService_types.h
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/service/ThriftHive.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TCLIServiceConstants.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TColumn.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TExecuteStatementReq.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TGetTablesReq.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TOpenSessionReq.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TOpenSessionResp.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TPrimitiveTypeEntry.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TProtocolVersion.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TRow.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TRowSet.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TStatus.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TStructTypeEntry.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTableSchema.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeDesc.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeId.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeQualifierValue.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TTypeQualifiers.java
* 
/hive/trunk/service/src/gen/thrift/gen-javabean/org/apache/hive/service/cli/thrift/TUnionTypeEntry.java
* /hive/trunk/service/src/gen/thrift/gen-py/TCLIService/constants.py
* /hive/trunk/service/src/gen/thrift/gen-py/TCLIService/ttypes.py
* /hive/trunk/service/src/gen/thrift/gen-rb/t_c_l_i_service_constants.rb
* /hive/trunk/service/src/gen/thrift/gen-rb/t_c_l_i_service_types.rb
* /hive/trunk/service/src/java/org/apache/hive/service/cli/ColumnDescriptor.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/ColumnValue.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/Type.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/TypeDescriptor.java
* /hive/trunk/service/src/java/org/apache/hive/service/cli/TypeQualifiers.java


 JDBC support for varchar
 

 Key: HIVE-5209
 URL: https://issues.apache.org/jira/browse/HIVE-5209
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2, JDBC, Types
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.12.0

 Attachments: D12999.1.patch, HIVE-5209.1.patch, HIVE-5209.2.patch, 
 HIVE-5209.4.patch, HIVE-5209.5.patch, HIVE-5209.6.patch, 
 HIVE-5209.D12705.1.patch


 Support returning varchar length in result set metadata



[jira] [Updated] (HIVE-3764) Support metastore version consistency check

2013-09-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3764:
---

   Resolution: Fixed
Fix Version/s: (was: 0.12.0)
   0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Prasad!

 Support metastore version consistency check
 ---

 Key: HIVE-3764
 URL: https://issues.apache.org/jira/browse/HIVE-3764
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-3764-12.3.patch, HIVE-3764.1.patch, 
 HIVE-3764.2.patch, HIVE-3764-trunk.3.patch


 Today there's no version/compatibility information stored in hive metastore. 
 Also the datanucleus configuration property to automatically create missing 
 tables is enabled by default. If you happen to start an older or newer hive 
 or don't run the correct upgrade scripts during migration, the metastore 
 would end up corrupted. The autoCreate schema is not always sufficient to 
 upgrade metastore when migrating to newer release. It's not supported with 
 all databases. Besides the migration often involves altering existing table, 
 changing or moving data etc.
 Hence it's very useful to have a consistency check to make sure that Hive is 
 using the correct metastore, and that for production systems the schema is not 
 created automatically by running Hive.



[jira] [Updated] (HIVE-4732) Reduce or eliminate the expensive Schema equals() check for AvroSerde

2013-09-21 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4732:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Mohammad!

 Reduce or eliminate the expensive Schema equals() check for AvroSerde
 -

 Key: HIVE-4732
 URL: https://issues.apache.org/jira/browse/HIVE-4732
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Reporter: Mark Wagner
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: HIVE-4732.1.patch, HIVE-4732.4.patch, HIVE-4732.5.patch, 
 HIVE-4732.6.patch, HIVE-4732.7.patch, HIVE-4732.v1.patch, HIVE-4732.v4.patch


 The AvroSerde spends a significant amount of time checking schema equality. 
 Changing to compare hashcodes (which can be computed once then reused) will 
 improve performance.
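The optimization described above (consult a cached hash code before falling back to a deep equality check) can be sketched in plain Java. This is an illustration of the general technique, not the actual AvroSerde or Avro Schema code; the int[] stands in for schema contents:

```java
import java.util.Arrays;

// Illustrative sketch: wrap an expensive-to-compare value and cache its
// hashCode so inequality is usually decided without a deep comparison.
class CachedHashSchema {
    private final int[] fields;      // stand-in for real schema contents
    private final int cachedHash;    // computed once, reused on every check

    CachedHashSchema(int[] fields) {
        this.fields = fields.clone();
        this.cachedHash = Arrays.hashCode(this.fields);
    }

    @Override
    public int hashCode() {
        return cachedHash;           // O(1) after construction
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CachedHashSchema)) return false;
        CachedHashSchema other = (CachedHashSchema) o;
        // Cheap check first: different hashes mean definitely not equal.
        if (cachedHash != other.cachedHash) return false;
        // Hashes match: fall back to the full (expensive) comparison.
        return Arrays.equals(fields, other.fields);
    }
}
```

Equal hashes do not guarantee equality, so the deep comparison is still needed on a hash match; the win is that most unequal schemas are rejected in constant time.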



[jira] [Commented] (HIVE-5301) Add a schema tool for offline metastore schema upgrade

2013-09-21 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773900#comment-13773900
 ] 

Prasad Mujumdar commented on HIVE-5301:
---

[~ashutoshc] The 0.7 to 0.8 MySQL upgrade requires some changes to the patch. I 
am currently testing the fix and will update the patch.


 Add a schema tool for offline metastore schema upgrade
 --

 Key: HIVE-5301
 URL: https://issues.apache.org/jira/browse/HIVE-5301
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.12.0

 Attachments: HIVE-5301.1.patch, HIVE-5301-with-HIVE-3764.0.patch


 HIVE-3764 is addressing metastore version consistency.
 Besides it would be helpful to add a tool that can leverage this version 
 information to figure out the required set of upgrade scripts, and execute 
 those against the configured metastore. Now that Hive includes Beeline 
 client, it can be used to execute the scripts.



[jira] [Commented] (HIVE-5306) Use new GenericUDF instead of basic UDF for UDFAbs class

2013-09-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773901#comment-13773901
 ] 

Ashutosh Chauhan commented on HIVE-5306:


+1

 Use new GenericUDF instead of basic UDF for UDFAbs class
 

 Key: HIVE-5306
 URL: https://issues.apache.org/jira/browse/HIVE-5306
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Attachments: HIVE-5306.1.patch, HIVE-5306.2.patch, HIVE-5306.3.patch, 
 HIVE-5306.4.patch, HIVE-5306.5.patch, HIVE-5306.6.patch


 GenericUDF is the latest and recommended base class for any UDFs.
 This JIRA is to change the current UDFAbs class to extend GenericUDF.
 The general benefit of GenericUDF is described in its comments as 
 * The GenericUDFs are superior to normal UDFs in the following ways: 1. It can
  * accept arguments of complex types, and return complex types. 2. It can 
 accept
  * variable length of arguments. 3. It can accept an infinite number of 
 function
  * signatures - for example, it's easy to write a GenericUDF that accepts
  * array<int>, array<array<int>> and so on (arbitrary levels of nesting). 4. 
 It
  * can do short-circuit evaluations using DeferredObject.  



[jira] [Commented] (HIVE-5301) Add a schema tool for offline metastore schema upgrade

2013-09-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773903#comment-13773903
 ] 

Ashutosh Chauhan commented on HIVE-5301:


Cool. Thanks for testing!

 Add a schema tool for offline metastore schema upgrade
 --

 Key: HIVE-5301
 URL: https://issues.apache.org/jira/browse/HIVE-5301
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.12.0

 Attachments: HIVE-5301.1.patch, HIVE-5301-with-HIVE-3764.0.patch


 HIVE-3764 is addressing metastore version consistency.
 Besides it would be helpful to add a tool that can leverage this version 
 information to figure out the required set of upgrade scripts, and execute 
 those against the configured metastore. Now that Hive includes Beeline 
 client, it can be used to execute the scripts.



[jira] [Commented] (HIVE-5318) Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10

2013-09-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773906#comment-13773906
 ] 

Ashutosh Chauhan commented on HIVE-5318:


Those lists should never be null. Do we know why they become null, and is it OK 
not to copy them? Also, a test case would help to understand the issue better.
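The NullPointerException thrown from the ArrayList constructor in the reported stack trace is consistent with copy-constructing an ArrayList from a null list. A minimal plain-Java illustration of that failure mode and of a null-safe copy; this is a sketch of the general fix shape, not the actual HIVE-5318 patch:

```java
import java.util.ArrayList;
import java.util.List;

class NullSafeCopy {
    // new ArrayList<>(src) throws NullPointerException when src is null;
    // guarding with an empty-list default avoids the crash.
    static <T> List<T> copyOf(List<T> src) {
        return src == null ? new ArrayList<>() : new ArrayList<>(src);
    }

    // Demonstrates the failure mode seen in the stack trace.
    static boolean copyConstructorThrows(List<String> src) {
        try {
            new ArrayList<>(src);
            return false;
        } catch (NullPointerException e) {
            return true;   // ArrayList's copy constructor rejects null
        }
    }
}
```

Whether defaulting to an empty list is semantically correct here depends on why the exported metadata produced a null list in the first place, which is the question raised above.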

 Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10
 

 Key: HIVE-5318
 URL: https://issues.apache.org/jira/browse/HIVE-5318
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.9.0, 0.10.0
Reporter: Brad Ruderman
Assignee: Xuefu Zhang
Priority: Critical
 Attachments: HIVE-5318.patch


 When Exporting hive tables using the hive command in Hive 0.9 EXPORT table 
 TO 'hdfs_path' then importing to another hive 0.10 instance using IMPORT 
 FROM 'hdfs_path', hive throws this error:
 13/09/18 13:14:02 ERROR ql.Driver: FAILED: SemanticException Exception while 
 processing
 org.apache.hadoop.hive.ql.parse.SemanticException: Exception while processing
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: java.lang.NullPointerException
   at java.util.ArrayList.<init>(ArrayList.java:131)
   at 
 org.apache.hadoop.hive.ql.plan.CreateTableDesc.<init>(CreateTableDesc.java:128)
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:99)
   ... 16 more
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=compile 
 start=1379535241411 end=1379535242332 duration=921>
 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks 
 start=1379535242332 end=1379535242332 duration=0>
 13/09/18 13:14:02 INFO ql.Driver: <PERFLOG method=releaseLocks>
 13/09/18 13:14:02 INFO ql.Driver: </PERFLOG method=releaseLocks 
 start=1379535242333 end=1379535242333 duration=0>
 This is probably a critical blocker for people who are trying to test Hive 
 0.10 in their staging environments prior to the upgrade from 0.9



[jira] [Commented] (HIVE-4910) Hadoop 2 archives broken

2013-09-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773907#comment-13773907
 ] 

Ashutosh Chauhan commented on HIVE-4910:


+1 Lets go with this fix. We can always improve it later.

 Hadoop 2 archives broken
 

 Key: HIVE-4910
 URL: https://issues.apache.org/jira/browse/HIVE-4910
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Tests
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Minor
 Fix For: 0.11.1

 Attachments: HIVE-4910.patch, HIVE-4910.patch


 Hadoop 2 archive tests are broken. The issue stems from the fact that the har 
 URI constructed when unit tests are run does not have a port, so an invalid 
 URI is built, resulting in failures.
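The port-handling issue described above can be illustrated with java.net.URI: the multi-argument constructor simply omits the port from the authority when it is -1, so code that assumes a port is always present can build or parse an invalid har URI. This is an illustration only, not the actual Hive archive code; the host name is hypothetical:

```java
import java.net.URI;
import java.net.URISyntaxException;

class HarUriDemo {
    // Builds a har:// URI; the port may legitimately be absent (-1),
    // which is exactly the situation unit tests run into.
    static String harUri(String host, int port, String path) {
        try {
            return new URI("har", null, host, port, path, null, null).toString();
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

With port -1 the result has no ":port" segment at all, so any string manipulation that expects "scheme://host:port/..." produces a malformed URI.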



[jira] [Commented] (HIVE-5154) Remove unnecessary array creation in ReduceSinkOperator

2013-09-21 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773912#comment-13773912
 ] 

Phabricator commented on HIVE-5154:
---

ashutoshc has accepted the revision HIVE-5154 [jira] Remove unnecessary array 
creation in ReduceSinkOperator.

  +1

REVISION DETAIL
  https://reviews.facebook.net/D12549

BRANCH
  HIVE-5154

ARCANIST PROJECT
  hive

To: JIRA, ashutoshc, navis


 Remove unnecessary array creation in ReduceSinkOperator
 ---

 Key: HIVE-5154
 URL: https://issues.apache.org/jira/browse/HIVE-5154
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-5154.D12549.1.patch


 A key array is created for each row, which seems unnecessary.
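The fix described above is an instance of hoisting a scratch buffer out of the per-row loop, allocating it once and reusing it. A generic plain-Java sketch of the pattern, not the actual ReduceSinkOperator code:

```java
class ScratchBufferDemo {
    // Naive version: allocates a key array for every row (per-row garbage).
    static long sumFirstKeyAllocating(int[][] rows) {
        long total = 0;
        for (int[] row : rows) {
            Object[] key = new Object[row.length];   // fresh array per row
            for (int i = 0; i < row.length; i++) key[i] = row[i];
            total += (Integer) key[0];
        }
        return total;
    }

    // Hoisted version: one scratch array reused across all rows.
    static long sumFirstKeyReusing(int[][] rows, int width) {
        Object[] key = new Object[width];            // allocated once
        long total = 0;
        for (int[] row : rows) {
            for (int i = 0; i < width; i++) key[i] = row[i];
            total += (Integer) key[0];
        }
        return total;
    }
}
```

Reuse is safe here because the array is fully overwritten before each use; it would not be safe if references to the array escaped the loop body.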



Re: Review Request 14221: HIVE-4113: Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Yin Huai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14221/
---

(Updated Sept. 21, 2013, 10:26 p.m.)


Review request for hive.


Bugs: HIVE-4113
https://issues.apache.org/jira/browse/HIVE-4113


Repository: hive-git


Description
---

Modifies ColumnProjectionUtils such that there are two flags: one for the column 
ids and one indicating whether all columns should be read. Additionally, the 
patch updates all locations that used the old convention of an empty string 
indicating that all columns should be read.
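The two-flag design described above can be sketched generically. The class and method names here are illustrative assumptions, not the patch's actual ColumnProjectionUtils API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch of the two-flag idea: an explicit readAllColumns flag
// plus a list of column ids, instead of overloading "" to mean "all columns".
class ColumnProjection {
    private boolean readAllColumns = true;     // default: no projection set
    private final List<Integer> columnIds = new ArrayList<>();

    void setReadAllColumns() {
        readAllColumns = true;
        columnIds.clear();
    }

    void appendColumn(int id) {
        readAllColumns = false;                // specific columns requested
        if (!columnIds.contains(id)) columnIds.add(id);
    }

    boolean isReadAllColumns() { return readAllColumns; }

    List<Integer> getColumnIds() { return Collections.unmodifiableList(columnIds); }
}
```

The advantage over the empty-string sentinel is that "read nothing yet" and "read everything" are no longer conflated in one string value.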

The automatic formatter generated by ant eclipse-files is fairly aggressive, so 
there is some unrelated import/whitespace cleanup.

This one is based on https://reviews.apache.org/r/11770/ and has been rebased 
to the latest trunk.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 381bcbe 
  conf/hive-default.xml.template 6531e55 
  contrib/src/test/results/clientpositive/serde_typedbytes.q.out 8c22399 
  contrib/src/test/results/clientpositive/serde_typedbytes2.q.out 1e4881f 
  contrib/src/test/results/clientpositive/serde_typedbytes3.q.out 0186983 
  contrib/src/test/results/clientpositive/serde_typedbytes5.q.out ece8e43 
  contrib/src/test/results/clientpositive/udf_row_sequence.q.out f745840 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
 766056b 
  hbase-handler/src/test/results/positive/hbase_queries.q.out 0bd55f6 
  
hbase-handler/src/test/results/positive/hbase_single_sourced_multi_insert.q.out 
92e8175 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatBaseInputFormat.java
 553446a 
  
hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatPartitioned.java
 577e06d 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoader.java
 d38bb8d 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 31a52ba 
  ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java df2ccf1 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java ab0494e 
  ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java a5a8943 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 0f29a0e 
  ql/src/java/org/apache/hadoop/hive/ql/io/BucketizedHiveInputFormat.java 
49145b7 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java cccdc1b 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java a83f223 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileRecordReader.java 9521060 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java 50c5093 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java 
ed14e82 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java 1ede6d7 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java 52e9e6b 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 2259977 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java b97d869 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
 0550bf6 
  ql/src/test/org/apache/hadoop/hive/ql/io/PerformTestRCFileAndSeqFile.java 
fb9fca1 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestRCFile.java dd1276d 
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
83c5c38 
  ql/src/test/queries/clientpositive/binary_table_colserde.q eadf07d 
  ql/src/test/results/clientpositive/auto_join0.q.out a75c01c 
  ql/src/test/results/clientpositive/auto_join15.q.out 6fb0ea6 
  ql/src/test/results/clientpositive/auto_join18.q.out 945af67 
  ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out 500df42 
  ql/src/test/results/clientpositive/auto_join20.q.out 6dd8ff7 
  ql/src/test/results/clientpositive/auto_join27.q.out aac778c 
  ql/src/test/results/clientpositive/auto_join30.q.out b5b313c 
  ql/src/test/results/clientpositive/auto_join31.q.out ee8204f 
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out 53ce112 
  ql/src/test/results/clientpositive/auto_smb_mapjoin_14.q.out 2bc99fa 
  ql/src/test/results/clientpositive/auto_sortmerge_join_10.q.out 0cd7734 
  ql/src/test/results/clientpositive/auto_sortmerge_join_6.q.out 1274b76 
  ql/src/test/results/clientpositive/auto_sortmerge_join_9.q.out 96fcd2b 
  ql/src/test/results/clientpositive/binary_output_format.q.out ad245f2 
  ql/src/test/results/clientpositive/binary_table_colserde.q.out 69a6c6e 
  ql/src/test/results/clientpositive/bucket5.q.out 41e4a3e 
  ql/src/test/results/clientpositive/bucketizedhiveinputformat.q.out e4beebc 
  ql/src/test/results/clientpositive/bucketmapjoin1.q.out 122d061 
  ql/src/test/results/clientpositive/bucketmapjoin2.q.out 955b8a2 
  ql/src/test/results/clientpositive/bucketmapjoin3.q.out e0b86ae 
  ql/src/test/results/clientpositive/bucketmapjoin4.q.out bed6a0a 
  

[jira] [Updated] (HIVE-4113) Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated HIVE-4113:
---

Attachment: HIVE-4113.9.patch

 Optimize select count(1) with RCFile and Orc
 

 Key: HIVE-4113
 URL: https://issues.apache.org/jira/browse/HIVE-4113
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Gopal V
Assignee: Yin Huai
 Fix For: 0.12.0

 Attachments: HIVE-4113-0.patch, HIVE-4113.1.patch, HIVE-4113.2.patch, 
 HIVE-4113.3.patch, HIVE-4113.4.patch, HIVE-4113.5.patch, HIVE-4113.6.patch, 
 HIVE-4113.7.patch, HIVE-4113.8.patch, HIVE-4113.9.patch, HIVE-4113.patch, 
 HIVE-4113.patch


 select count(1) loads up every column & every row when used with RCFile.
 select count(1) from store_sales_10_rc gives
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 31.73 sec   HDFS Read: 234914410 
 HDFS Write: 8 SUCCESS
 {code}
 Whereas select count(ss_sold_date_sk) from store_sales_10_rc; reads far less:
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 29.75 sec   HDFS Read: 28145994 
 HDFS Write: 8 SUCCESS
 {code}
 That is 11% of the data size read by the COUNT(1).
 This was tracked down to the following code in RCFile.java:
 {code}
   } else {
     // TODO: if no column name is specified, e.g., in select count(1) from tt;
     // skip all columns; this should be distinguished from the case:
     // select * from tt;
     for (int i = 0; i < skippedColIDs.length; i++) {
       skippedColIDs[i] = false;
     }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3420) Inefficiency in hbase handler when process query including rowkey range scan

2013-09-21 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773917#comment-13773917
 ] 

Phabricator commented on HIVE-3420:
---

ashutoshc has accepted the revision HIVE-3420 [jira] Inefficiency in hbase 
handler when process query including rowkey range scan.

  +1

REVISION DETAIL
  https://reviews.facebook.net/D7311

BRANCH
  DPAL-1943

ARCANIST PROJECT
  hive

To: JIRA, ashutoshc, navis


 Inefficiency in hbase handler when process query including rowkey range scan
 

 Key: HIVE-3420
 URL: https://issues.apache.org/jira/browse/HIVE-3420
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
 Environment: Hive-0.9.0 + HBase-0.94.1
Reporter: Gang Deng
Assignee: Navis
Priority: Critical
 Attachments: HIVE-3420.D7311.1.patch

   Original Estimate: 2h
  Remaining Estimate: 2h

 When querying hive with an hbase rowkey range, hive map tasks do not leverage 
 the startrow/endrow information in the tablesplit. For example, if the rowkeys 
 fit into 5 hbase files, then there will be 5 map tasks. Ideally, each task 
 would process 1 file, but in the current implementation each task processes 
 all 5 files repeatedly. This behavior not only wastes network bandwidth, but 
 also worsens lock contention in the HBase block cache, as each task has to 
 access the same blocks. The problem code is in 
 HiveHBaseTableInputFormat.convertFilter, as below:
 ……
 if (tableSplit != null) {
   tableSplit = new TableSplit(
 tableSplit.getTableName(),
 startRow,
 stopRow,
 tableSplit.getRegionLocation());
 }
 scan.setStartRow(startRow);
 scan.setStopRow(stopRow);
 ……
 As the tableSplit already includes the startRow/endRow information of the 
 file, a better implementation would be:
 ……
 byte[] splitStart = startRow;
 byte[] splitStop = stopRow;
 if (tableSplit != null) {
   if (tableSplit.getStartRow() != null) {
     splitStart = startRow.length == 0 ||
       Bytes.compareTo(tableSplit.getStartRow(), startRow) >= 0 ?
         tableSplit.getStartRow() : startRow;
   }
   if (tableSplit.getEndRow() != null) {
     splitStop = (stopRow.length == 0 ||
       Bytes.compareTo(tableSplit.getEndRow(), stopRow) <= 0) &&
       tableSplit.getEndRow().length > 0 ?
         tableSplit.getEndRow() : stopRow;
   }
   tableSplit = new TableSplit(
     tableSplit.getTableName(),
     splitStart,
     splitStop,
     tableSplit.getRegionLocation());
 }
 scan.setStartRow(splitStart);
 scan.setStopRow(splitStop);
 ……
 In my test, the changed code improves performance by more than 30%.
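The proposed fix is essentially a range intersection between the scan's [startRow, stopRow) interval and the split's own key boundaries. A standalone sketch of that intersection, using a hypothetical helper and a plain unsigned byte comparison in place of HBase's Bytes.compareTo (an empty array stands for an unbounded side, as in HBase scans):

```java
// Sketch of intersecting a scan's key range with a split's own key
// range, mirroring the logic proposed above. Hypothetical helper
// class, not HBase or Hive code.
class RangeIntersect {
    // Lexicographic unsigned comparison, like HBase's Bytes.compareTo.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    static byte[] intersectStart(byte[] scanStart, byte[] splitStart) {
        if (splitStart.length == 0) return scanStart;
        // Take the later of the two start keys.
        return scanStart.length == 0 || compare(splitStart, scanStart) >= 0
            ? splitStart : scanStart;
    }

    static byte[] intersectStop(byte[] scanStop, byte[] splitStop) {
        if (splitStop.length == 0) return scanStop;
        // Take the earlier of the two stop keys.
        return scanStop.length == 0 || compare(splitStop, scanStop) <= 0
            ? splitStop : scanStop;
    }
}
```

Each map task then scans only the intersection of its split with the predicate's range, instead of re-reading the whole range.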



[jira] [Updated] (HIVE-4113) Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated HIVE-4113:
---

Status: Open  (was: Patch Available)

 Optimize select count(1) with RCFile and Orc
 

 Key: HIVE-4113
 URL: https://issues.apache.org/jira/browse/HIVE-4113
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Gopal V
Assignee: Yin Huai
 Fix For: 0.12.0

 Attachments: HIVE-4113-0.patch, HIVE-4113.1.patch, HIVE-4113.2.patch, 
 HIVE-4113.3.patch, HIVE-4113.4.patch, HIVE-4113.5.patch, HIVE-4113.6.patch, 
 HIVE-4113.7.patch, HIVE-4113.8.patch, HIVE-4113.9.patch, HIVE-4113.patch, 
 HIVE-4113.patch


 select count(1) loads up every column & every row when used with RCFile.
 select count(1) from store_sales_10_rc gives
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 31.73 sec   HDFS Read: 234914410 
 HDFS Write: 8 SUCCESS
 {code}
 Whereas select count(ss_sold_date_sk) from store_sales_10_rc; reads far less:
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 29.75 sec   HDFS Read: 28145994 
 HDFS Write: 8 SUCCESS
 {code}
 That is 11% of the data size read by the COUNT(1).
 This was tracked down to the following code in RCFile.java:
 {code}
   } else {
     // TODO: if no column name is specified, e.g., in select count(1) from tt;
     // skip all columns; this should be distinguished from the case:
     // select * from tt;
     for (int i = 0; i < skippedColIDs.length; i++) {
       skippedColIDs[i] = false;
     }
 {code}



[jira] [Updated] (HIVE-4113) Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated HIVE-4113:
---

Attachment: HIVE-4113.10.patch

.10 patch is the correct update.

 Optimize select count(1) with RCFile and Orc
 

 Key: HIVE-4113
 URL: https://issues.apache.org/jira/browse/HIVE-4113
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Gopal V
Assignee: Yin Huai
 Fix For: 0.12.0

 Attachments: HIVE-4113-0.patch, HIVE-4113.10.patch, 
 HIVE-4113.1.patch, HIVE-4113.2.patch, HIVE-4113.3.patch, HIVE-4113.4.patch, 
 HIVE-4113.5.patch, HIVE-4113.6.patch, HIVE-4113.7.patch, HIVE-4113.8.patch, 
 HIVE-4113.9.patch, HIVE-4113.patch, HIVE-4113.patch


 select count(1) loads up every column & every row when used with RCFile.
 select count(1) from store_sales_10_rc gives
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 31.73 sec   HDFS Read: 234914410 
 HDFS Write: 8 SUCCESS
 {code}
 Whereas select count(ss_sold_date_sk) from store_sales_10_rc; reads far less:
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 29.75 sec   HDFS Read: 28145994 
 HDFS Write: 8 SUCCESS
 {code}
 That is 11% of the data size read by the COUNT(1).
 This was tracked down to the following code in RCFile.java:
 {code}
   } else {
     // TODO: if no column name is specified, e.g., in select count(1) from tt;
     // skip all columns; this should be distinguished from the case:
     // select * from tt;
     for (int i = 0; i < skippedColIDs.length; i++) {
       skippedColIDs[i] = false;
     }
 {code}



Re: Review Request 14221: HIVE-4113: Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Yin Huai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14221/
---

(Updated Sept. 21, 2013, 10:47 p.m.)


Review request for hive.


Bugs: HIVE-4113
https://issues.apache.org/jira/browse/HIVE-4113


Repository: hive-git


Description
---

Modifies ColumnProjectionUtils such that there are two flags: one for the column 
ids and one indicating whether all columns should be read. Additionally, the 
patch updates all locations that used the old method of an empty string 
indicating that all columns should be read.

The automatic formatter generated by ant eclipse-files is fairly aggressive, so 
there is some unrelated import/whitespace cleanup.

This one is based on https://reviews.apache.org/r/11770/ and has been rebased 
to the latest trunk.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 381bcbe 
  conf/hive-default.xml.template 6531e55 
  contrib/src/test/results/clientpositive/serde_typedbytes.q.out 8c22399 
  contrib/src/test/results/clientpositive/serde_typedbytes2.q.out 1e4881f 
  contrib/src/test/results/clientpositive/serde_typedbytes3.q.out 0186983 
  contrib/src/test/results/clientpositive/serde_typedbytes5.q.out ece8e43 
  contrib/src/test/results/clientpositive/udf_row_sequence.q.out f745840 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
 766056b 
  hbase-handler/src/test/results/positive/hbase_queries.q.out 0bd55f6 
  
hbase-handler/src/test/results/positive/hbase_single_sourced_multi_insert.q.out 
92e8175 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatBaseInputFormat.java
 553446a 
  
hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatPartitioned.java
 577e06d 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoader.java
 d38bb8d 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 31a52ba 
  ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java df2ccf1 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java ab0494e 
  ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java a5a8943 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 0f29a0e 
  ql/src/java/org/apache/hadoop/hive/ql/io/BucketizedHiveInputFormat.java 
49145b7 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java cccdc1b 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java a83f223 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileRecordReader.java 9521060 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java 50c5093 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java 
ed14e82 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java 1ede6d7 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java 52e9e6b 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 2259977 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java b97d869 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
 0550bf6 
  ql/src/test/org/apache/hadoop/hive/ql/io/PerformTestRCFileAndSeqFile.java 
fb9fca1 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestRCFile.java dd1276d 
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
83c5c38 
  ql/src/test/queries/clientpositive/binary_table_colserde.q eadf07d 
  ql/src/test/results/clientpositive/auto_join0.q.out a75c01c 
  ql/src/test/results/clientpositive/auto_join15.q.out 6fb0ea6 
  ql/src/test/results/clientpositive/auto_join18.q.out 945af67 
  ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out 500df42 
  ql/src/test/results/clientpositive/auto_join20.q.out 6dd8ff7 
  ql/src/test/results/clientpositive/auto_join27.q.out aac778c 
  ql/src/test/results/clientpositive/auto_join30.q.out b5b313c 
  ql/src/test/results/clientpositive/auto_join31.q.out ee8204f 
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out 53ce112 
  ql/src/test/results/clientpositive/auto_smb_mapjoin_14.q.out 2bc99fa 
  ql/src/test/results/clientpositive/auto_sortmerge_join_10.q.out 0cd7734 
  ql/src/test/results/clientpositive/auto_sortmerge_join_6.q.out 1274b76 
  ql/src/test/results/clientpositive/auto_sortmerge_join_9.q.out 96fcd2b 
  ql/src/test/results/clientpositive/binary_output_format.q.out ad245f2 
  ql/src/test/results/clientpositive/binary_table_colserde.q.out 69a6c6e 
  ql/src/test/results/clientpositive/bucket5.q.out 41e4a3e 
  ql/src/test/results/clientpositive/bucketizedhiveinputformat.q.out e4beebc 
  ql/src/test/results/clientpositive/bucketmapjoin1.q.out 122d061 
  ql/src/test/results/clientpositive/bucketmapjoin2.q.out 955b8a2 
  ql/src/test/results/clientpositive/bucketmapjoin3.q.out e0b86ae 
  ql/src/test/results/clientpositive/bucketmapjoin4.q.out bed6a0a 
  

[jira] [Commented] (HIVE-4113) Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Yin Huai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773920#comment-13773920
 ] 

Yin Huai commented on HIVE-4113:


Thanks Ashutosh for updating golden files :)

 Optimize select count(1) with RCFile and Orc
 

 Key: HIVE-4113
 URL: https://issues.apache.org/jira/browse/HIVE-4113
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Gopal V
Assignee: Yin Huai
 Fix For: 0.12.0

 Attachments: HIVE-4113-0.patch, HIVE-4113.10.patch, 
 HIVE-4113.1.patch, HIVE-4113.2.patch, HIVE-4113.3.patch, HIVE-4113.4.patch, 
 HIVE-4113.5.patch, HIVE-4113.6.patch, HIVE-4113.7.patch, HIVE-4113.8.patch, 
 HIVE-4113.9.patch, HIVE-4113.patch, HIVE-4113.patch


 select count(1) loads up every column & every row when used with RCFile.
 select count(1) from store_sales_10_rc gives
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 31.73 sec   HDFS Read: 234914410 
 HDFS Write: 8 SUCCESS
 {code}
 Whereas select count(ss_sold_date_sk) from store_sales_10_rc; reads far less:
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 29.75 sec   HDFS Read: 28145994 
 HDFS Write: 8 SUCCESS
 {code}
 That is 11% of the data size read by the COUNT(1).
 This was tracked down to the following code in RCFile.java:
 {code}
   } else {
     // TODO: if no column name is specified, e.g., in select count(1) from tt;
     // skip all columns; this should be distinguished from the case:
     // select * from tt;
     for (int i = 0; i < skippedColIDs.length; i++) {
       skippedColIDs[i] = false;
     }
 {code}



[jira] [Commented] (HIVE-5318) Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10

2013-09-21 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773932#comment-13773932
 ] 

Xuefu Zhang commented on HIVE-5318:
---

[~ashutoshc] A test was planned when the patch was submitted.

As to the nullness of these lists, your assumption may well be correct, but 
that's not what I see in the code.

Here is the caller code in ImportSemanticAnalyzer.

{code}
tblDesc = new CreateTableDesc(
    table.getTableName(),
    false, // isExternal: set to false here, can be overwritten by the IMPORT stmt
    table.getSd().getCols(),
    table.getPartitionKeys(),
    table.getSd().getBucketCols(),
    table.getSd().getSortCols(),
    table.getSd().getNumBuckets(),
    null, null, null, null, null, // these 5 delims passed as serde params
    null, // comment passed as table params
    table.getSd().getInputFormat(),
    table.getSd().getOutputFormat(),
    null, // location: set to null here, can be overwritten by the IMPORT stmt
    table.getSd().getSerdeInfo().getSerializationLib(),
    null, // storagehandler passed as table params
    table.getSd().getSerdeInfo().getParameters(),
    table.getParameters(), false,
    (null == table.getSd().getSkewedInfo()) ? null :
        table.getSd().getSkewedInfo().getSkewedColNames(),
    (null == table.getSd().getSkewedInfo()) ? null :
        table.getSd().getSkewedInfo().getSkewedColValues());
{code}

From the snippet we can see that it's possible for the last two lists to be 
null. Also, the partition columns were passed from the thrift table object, 
for which null is clearly a valid value.

For reference, this is the metadata for an exported, simple table with two 
columns and two rows of data:

{code}
{partitions:[],table:{\1\:{\str\:\j1_41\},\2\:{\str\:\default\},\3\:{\str\:\johndee\},\4\:{\i32\:1371900915},\5\:{\i32\:0},\6\:{\i32\:0},\7\:{\rec\:{\1\:{\lst\:[\rec\,2,{\1\:{\str\:\a\},\2\:{\str\:\string\}},{\1\:{\str\:\b\},\2\:{\str\:\int\}}]},\2\:{\str\:\hdfs://hivebase01:8020/user/hive/warehouse/j1_41\},\3\:{\str\:\org.apache.hadoop.mapred.TextInputFormat\},\4\:{\str\:\org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat\},\5\:{\tf\:0},\6\:{\i32\:-1},\7\:{\rec\:{\2\:{\str\:\org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe\},\3\:{\map\:[\str\,\str\,2,{\serialization.format\:\,\,\field.delim\:\,\}]}}},\8\:{\lst\:[\str\,0]},\9\:{\lst\:[\rec\,0]},\10\:{\map\:[\str\,\str\,0,{}]}}},\8\:{\lst\:[\rec\,0]},\9\:{\map\:[\str\,\str\,1,{\transient_lastDdlTime\:\1371900931\}]},\12\:{\str\:\MANAGED_TABLE\}},version:0.1}
{code}

This piece of meta data contains no partition columns, or skewedkey/values, etc.

Could you clarify whether you meant that the list should not be null but 
should have zero elements? For some unknown reason, the code doesn't reflect 
that either; for instance, Utilities.getFieldSchemaString() has code to handle 
a null list of partition columns.

Any further insight is appreciated.
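One defensive pattern for the null-vs-empty ambiguity discussed above is to normalize at the consumption boundary: treat a null list from the thrift-deserialized object as an empty one. An illustrative helper only, not Hive's actual code:

```java
import java.util.Collections;
import java.util.List;

// Illustrative null-safe guard for list-valued fields (partition keys,
// skewed column names/values, ...) that may arrive as null from a
// thrift-deserialized table object.
class NullSafe {
    static <T> List<T> orEmpty(List<T> xs) {
        if (xs == null) {
            return Collections.emptyList();
        }
        return xs;
    }
}
```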


 Import Throws Error when Importing from a table export Hive 0.9 to Hive 0.10
 

 Key: HIVE-5318
 URL: https://issues.apache.org/jira/browse/HIVE-5318
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.9.0, 0.10.0
Reporter: Brad Ruderman
Assignee: Xuefu Zhang
Priority: Critical
 Attachments: HIVE-5318.patch


 When exporting hive tables in Hive 0.9 using the hive command EXPORT table 
 TO 'hdfs_path', then importing into another Hive 0.10 instance using IMPORT 
 FROM 'hdfs_path', hive throws this error:
 13/09/18 13:14:02 ERROR ql.Driver: FAILED: SemanticException Exception while 
 processing
 org.apache.hadoop.hive.ql.parse.SemanticException: Exception while processing
   at 
 org.apache.hadoop.hive.ql.parse.ImportSemanticAnalyzer.analyzeInternal(ImportSemanticAnalyzer.java:277)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:347)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:706)

Re: Review Request 14221: HIVE-4113: Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Yin Huai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14221/
---

(Updated Sept. 22, 2013, 1:49 a.m.)


Review request for hive.


Changes
---

fix skewjoin.q


Bugs: HIVE-4113
https://issues.apache.org/jira/browse/HIVE-4113


Repository: hive-git


Description
---

Modifies ColumnProjectionUtils such that there are two flags: one for the column 
ids and one indicating whether all columns should be read. Additionally, the 
patch updates all locations that used the old method of an empty string 
indicating that all columns should be read.

The automatic formatter generated by ant eclipse-files is fairly aggressive, so 
there is some unrelated import/whitespace cleanup.

This one is based on https://reviews.apache.org/r/11770/ and has been rebased 
to the latest trunk.


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 381bcbe 
  conf/hive-default.xml.template 6531e55 
  contrib/src/test/results/clientpositive/serde_typedbytes.q.out 8c22399 
  contrib/src/test/results/clientpositive/serde_typedbytes2.q.out 1e4881f 
  contrib/src/test/results/clientpositive/serde_typedbytes3.q.out 0186983 
  contrib/src/test/results/clientpositive/serde_typedbytes5.q.out ece8e43 
  contrib/src/test/results/clientpositive/udf_row_sequence.q.out f745840 
  
hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
 766056b 
  hbase-handler/src/test/results/positive/hbase_queries.q.out 0bd55f6 
  
hbase-handler/src/test/results/positive/hbase_single_sourced_multi_insert.q.out 
92e8175 
  
hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatBaseInputFormat.java
 553446a 
  
hcatalog/core/src/test/java/org/apache/hive/hcatalog/mapreduce/TestHCatPartitioned.java
 577e06d 
  
hcatalog/hcatalog-pig-adapter/src/test/java/org/apache/hive/hcatalog/pig/TestHCatLoader.java
 d38bb8d 
  ql/src/java/org/apache/hadoop/hive/ql/Driver.java 31a52ba 
  ql/src/java/org/apache/hadoop/hive/ql/exec/FetchTask.java df2ccf1 
  ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java ab0494e 
  ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java a5a8943 
  ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 0f29a0e 
  ql/src/java/org/apache/hadoop/hive/ql/io/BucketizedHiveInputFormat.java 
49145b7 
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java cccdc1b 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java a83f223 
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFileRecordReader.java 9521060 
  ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java 50c5093 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java 
ed14e82 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java 1ede6d7 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java 52e9e6b 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java 2259977 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java b97d869 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
 48587ba 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/MetadataOnlyOptimizer.java
 0550bf6 
  ql/src/test/org/apache/hadoop/hive/ql/io/PerformTestRCFileAndSeqFile.java 
fb9fca1 
  ql/src/test/org/apache/hadoop/hive/ql/io/TestRCFile.java dd1276d 
  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java 
83c5c38 
  ql/src/test/queries/clientpositive/binary_table_colserde.q eadf07d 
  ql/src/test/results/clientpositive/auto_join0.q.out a75c01c 
  ql/src/test/results/clientpositive/auto_join15.q.out 6fb0ea6 
  ql/src/test/results/clientpositive/auto_join18.q.out 945af67 
  ql/src/test/results/clientpositive/auto_join18_multi_distinct.q.out 500df42 
  ql/src/test/results/clientpositive/auto_join20.q.out 6dd8ff7 
  ql/src/test/results/clientpositive/auto_join27.q.out aac778c 
  ql/src/test/results/clientpositive/auto_join30.q.out b5b313c 
  ql/src/test/results/clientpositive/auto_join31.q.out ee8204f 
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out 53ce112 
  ql/src/test/results/clientpositive/auto_smb_mapjoin_14.q.out 2bc99fa 
  ql/src/test/results/clientpositive/auto_sortmerge_join_10.q.out 0cd7734 
  ql/src/test/results/clientpositive/auto_sortmerge_join_6.q.out 1274b76 
  ql/src/test/results/clientpositive/auto_sortmerge_join_9.q.out 96fcd2b 
  ql/src/test/results/clientpositive/binary_output_format.q.out ad245f2 
  ql/src/test/results/clientpositive/binary_table_colserde.q.out 69a6c6e 
  ql/src/test/results/clientpositive/bucket5.q.out 41e4a3e 
  ql/src/test/results/clientpositive/bucketizedhiveinputformat.q.out e4beebc 
  ql/src/test/results/clientpositive/bucketmapjoin1.q.out 122d061 
  ql/src/test/results/clientpositive/bucketmapjoin2.q.out 955b8a2 
  

[jira] [Commented] (HIVE-4910) Hadoop 2 archives broken

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773947#comment-13773947
 ] 

Hudson commented on HIVE-4910:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #110 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/110/])
HIVE-4910 : Hadoop 2 archives broken (Vikram Dixit K via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525297)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/test/queries/clientpositive/archive_excludeHadoop20.q
* /hive/trunk/ql/src/test/results/clientpositive/archive_excludeHadoop20.q.out


 Hadoop 2 archives broken
 

 Key: HIVE-4910
 URL: https://issues.apache.org/jira/browse/HIVE-4910
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Tests
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4910.patch, HIVE-4910.patch


 Hadoop 2 archive tests are broken. The issue stems from the fact that the har 
 URI construction does not include a port in the URI when unit tests are run. 
 This means that an invalid URI is constructed, resulting in failures.



[jira] [Commented] (HIVE-4732) Reduce or eliminate the expensive Schema equals() check for AvroSerde

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773949#comment-13773949
 ] 

Hudson commented on HIVE-4732:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #110 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/110/])
HIVE-4732 : Reduce or eliminate the expensive Schema equals() check for 
AvroSerde (Mohammad Kamrul Islam via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525290)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
* /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java


 Reduce or eliminate the expensive Schema equals() check for AvroSerde
 -

 Key: HIVE-4732
 URL: https://issues.apache.org/jira/browse/HIVE-4732
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Reporter: Mark Wagner
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: HIVE-4732.1.patch, HIVE-4732.4.patch, HIVE-4732.5.patch, 
 HIVE-4732.6.patch, HIVE-4732.7.patch, HIVE-4732.v1.patch, HIVE-4732.v4.patch


 The AvroSerde spends a significant amount of time checking schema equality. 
 Changing to compare hashcodes (which can be computed once then reused) will 
 improve performance.
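The optimization described above amounts to memoizing an expensive equality check behind a cheap precomputed hash: compare cached hash codes first, and only fall back to a full equals() when the hashes match. A generic sketch of that pattern under those assumptions; the stand-in class below is hypothetical, not the actual AvroSerde code:

```java
// Sketch: cache an expensive-to-compute hash once, then use it to
// short-circuit most inequality checks before a full equals().
class CachedSchema {
    private final String schemaJson;   // stand-in for an Avro Schema
    private final int hash;            // computed once, reused

    CachedSchema(String schemaJson) {
        this.schemaJson = schemaJson;
        this.hash = schemaJson.hashCode();
    }

    boolean sameAs(CachedSchema other) {
        // Different hashes -> definitely different; skip deep equals().
        if (hash != other.hash) {
            return false;
        }
        // Hashes match -> confirm with the full (expensive) comparison.
        return schemaJson.equals(other.schemaJson);
    }
}
```

The deep comparison still runs on a hash collision, so correctness is preserved; only the common unequal case gets cheaper.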



[jira] [Commented] (HIVE-5306) Use new GenericUDF instead of basic UDF for UDFAbs class

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773946#comment-13773946
 ] 

Hudson commented on HIVE-5306:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #110 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/110/])
HIVE-5306 : Use new GenericUDF instead of basic UDF for UDFAbs class (Mohammad 
Kamrul Islam via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525294)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFAbs.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFAbs.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestGenericUDFAbs.java


 Use new GenericUDF instead of basic UDF for UDFAbs class
 

 Key: HIVE-5306
 URL: https://issues.apache.org/jira/browse/HIVE-5306
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: HIVE-5306.1.patch, HIVE-5306.2.patch, HIVE-5306.3.patch, 
 HIVE-5306.4.patch, HIVE-5306.5.patch, HIVE-5306.6.patch


 GenericUDF is the latest and recommended base class for UDFs.
 This JIRA changes the current UDFAbs class to extend GenericUDF.
 The general benefit of GenericUDF is described in its class comments:
 {noformat}
 The GenericUDF are superior to normal UDFs in the following ways: 1. It can
 accept arguments of complex types, and return complex types. 2. It can accept
 variable length of arguments. 3. It can accept an infinite number of function
 signatures - for example, it's easy to write a GenericUDF that accepts
 array<int>, array<array<int>> and so on (arbitrary levels of nesting). 4. It
 can do short-circuit evaluations using DeferredObject.
 {noformat}
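Point 4 above (short-circuit evaluation via deferred objects) can be illustrated with a small sketch. `LazyArg` is a hypothetical stand-in for Hive's `DeferredObject`; the real `GenericUDF` API is not reproduced here.

```java
// Sketch of short-circuit evaluation with deferred arguments: each argument
// is wrapped so its value is computed only when actually requested.
// LazyArg is illustrative, not Hive's DeferredObject interface.
public class ShortCircuit {
    interface LazyArg { Object get(); }   // value computed only on demand

    // Return the first non-null argument; the second argument is never
    // evaluated when the first suffices, so expensive work can be skipped.
    static Object coalesce(LazyArg first, LazyArg second) {
        Object v = first.get();
        return v != null ? v : second.get();
    }
}
```

A plain UDF would receive both arguments already materialized; the deferred style lets the function decide which arguments are worth computing at all.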



[jira] [Commented] (HIVE-3764) Support metastore version consistency check

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773948#comment-13773948
 ] 

Hudson commented on HIVE-3764:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #110 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/110/])
HIVE-3764 : Support metastore version consistency check (Prasad Mujumdar via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525288)
* /hive/trunk/build-common.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/metastore/scripts/upgrade/derby/014-HIVE-3764.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/hive-schema-0.12.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/hive-schema-0.13.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.10.0-to-0.11.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.11.0-to-0.12.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.12.0-to-0.13.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade.order.derby
* /hive/trunk/metastore/scripts/upgrade/mysql/014-HIVE-3764.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/hive-schema-0.12.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/hive-schema-0.13.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade-0.11.0-to-0.12.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade.order.mysql
* /hive/trunk/metastore/scripts/upgrade/oracle/014-HIVE-3764.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.12.0.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.13.0.oracle.sql
* 
/hive/trunk/metastore/scripts/upgrade/oracle/upgrade-0.10.0-to-0.11.0.mysql.sql
* 
/hive/trunk/metastore/scripts/upgrade/oracle/upgrade-0.11.0-to-0.12.0.oracle.sql
* 
/hive/trunk/metastore/scripts/upgrade/oracle/upgrade-0.12.0-to-0.13.0.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/upgrade.order.oracle
* /hive/trunk/metastore/scripts/upgrade/postgres/014-HIVE-3764.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/hive-schema-0.12.0.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/hive-schema-0.13.0.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/upgrade-0.11.0-to-0.12.0.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/upgrade-0.12.0-to-0.13.0.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/upgrade.order.postgres
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreSchemaInfo.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java
* 
/hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MVersionTable.java
* /hive/trunk/metastore/src/model/package.jdo
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


 Support metastore version consistency check
 ---

 Key: HIVE-3764
 URL: https://issues.apache.org/jira/browse/HIVE-3764
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-3764-12.3.patch, HIVE-3764.1.patch, 
 HIVE-3764.2.patch, HIVE-3764-trunk.3.patch


 Today there's no version/compatibility information stored in the Hive 
 metastore. Also, the DataNucleus configuration property to automatically 
 create missing tables is enabled by default. If you happen to start an older 
 or newer Hive, or don't run the correct upgrade scripts during migration, the 
 metastore can end up corrupted. The autoCreate schema option is not always 
 sufficient to upgrade the metastore when migrating to a newer release, and it 
 is not supported with all databases. Besides, the migration often involves 
 altering existing tables, changing or moving data, etc.
 Hence it's very useful to have a consistency check to ensure that Hive is 
 using the correct metastore and, for production systems, that the schema is 
 not modified automatically by running Hive.
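The consistency check described above amounts to comparing a recorded schema version against the version the running build expects, and refusing to start on a mismatch. The sketch below is illustrative; the names are assumptions, not Hive's actual MetaStoreSchemaInfo API.

```java
// Minimal sketch of a startup schema-version consistency check: fail fast
// when the version recorded in the metastore does not match what this build
// expects. Method and class names are illustrative, not Hive's real API.
public class SchemaVersionCheck {
    public static void verify(String recorded, String expected) {
        if (!expected.equals(recorded)) {
            throw new IllegalStateException(
                "Metastore schema version " + recorded
                + " does not match expected version " + expected
                + "; run the upgrade scripts instead of relying on autoCreate");
        }
    }
}
```

Failing fast here is the point: a hard error at startup is recoverable, while silently letting autoCreate patch a half-migrated schema is not.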


[jira] [Commented] (HIVE-4910) Hadoop 2 archives broken

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773958#comment-13773958
 ] 

Hudson commented on HIVE-4910:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #178 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/178/])
HIVE-4910 : Hadoop 2 archives broken (Vikram Dixit K via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525297)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/test/queries/clientpositive/archive_excludeHadoop20.q
* /hive/trunk/ql/src/test/results/clientpositive/archive_excludeHadoop20.q.out


 Hadoop 2 archives broken
 

 Key: HIVE-4910
 URL: https://issues.apache.org/jira/browse/HIVE-4910
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Tests
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4910.patch, HIVE-4910.patch


 Hadoop 2 archive tests are broken. The issue stems from the fact that the har 
 URI construction does not actually have a port available when unit tests are 
 run. This means that an invalid URI is constructed, resulting in failures.
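The failure mode can be demonstrated with `java.net.URI`: the convention for "no port" is `-1`, and code that appends the port unconditionally produces an invalid authority like `host:-1`, while passing `-1` through the multi-argument URI constructor correctly omits the port. This is a sketch of the general pitfall, not the actual DDLTask fix.

```java
import java.net.URI;
import java.net.URISyntaxException;

// Demonstrates building a har URI when the filesystem reports no port (-1):
// the URI constructor omits the port from the authority instead of emitting
// an invalid "host:-1". Illustrative only, not the actual Hive patch.
public class HarUriDemo {
    public static String build(String host, int port, String path) {
        try {
            // port == -1 means "no port"; URI leaves it out of the authority
            return new URI("har", null, host, port, path, null, null).toString();
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

With a real port, `build("nn", 8020, "/a.har")` yields `har://nn:8020/a.har`; with `-1` the port segment simply disappears.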



[jira] [Commented] (HIVE-5306) Use new GenericUDF instead of basic UDF for UDFAbs class

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773957#comment-13773957
 ] 

Hudson commented on HIVE-5306:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #178 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/178/])
HIVE-5306 : Use new GenericUDF instead of basic UDF for UDFAbs class (Mohammad 
Kamrul Islam via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525294)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFAbs.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFAbs.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestGenericUDFAbs.java


 Use new GenericUDF instead of basic UDF for UDFAbs class
 

 Key: HIVE-5306
 URL: https://issues.apache.org/jira/browse/HIVE-5306
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: HIVE-5306.1.patch, HIVE-5306.2.patch, HIVE-5306.3.patch, 
 HIVE-5306.4.patch, HIVE-5306.5.patch, HIVE-5306.6.patch


 GenericUDF is the latest and recommended base class for UDFs.
 This JIRA changes the current UDFAbs class to extend GenericUDF.
 The general benefit of GenericUDF is described in its class comments:
 {noformat}
 The GenericUDF are superior to normal UDFs in the following ways: 1. It can
 accept arguments of complex types, and return complex types. 2. It can accept
 variable length of arguments. 3. It can accept an infinite number of function
 signatures - for example, it's easy to write a GenericUDF that accepts
 array<int>, array<array<int>> and so on (arbitrary levels of nesting). 4. It
 can do short-circuit evaluations using DeferredObject.
 {noformat}



[jira] [Commented] (HIVE-4732) Reduce or eliminate the expensive Schema equals() check for AvroSerde

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773959#comment-13773959
 ] 

Hudson commented on HIVE-4732:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #178 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/178/])
HIVE-4732 : Reduce or eliminate the expensive Schema equals() check for 
AvroSerde (Mohammad Kamrul Islam via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525290)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
* /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java


 Reduce or eliminate the expensive Schema equals() check for AvroSerde
 -

 Key: HIVE-4732
 URL: https://issues.apache.org/jira/browse/HIVE-4732
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Reporter: Mark Wagner
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: HIVE-4732.1.patch, HIVE-4732.4.patch, HIVE-4732.5.patch, 
 HIVE-4732.6.patch, HIVE-4732.7.patch, HIVE-4732.v1.patch, HIVE-4732.v4.patch


 The AvroSerde spends a significant amount of time checking schema equality. 
 Changing it to compare hash codes (which can be computed once and then 
 reused) will improve performance.



[jira] [Commented] (HIVE-5306) Use new GenericUDF instead of basic UDF for UDFAbs class

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773961#comment-13773961
 ] 

Hudson commented on HIVE-5306:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #449 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/449/])
HIVE-5306 : Use new GenericUDF instead of basic UDF for UDFAbs class (Mohammad 
Kamrul Islam via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525294)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFAbs.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFAbs.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/udf/TestGenericUDFAbs.java


 Use new GenericUDF instead of basic UDF for UDFAbs class
 

 Key: HIVE-5306
 URL: https://issues.apache.org/jira/browse/HIVE-5306
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: HIVE-5306.1.patch, HIVE-5306.2.patch, HIVE-5306.3.patch, 
 HIVE-5306.4.patch, HIVE-5306.5.patch, HIVE-5306.6.patch


 GenericUDF is the latest and recommended base class for UDFs.
 This JIRA changes the current UDFAbs class to extend GenericUDF.
 The general benefit of GenericUDF is described in its class comments:
 {noformat}
 The GenericUDF are superior to normal UDFs in the following ways: 1. It can
 accept arguments of complex types, and return complex types. 2. It can accept
 variable length of arguments. 3. It can accept an infinite number of function
 signatures - for example, it's easy to write a GenericUDF that accepts
 array<int>, array<array<int>> and so on (arbitrary levels of nesting). 4. It
 can do short-circuit evaluations using DeferredObject.
 {noformat}



[jira] [Commented] (HIVE-4910) Hadoop 2 archives broken

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773962#comment-13773962
 ] 

Hudson commented on HIVE-4910:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #449 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/449/])
HIVE-4910 : Hadoop 2 archives broken (Vikram Dixit K via Ashutosh Chauhan) 
(hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525297)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/test/queries/clientpositive/archive_excludeHadoop20.q
* /hive/trunk/ql/src/test/results/clientpositive/archive_excludeHadoop20.q.out


 Hadoop 2 archives broken
 

 Key: HIVE-4910
 URL: https://issues.apache.org/jira/browse/HIVE-4910
 Project: Hive
  Issue Type: Bug
  Components: Query Processor, Tests
Affects Versions: 0.11.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-4910.patch, HIVE-4910.patch


 Hadoop 2 archive tests are broken. The issue stems from the fact that the har 
 URI construction does not actually have a port available when unit tests are 
 run. This means that an invalid URI is constructed, resulting in failures.



[jira] [Commented] (HIVE-4732) Reduce or eliminate the expensive Schema equals() check for AvroSerde

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773964#comment-13773964
 ] 

Hudson commented on HIVE-4732:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #449 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/449/])
HIVE-4732 : Reduce or eliminate the expensive Schema equals() check for 
AvroSerde (Mohammad Kamrul Islam via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525290)
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
* /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java


 Reduce or eliminate the expensive Schema equals() check for AvroSerde
 -

 Key: HIVE-4732
 URL: https://issues.apache.org/jira/browse/HIVE-4732
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Reporter: Mark Wagner
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: HIVE-4732.1.patch, HIVE-4732.4.patch, HIVE-4732.5.patch, 
 HIVE-4732.6.patch, HIVE-4732.7.patch, HIVE-4732.v1.patch, HIVE-4732.v4.patch


 The AvroSerde spends a significant amount of time checking schema equality. 
 Changing it to compare hash codes (which can be computed once and then 
 reused) will improve performance.



[jira] [Commented] (HIVE-3764) Support metastore version consistency check

2013-09-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773963#comment-13773963
 ] 

Hudson commented on HIVE-3764:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #449 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/449/])
HIVE-3764 : Support metastore version consistency check (Prasad Mujumdar via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1525288)
* /hive/trunk/build-common.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/metastore/scripts/upgrade/derby/014-HIVE-3764.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/hive-schema-0.12.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/hive-schema-0.13.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.10.0-to-0.11.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.11.0-to-0.12.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.12.0-to-0.13.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade.order.derby
* /hive/trunk/metastore/scripts/upgrade/mysql/014-HIVE-3764.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/hive-schema-0.12.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/hive-schema-0.13.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade-0.11.0-to-0.12.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade-0.12.0-to-0.13.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade.order.mysql
* /hive/trunk/metastore/scripts/upgrade/oracle/014-HIVE-3764.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.12.0.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.13.0.oracle.sql
* 
/hive/trunk/metastore/scripts/upgrade/oracle/upgrade-0.10.0-to-0.11.0.mysql.sql
* 
/hive/trunk/metastore/scripts/upgrade/oracle/upgrade-0.11.0-to-0.12.0.oracle.sql
* 
/hive/trunk/metastore/scripts/upgrade/oracle/upgrade-0.12.0-to-0.13.0.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/upgrade.order.oracle
* /hive/trunk/metastore/scripts/upgrade/postgres/014-HIVE-3764.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/hive-schema-0.12.0.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/hive-schema-0.13.0.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/upgrade-0.11.0-to-0.12.0.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/upgrade-0.12.0-to-0.13.0.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/upgrade.order.postgres
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreSchemaInfo.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java
* 
/hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MVersionTable.java
* /hive/trunk/metastore/src/model/package.jdo
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreControlledCommit.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetastoreVersion.java


 Support metastore version consistency check
 ---

 Key: HIVE-3764
 URL: https://issues.apache.org/jira/browse/HIVE-3764
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.8.0, 0.9.0, 0.10.0, 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.13.0

 Attachments: HIVE-3764-12.3.patch, HIVE-3764.1.patch, 
 HIVE-3764.2.patch, HIVE-3764-trunk.3.patch


 Today there's no version/compatibility information stored in the Hive 
 metastore. Also, the DataNucleus configuration property to automatically 
 create missing tables is enabled by default. If you happen to start an older 
 or newer Hive, or don't run the correct upgrade scripts during migration, the 
 metastore can end up corrupted. The autoCreate schema option is not always 
 sufficient to upgrade the metastore when migrating to a newer release, and it 
 is not supported with all databases. Besides, the migration often involves 
 altering existing tables, changing or moving data, etc.
 Hence it's very useful to have a consistency check to ensure that Hive is 
 using the correct metastore and, for production systems, that the schema is 
 not modified automatically by running Hive.



[jira] [Commented] (HIVE-4113) Optimize select count(1) with RCFile and Orc

2013-09-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13773974#comment-13773974
 ] 

Hive QA commented on HIVE-4113:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12604421/HIVE-4113.11.patch

{color:green}SUCCESS:{color} +1 3143 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/856/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/856/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 Optimize select count(1) with RCFile and Orc
 

 Key: HIVE-4113
 URL: https://issues.apache.org/jira/browse/HIVE-4113
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Reporter: Gopal V
Assignee: Yin Huai
 Fix For: 0.12.0

 Attachments: HIVE-4113-0.patch, HIVE-4113.10.patch, 
 HIVE-4113.11.patch, HIVE-4113.1.patch, HIVE-4113.2.patch, HIVE-4113.3.patch, 
 HIVE-4113.4.patch, HIVE-4113.5.patch, HIVE-4113.6.patch, HIVE-4113.7.patch, 
 HIVE-4113.8.patch, HIVE-4113.9.patch, HIVE-4113.patch, HIVE-4113.patch


 select count(1) loads up every column & every row when used with RCFile.
 select count(1) from store_sales_10_rc gives
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 31.73 sec   HDFS Read: 234914410 
 HDFS Write: 8 SUCCESS
 {code}
 Whereas select count(ss_sold_date_sk) from store_sales_10_rc; reads far less:
 {code}
 Job 0: Map: 5  Reduce: 1   Cumulative CPU: 29.75 sec   HDFS Read: 28145994 
 HDFS Write: 8 SUCCESS
 {code}
 which is about 12% of the data size read by the count(1).
 This was tracked down to the following code in RCFile.java
 {code}
 } else {
   // TODO: if no column name is specified e.g., in select count(1) from tt;
   // skip all columns, this should be distinguished from the case:
   // select * from tt;
   for (int i = 0; i < skippedColIDs.length; i++) {
     skippedColIDs[i] = false;
   }
 {code}
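The fix direction implied by the TODO can be sketched as a small pruning helper: when the query references no columns (as in `count(1)`), every entry of the skip array should stay true rather than being reset to false. This mirrors the `skippedColIDs` flag array in the snippet; the surrounding reader logic is omitted and the helper name is illustrative.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of column pruning for a columnar reader: default to skipping every
// column, then un-skip only the ones the query actually references. With an
// empty reference list (count(1)), nothing is read. Illustrative only.
public class ColumnPruning {
    // true at index i means "do not read column i"
    public static boolean[] skippedColumns(int numColumns, List<Integer> referenced) {
        boolean[] skipped = new boolean[numColumns];
        Arrays.fill(skipped, true);        // default: skip everything
        for (int id : referenced) {        // keep only referenced columns
            skipped[id] = false;
        }
        return skipped;
    }
}
```

The quoted RCFile code does the opposite for the no-column case (it clears every skip flag), which is exactly why `count(1)` reads the whole table.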



[jira] [Updated] (HIVE-5301) Add a schema tool for offline metastore schema upgrade

2013-09-21 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5301:
--

Attachment: HIVE-5301.3.patch

 Add a schema tool for offline metastore schema upgrade
 --

 Key: HIVE-5301
 URL: https://issues.apache.org/jira/browse/HIVE-5301
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar
 Fix For: 0.12.0

 Attachments: HIVE-5301.1.patch, HIVE-5301.3.patch, 
 HIVE-5301-with-HIVE-3764.0.patch


 HIVE-3764 is addressing metastore version consistency.
 In addition, it would be helpful to add a tool that can leverage this version 
 information to figure out the required set of upgrade scripts and execute 
 them against the configured metastore. Now that Hive includes the Beeline 
 client, it can be used to execute the scripts.
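The core of such a tool is selecting the chain of upgrade scripts from the current version to the latest, in the order given by files like `upgrade.order.derby`. The sketch below assumes steps named `from-to-to` (matching the script names in the commit above); the class and method names are illustrative, not the actual schematool.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of upgrade-path selection: given the schema version recorded in the
// metastore and the ordered list of upgrade steps, emit the scripts that
// still need to run. Step format "X-to-Y" mirrors the upgrade.order files;
// names here are assumptions, not Hive's real schematool API.
public class UpgradePath {
    public static List<String> scriptsFrom(String current, List<String> orderedSteps) {
        List<String> needed = new ArrayList<>();
        boolean collecting = false;
        for (String step : orderedSteps) {          // e.g. "0.11.0-to-0.12.0"
            if (step.startsWith(current + "-to-")) {
                collecting = true;                  // first step leaving 'current'
            }
            if (collecting) {
                needed.add("upgrade-" + step + ".sql");
            }
        }
        return needed;
    }
}
```

Each returned script would then be executed in order through the Beeline client against the configured metastore database.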
