[jira] [Commented] (HIVE-2577) Expose the HiveConf in HiveConnection API

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547926#comment-13547926
 ] 

Hudson commented on HIVE-2577:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2577: Expose the HiveConf in HiveConnection API (Nicolas Lalevee via 
Ashutosh Chauhan) (Revision 1304068)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304068
Files : 
* /hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveConnection.java


 Expose the HiveConf in HiveConnection API
 -

 Key: HIVE-2577
 URL: https://issues.apache.org/jira/browse/HIVE-2577
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Nicolas Lalevée
Assignee: Nicolas Lalevée
 Fix For: 0.9.0

 Attachments: HIVE-2577-r1201637.patch


 When running the JDBC code in local mode, there is no way to programmatically 
 manage the Hive conf.
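To illustrate the shape of such an accessor, here is a minimal self-contained sketch; `LocalHiveConnection` and the use of plain `java.util.Properties` in place of the real `HiveConf` are assumptions made for the example, not the actual patch:

```java
import java.util.Properties;

// Hypothetical stand-in for HiveConnection; a plain Properties object
// replaces the real HiveConf so the sketch is self-contained.
class LocalHiveConnection {
    private final Properties conf = new Properties();

    // Accessor in the spirit of HIVE-2577: local-mode callers can read
    // and adjust the configuration programmatically.
    Properties getConf() {
        return conf;
    }
}

public class ConnectionConfSketch {
    public static void main(String[] args) {
        LocalHiveConnection c = new LocalHiveConnection();
        c.getConf().setProperty("hive.exec.mode.local.auto", "true");
        System.out.println(c.getConf().getProperty("hive.exec.mode.local.auto")); // prints "true"
    }
}
```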

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3149) Dynamically generated partitions deleted by Block level merge

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547927#comment-13547927
 ] 

Hudson commented on HIVE-3149:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3149 Dynamically generated partitions deleted by Block level merge
(Kevin Wilfong via namit) (Revision 1350946)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1350946
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/MergeWork.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/RCFileMergeMapper.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
* /hive/trunk/ql/src/test/queries/clientpositive/merge_dynamic_partition4.q
* /hive/trunk/ql/src/test/results/clientpositive/merge_dynamic_partition4.q.out


 Dynamically generated partitions deleted by Block level merge
 

 Key: HIVE-3149
 URL: https://issues.apache.org/jira/browse/HIVE-3149
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Blocker
 Fix For: 0.10.0

 Attachments: HIVE-3149.1.patch.txt


 When creating partitions in a table using dynamic partitions and a Block 
 level merge is executed at the end of the query, some partitions may be lost. 
  Specifically if the values of two or more dynamic partition keys end in the 
 same sequence of numbers, all but the largest will be dropped.
 I was not able to confirm it, but I suspect that if a map reduce job is 
 speculated as part of the merge, the duplicate data will not be deleted 
 either.
 E.g.
 {code}
 insert overwrite table merge_dynamic_part partition (ds = '2008-04-08', hr)
 select key, value, if(key % 2 == 0, 'a1', 'b1') as hr from srcpart_merge_dp_rc where ds = '2008-04-08';
 {code}
 In this query, if a Block level merge is executed at the end, only one of the 
 partitions ds=2008-04-08/hr=a1 and ds=2008-04-08/hr=b1 will appear in the 
 final table.



[jira] [Commented] (HIVE-895) Add SerDe for Avro serialized data

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547928#comment-13547928
 ] 

Hudson commented on HIVE-895:
-

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-895 Add SerDe for Avro serialized data (Jakob Homan via egc) (Revision 
1345420)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1345420
Files : 
* /hive/trunk/data/files/doctors.avro
* /hive/trunk/data/files/episodes.avro
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroContainerInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroContainerOutputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordWriter.java
* /hive/trunk/ql/src/test/queries/clientpositive/avro_change_schema.q
* /hive/trunk/ql/src/test/queries/clientpositive/avro_evolved_schemas.q
* /hive/trunk/ql/src/test/queries/clientpositive/avro_joins.q
* /hive/trunk/ql/src/test/queries/clientpositive/avro_sanity_test.q
* /hive/trunk/ql/src/test/queries/clientpositive/avro_schema_error_message.q
* /hive/trunk/ql/src/test/queries/clientpositive/avro_schema_literal.q
* /hive/trunk/ql/src/test/results/clientpositive/avro_change_schema.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_evolved_schemas.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_joins.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_sanity_test.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_schema_error_message.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_schema_literal.q.out
* /hive/trunk/serde/ivy.xml
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroGenericRecordWritable.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroObjectInspectorGenerator.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeException.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerdeUtils.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerializer.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/BadSchemaException.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/InstanceCache.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/ReaderWriterSchemaPair.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/SchemaResolutionProblem.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/SchemaToTypeInfo.java
* /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroObjectInspectorGenerator.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerde.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerdeUtils.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerializer.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestGenericAvroRecordWritable.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestInstanceCache.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestSchemaReEncoder.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestThatEvolvedSchemasActAsWeWant.java
* /hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/Utils.java


 Add SerDe for Avro serialized data
 --

 Key: HIVE-895
 URL: https://issues.apache.org/jira/browse/HIVE-895
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Affects Versions: 0.9.0
Reporter: Jeff Hammerbacher
Assignee: Jakob Homan
 Fix For: 0.10.0, 0.9.1

 Attachments: doctors.avro, episodes.avro, HIVE-895-draft.patch, 
 HIVE-895.patch, hive-895.patch.1.txt


 As Avro continues to mature, having a SerDe to allow HiveQL queries over Avro 
 data seems like a solid win.



[jira] [Commented] (HIVE-3152) Disallow certain character patterns in partition names

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547929#comment-13547929
 ] 

Hudson commented on HIVE-3152:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3152. Disallow certain character patterns in partition names. (Ivan 
Gorbachev via kevinwilfong) (Revision 1401784)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1401784
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreEventListener.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/PartitionNameWhitelistPreEventListener.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/AddPartitionEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/AlterPartitionEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/AlterTableEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/CreateDatabaseEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/CreateTableEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/DropDatabaseEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/DropPartitionEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/DropTableEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/ListenerEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/LoadPartitionDoneEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/PreCreateTableEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/PreDropDatabaseEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/PreDropTableEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/PreEventContext.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/PreLoadPartitionDoneEvent.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEventListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestPartitionNameWhitelistPreEventHook.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/MetaDataExportListener.java
* /hive/trunk/ql/src/test/queries/clientnegative/add_partition_with_whitelist.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/dynamic_partitions_with_whitelist.q
* /hive/trunk/ql/src/test/queries/clientpositive/add_partition_no_whitelist.q
* /hive/trunk/ql/src/test/queries/clientpositive/add_partition_with_whitelist.q
* 
/hive/trunk/ql/src/test/results/clientnegative/add_partition_with_whitelist.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/dynamic_partitions_with_whitelist.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/add_partition_no_whitelist.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/add_partition_with_whitelist.q.out


 Disallow certain character patterns in partition names
 --

 Key: HIVE-3152
 URL: https://issues.apache.org/jira/browse/HIVE-3152
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Andrew Poland
Assignee: Ivan Gorbachev
Priority: Minor
  Labels: api-addition, configuration-addition
 Fix For: 0.10.0

 Attachments: jira-3152.0.patch, jira-3152.1.patch, jira-3152.2.patch


 New event listener to allow the metastore to reject a partition name if it 
 contains undesired character patterns such as Unicode characters or commas. 
 The match pattern is implemented as a regular expression.
 Modifies append_partition to call a new MetaStorePreEventListener 
 implementation, PreAppendPartitionEvent.
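The rejection logic can be sketched with nothing but `java.util.regex`; the class name and the pattern below are illustrative only, not Hive's actual listener or default whitelist:

```java
import java.util.regex.Pattern;

// Sketch of a partition-name whitelist check: a partition value is
// accepted only if the whole string matches the configured pattern.
public class PartitionNameWhitelist {
    private final Pattern allowed;

    public PartitionNameWhitelist(String regex) {
        this.allowed = Pattern.compile(regex);
    }

    public boolean isValid(String partitionValue) {
        return allowed.matcher(partitionValue).matches();
    }

    public static void main(String[] args) {
        // Illustrative pattern: ASCII word characters and hyphens only.
        PartitionNameWhitelist w = new PartitionNameWhitelist("[\\w-]*");
        System.out.println(w.isValid("2008-04-08")); // prints "true"
        System.out.println(w.isValid("a,b"));        // prints "false": comma rejected
    }
}
```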



[jira] [Commented] (HIVE-3153) Release codecs and output streams between flushes of RCFile

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547930#comment-13547930
 ] 

Hudson commented on HIVE-3153:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3153 : Release codecs and output streams between flushes of RCFile 
(Owen O'Malley via Ashutosh Chauhan) (Revision 1367692)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367692
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java


 Release codecs and output streams between flushes of RCFile
 ---

 Key: HIVE-3153
 URL: https://issues.apache.org/jira/browse/HIVE-3153
 Project: Hive
  Issue Type: Improvement
  Components: Compression
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.10.0

 Attachments: hive-3153.patch


 Currently, the RCFile writer holds a compression codec per file and a 
 compression output stream per column. Especially for queries that use 
 dynamic partitions, this quickly consumes a lot of memory.
 I'd like flushRecords to get a codec from the pool and create the compression 
 output stream within flushRecords itself.
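The borrow/return pattern the description asks for looks roughly like this generic pool. Hadoop's real `CodecPool` plays this role in Hive; the `SimplePool` class here is an assumption made so the sketch stays self-contained:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Generic borrow/return pool: instead of every writer pinning a codec
// for the lifetime of its file, flushRecords borrows one, uses it, and
// returns it, so idle writers hold no codec or stream.
public class SimplePool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    public T borrow() {
        T t = idle.poll();                // reuse an idle instance if any
        return t != null ? t : factory.get();
    }

    public void giveBack(T t) {
        idle.push(t);                     // make it available to the next flush
    }

    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>(StringBuilder::new);
        StringBuilder codec = pool.borrow();        // pool empty: factory creates one
        pool.giveBack(codec);
        System.out.println(pool.borrow() == codec); // prints "true": instance reused
    }
}
```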



[jira] [Commented] (HIVE-3051) JDBC cannot find metadata for tables/columns containing uppercase character

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547931#comment-13547931
 ] 

Hudson commented on HIVE-3051:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3051 JDBC cannot find metadata for tables/columns containing uppercase
character (Navis via namit) (Revision 1342428)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342428
Files : 
* 
/hive/trunk/jdbc/src/java/org/apache/hadoop/hive/jdbc/HiveDatabaseMetaData.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java


 JDBC cannot find metadata for tables/columns containing uppercase character
 ---

 Key: HIVE-3051
 URL: https://issues.apache.org/jira/browse/HIVE-3051
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.10.0


 {code}
 create table TEST_TABLE ( ... );
 ...
 ResultSet rs = databaseMetaData.getColumns(null, null, "TEST_TABLE", null); // empty
 {code}
 Trivial, but the Hive shell and Thrift client accept the above use case by 
 converting identifiers to lower case. JDBC should be consistent with them.
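The consistency fix amounts to normalizing identifier case before the metastore lookup. A tiny sketch (the class and method names are illustrative, not the patched HiveDatabaseMetaData code):

```java
import java.util.Locale;

// Sketch of the normalization: lower-case table/column identifiers
// before querying the metastore, matching what the Hive shell and
// Thrift client already do.
public class IdentifierNormalizer {
    public static String normalize(String identifier) {
        return identifier == null ? null : identifier.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(normalize("TEST_TABLE")); // prints "test_table"
    }
}
```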



[jira] [Commented] (HIVE-3052) TestHadoop20SAuthBridge always uses the same port

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547932#comment-13547932
 ] 

Hudson commented on HIVE-3052:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3052 : TestHadoop20SAuthBridge always uses the same port (Navis via 
Ashutosh Chauhan) (Revision 1344797)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1344797
Files : 
* 
/hive/trunk/shims/src/test/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java


 TestHadoop20SAuthBridge always uses the same port
 -

 Key: HIVE-3052
 URL: https://issues.apache.org/jira/browse/HIVE-3052
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.10.0

 Attachments: HIVE-3052.1.patch.txt


 Similar to https://issues.apache.org/jira/browse/HIVE-2959
 TestHadoop20SAuthBridge uses the fixed port 10000 (and 10010) for testing, 
 which is the default port of the Hive server, making the test fail if someone 
 is running a Hive server on the same machine.



[jira] [Commented] (HIVE-3156) Support MiniMR tests for Hadoop-0.23

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547933#comment-13547933
 ] 

Hudson commented on HIVE-3156:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3341 [jira] Making hive tests run against different MR versions
(Sushanth Sowmyan via Ashutosh Chauhan)

Summary:
HIVE-3341 Making hive tests run against different MR versions

After we build hive, we want to have the ability to run unit tests against 
specific hadoop versions. Currently, the classpath constructed has multiple 
hadoop jars, which makes compiling okay, but running non-deterministic.

An example is HIVE-3156, where running against 0.23 shows issues with a couple 
of tests (which should either be shimmed out, or separated into directories the 
way it's done in the shims/ directory) - It would also be nice to find these 
issues out at test-compile time itself, rather than having them fail at test 
runtime.

With this patch, we can set the ant variable hadoop.mr.rev to 20, 20S or 23 to 
test against a particular version.

Test Plan: current tests continue to work - this is a build system change

Reviewers: JIRA, ashutoshc, cwsteinbach

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D4557 (Revision 1372745)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1372745
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/ivy/common-configurations.xml
* /hive/trunk/shims/build.xml
* /hive/trunk/shims/src/common-secure/test
* /hive/trunk/shims/src/common-secure/test/org
* /hive/trunk/shims/src/common-secure/test/org/apache
* /hive/trunk/shims/src/common-secure/test/org/apache/hadoop
* /hive/trunk/shims/src/common-secure/test/org/apache/hadoop/hive
* /hive/trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift
* 
/hive/trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java
* 
/hive/trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestZooKeeperTokenStore.java
* 
/hive/trunk/shims/src/test/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java
* 
/hive/trunk/shims/src/test/org/apache/hadoop/hive/thrift/TestZooKeeperTokenStore.java


 Support MiniMR tests for Hadoop-0.23
 

 Key: HIVE-3156
 URL: https://issues.apache.org/jira/browse/HIVE-3156
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.1
 Environment: Hive-0.9.1 + Hadoop-0.23.0
Reporter: rohithsharma

 When MiniMR tests are run with hadoop-0.23.0, they fail with the following 
 exception.
 java.lang.UnsupportedOperationException
   at 
 org.apache.hadoop.mapred.MiniMRCluster.getJobTrackerPort(MiniMRCluster.java:88)
   at org.apache.hadoop.hive.ql.QTestUtil.initConf(QTestUtil.java:215)
 In hadoop-0.23, a few MapReduce APIs, such as getJobTrackerPort(), are not 
 supported. Here, we need to choose APIs based on the Hadoop version.



[jira] [Commented] (HIVE-2585) Collapse hive.metastore.uris and hive.metastore.local

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547934#comment-13547934
 ] 

Hudson commented on HIVE-2585:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2585 Collapse hive.metastore.uris and hive.metastore.local (Ashutosh 
Chauhan via egc) (Revision 1352333)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1352333
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/data/conf/hive-site.xml
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMarkPartitionRemote.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreAuthorization.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEventListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStoreIpAddress.java
* 
/hive/trunk/shims/src/test/org/apache/hadoop/hive/thrift/TestHadoop20SAuthBridge.java


 Collapse hive.metastore.uris and hive.metastore.local
 -

 Key: HIVE-2585
 URL: https://issues.apache.org/jira/browse/HIVE-2585
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2585.D2559.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2585.D2559.2.patch, hive-2585_3.patch


 We should just have hive.metastore.uris. If it is empty, we shall assume 
 local mode; if non-empty, we shall use that string to connect to the remote 
 metastore. Having two different keys for the same information is confusing.
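The decision rule the single key implies is simple enough to state as code; `MetastoreMode` and `isLocal` are names invented for this sketch, and the thrift URI is just an example value:

```java
// Sketch of the single-key rule: an empty hive.metastore.uris means
// embedded/local metastore; a non-empty value names the remote
// metastore to connect to.
public class MetastoreMode {
    public static boolean isLocal(String metastoreUris) {
        return metastoreUris == null || metastoreUris.trim().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isLocal(""));                        // prints "true"
        System.out.println(isLocal("thrift://meta-host:9083")); // prints "false"
    }
}
```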



[jira] [Commented] (HIVE-3057) metastore.HiveMetaStore$HMSHandler should set the thread local raw store to null in shutdown()

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547935#comment-13547935
 ] 

Hudson commented on HIVE-3057:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3057: metastore.HiveMetaStore should set the thread local raw store to 
null in shutdown() (Travis Crawford via Ashutosh Chauhan) (Revision 1344795)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1344795
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java


 metastore.HiveMetaStore$HMSHandler should set the thread local raw store to 
 null in shutdown()
 --

 Key: HIVE-3057
 URL: https://issues.apache.org/jira/browse/HIVE-3057
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.8.1, 0.9.0
Reporter: Feng Peng
Assignee: Travis Crawford
 Fix For: 0.10.0, 0.9.1

 Attachments: HIVE-3057.1.patch


 The shutdown() function of metastore.HiveMetaStore$HMSHandler does not set 
 the thread local RawStore variable (in threadLocalMS) to null. Subsequent 
 getMS() calls may get the wrong RawStore object.
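This bug class is easy to reproduce with a plain ThreadLocal; the fix is the equivalent of the remove() call below. Names are illustrative, and a bare `Object` stands in for the RawStore:

```java
// Self-contained illustration: a ThreadLocal cache that is not cleared
// on shutdown hands the stale instance to the next request served by
// the same thread.
public class ThreadLocalCleanup {
    private static final ThreadLocal<Object> threadLocalMS = new ThreadLocal<>();

    static Object getMS() {
        Object ms = threadLocalMS.get();
        if (ms == null) {
            ms = new Object();       // stands in for creating a fresh RawStore
            threadLocalMS.set(ms);
        }
        return ms;
    }

    static void shutdown() {
        threadLocalMS.remove();      // the fix: without this, the next getMS()
                                     // on this thread returns the old instance
    }

    public static void main(String[] args) {
        Object first = getMS();
        shutdown();
        System.out.println(first == getMS()); // prints "false": a fresh store
    }
}
```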



[jira] [Commented] (HIVE-3058) hive.transform.escape.input breaks tab delimited data

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547936#comment-13547936
 ] 

Hudson commented on HIVE-3058:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3058 hive.transform.escape.input breaks tab delimited data
(Kevin Wilfong via namit) (Revision 1343033)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343033
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/newline.q
* /hive/trunk/ql/src/test/results/clientpositive/newline.q.out


 hive.transform.escape.input breaks tab delimited data
 -

 Key: HIVE-3058
 URL: https://issues.apache.org/jira/browse/HIVE-3058
 Project: Hive
  Issue Type: Bug
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3058.1.patch.txt


 With hive.transform.escape.input set, all tabs going into a script, 
 including those used to separate columns, are escaped.
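To see why escaping everything breaks the record format, compare escaping per cell versus escaping the whole serialized row. This is a self-contained illustration, not Hive's actual serialization code:

```java
// Illustration of the failure mode: tabs inside cell values must be
// escaped, but the tab that separates columns must not be, or the
// downstream script sees the wrong number of columns.
public class TabEscaping {
    static String escapeCell(String cell) {
        return cell.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n");
    }

    // Correct: escape each cell, then join with a real (unescaped) tab.
    static String serializeRow(String... cells) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < cells.length; i++) {
            if (i > 0) sb.append('\t');      // real column delimiter survives
            sb.append(escapeCell(cells[i])); // tabs inside values are escaped
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String row = serializeRow("a\tb", "c");
        System.out.println(row.split("\t", -1).length); // prints "2": both columns survive
        // Escaping the already-joined row instead would escape the
        // delimiter too, collapsing the record into a single column:
        System.out.println(escapeCell(row).split("\t", -1).length); // prints "1"
    }
}
```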



[jira] [Commented] (HIVE-3059) revert HIVE-2703

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547937#comment-13547937
 ] 

Hudson commented on HIVE-3059:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3059. Revert HIVE-2703, add testcase for non-string partition columns
   passed to transform operator, updated TestJdbcDriver.
(Kevin Wilfong via namit) (Revision 1343331)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343331
Files : 
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/partcols1.q
* /hive/trunk/ql/src/test/results/clientpositive/partcols1.q.out


 revert HIVE-2703
 

 Key: HIVE-3059
 URL: https://issues.apache.org/jira/browse/HIVE-3059
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0

 Attachments: hive.3059.1.patch, hive.3059.1.patch.txt






[jira] [Commented] (HIVE-3056) Create a new metastore tool to bulk update location field in Db/Table/Partition records

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547938#comment-13547938
 ] 

Hudson commented on HIVE-3056:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3056. Ability to bulk update location field in Db/Table/Partition 
records (Shreepadma Venugopalan via cws) (Revision 1380500)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380500
Files : 
* /hive/trunk/bin/ext/metatool.sh
* /hive/trunk/bin/metatool
* /hive/trunk/build.xml
* /hive/trunk/eclipse-templates/TestHiveMetaTool.launchtemplate
* /hive/trunk/metastore/ivy.xml
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/tools
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaTool.java


 Create a new metastore tool to bulk update location field in 
 Db/Table/Partition records 
 

 Key: HIVE-3056
 URL: https://issues.apache.org/jira/browse/HIVE-3056
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Reporter: Carl Steinbach
Assignee: Shreepadma Venugopalan
 Fix For: 0.10.0

 Attachments: HIVE-3056.2.patch.txt, HIVE-3056.3.patch.txt, 
 HIVE-3056.4.patch.txt, HIVE-3056.5.patch.txt, HIVE-3056.7.patch.txt, 
 HIVE-3056.patch






[jira] [Commented] (HIVE-1634) Allow access to Primitive types stored in binary format in HBase

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547939#comment-13547939
 ] 

Hudson commented on HIVE-1634:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2958 [jira] GROUP BY causing ClassCastException [LazyDioInteger cannot be
cast to LazyInteger]
(Navis Ryu via Ashutosh Chauhan)

Summary:
DPAL- GROUP BY causing ClassCastException [LazyDioInteger cannot be cast
LazyInteger]

This relates to https://issues.apache.org/jira/browse/HIVE-1634.

The following work fine:

CREATE EXTERNAL TABLE tim_hbase_occurrence (
  id int,
  scientific_name string,
  data_resource_id int
) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH
SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key#b,v:scientific_name#s,v:data_resource_id#b"
) TBLPROPERTIES(
  "hbase.table.name" = "mini_occurrences",
  "hbase.table.default.storage.type" = "binary"
);
SELECT * FROM tim_hbase_occurrence LIMIT 3;
SELECT * FROM tim_hbase_occurrence WHERE data_resource_id=1081 LIMIT 3;

However, the following fails:

SELECT data_resource_id, count(*) FROM tim_hbase_occurrence GROUP BY
data_resource_id;

The error given:

0 TS
2012-04-17 16:58:45,693 INFO org.apache.hadoop.hive.ql.exec.MapOperator: Initialization Done 7 MAP
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: Processing alias tim_hbase_occurrence for file hdfs://c1n2.gbif.org/user/hive/warehouse/tim_hbase_occurrence
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 forwarding 1 rows
2012-04-17 16:58:45,714 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 0 forwarding 1 rows
2012-04-17 16:58:45,716 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 1 forwarding 1 rows
2012-04-17 16:58:45,723 FATAL ExecMapper: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"id":1444,"scientific_name":null,"data_resource_id":1081}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:548)
    at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
    at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to org.apache.hadoop.hive.serde2.lazy.LazyInteger
    at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:737)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
    at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
    at org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:529)
    ... 9 more
Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.serde2.lazydio.LazyDioInteger cannot be cast to org.apache.hadoop.hive.serde2.lazy.LazyInteger
    at org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyIntObjectInspector.copyObject(LazyIntObjectInspector.java:43)
    at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:239)
    at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:150)
    at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.deepCopyElements(KeyWrapperFactory.java:142)
    at org.apache.hadoop.hive.ql.exec.KeyWrapperFactory$ListKeyWrapper.copyKey(KeyWrapperFactory.java:119)
    at org.apache.hadoop.hive.ql.exec.GroupByOperator.processHashAggr(GroupByOperator.java:750)
    at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:722)
    ... 18 more
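The innermost frame is a hard downcast in LazyIntObjectInspector.copyObject: a binary-backed LazyDioInteger is cast to its sibling class LazyInteger. A self-contained Java sketch of the same failure mode, using hypothetical stand-in classes (LazyInt and LazyDioInt are not the real Hive SerDe classes): casting to a sibling subclass throws, while going through the shared parent does not.

```java
// Hypothetical stand-ins for LazyInteger / LazyDioInteger: two siblings
// sharing a common parent, mirroring the lazy SerDe class hierarchy.
class LazyPrimitive {
    protected final int value;
    LazyPrimitive(int value) { this.value = value; }
    int get() { return value; }
}

class LazyInt extends LazyPrimitive {        // analogous to LazyInteger
    LazyInt(int v) { super(v); }
}

class LazyDioInt extends LazyPrimitive {     // analogous to LazyDioInteger
    LazyDioInt(int v) { super(v); }
}

public class CastDemo {
    // Mimics a copyObject that assumes one concrete subtype: throws
    // ClassCastException when handed the sibling class instead.
    static int copyAssumingLazyInt(Object o) {
        return ((LazyInt) o).get();
    }

    // Safe variant: depend only on the shared parent type.
    static int copyViaParent(Object o) {
        return ((LazyPrimitive) o).get();
    }

    public static void main(String[] args) {
        Object binaryBacked = new LazyDioInt(1081);
        try {
            copyAssumingLazyInt(binaryBacked);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the GROUP BY trace");
        }
        System.out.println(copyViaParent(binaryBacked)); // prints 1081
    }
}
```

This is only an illustration of the exception pattern; the actual fix in Hive involves the object-inspector layer handling binary-stored primitives, not a parent-class cast.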

Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2871 (Revision 1328157)
HIVE-1634: Allow access 

[jira] [Commented] (HIVE-1538) FilterOperator is applied twice with ppd on.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547940#comment-13547940
 ] 

Hudson commented on HIVE-1538:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2791: filter is still removed due to regression of HIVE-1538 althougth 
HIVE-2344 (binlijin via hashutosh) (Revision 1291916)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1291916
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* /hive/trunk/ql/src/test/queries/clientpositive/ppd2.q
* /hive/trunk/ql/src/test/results/clientpositive/ppd2.q.out


 FilterOperator is applied twice with ppd on.
 

 Key: HIVE-1538
 URL: https://issues.apache.org/jira/browse/HIVE-1538
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.8.0

 Attachments: patch-1538-1.txt, patch-1538-2.txt, patch-1538-3.txt, 
 patch-1538-4.txt, patch-1538.txt


 With hive.optimize.ppd set to true, FilterOperator is applied twice, and the 
 second operator seems to always filter zero rows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3800) testCliDriver_combine2 fails on hadoop-1

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547941#comment-13547941
 ] 

Hudson commented on HIVE-3800:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3800 : testCliDriver_combine2 fails on hadoop-1 (Gunther Hagleitner 
via Ashutosh Chauhan) (Revision 1426261)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1426261
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/combine2.q
* /hive/trunk/ql/src/test/queries/clientpositive/combine2_hadoop20.q
* /hive/trunk/ql/src/test/results/clientpositive/combine2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/combine2_hadoop20.q.out


 testCliDriver_combine2 fails on hadoop-1
 

 Key: HIVE-3800
 URL: https://issues.apache.org/jira/browse/HIVE-3800
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.11.0

 Attachments: HIVE-3800.2.patch, HIVE-3800.patch


 The functionality is actually working correctly, but an incorrect 
 include/exclude macro causes the wrong query file to be run.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3805) Resolve TODO in TUGIBasedProcessor

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547943#comment-13547943
 ] 

Hudson commented on HIVE-3805:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3805 Resolve TODO in TUGIBasedProcessor
(Kevin Wilfong via namit) (Revision 1428705)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1428705
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/TUGIBasedProcessor.java


 Resolve TODO in TUGIBasedProcessor
 --

 Key: HIVE-3805
 URL: https://issues.apache.org/jira/browse/HIVE-3805
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.11.0

 Attachments: HIVE-3805.1.patch.txt


 There's a TODO in TUGIBasedProcessor
 // TODO get rid of following reflection after THRIFT-1465 is fixed.
 Now that we have upgraded to Thrift 0.9, THRIFT-1465 is available.
 This will also fix an issue where fb303 counters cannot be collected if the 
 TUGIBasedProcessor is used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3802) testCliDriver_input39 fails on hadoop-1

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547944#comment-13547944
 ] 

Hudson commented on HIVE-3802:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3802 : testCliDriver_input39 fails on hadoop-1 (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1426265)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1426265
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/input39.q
* /hive/trunk/ql/src/test/queries/clientpositive/input39_hadoop20.q
* /hive/trunk/ql/src/test/results/clientpositive/input39.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input39_hadoop20.q.out


 testCliDriver_input39 fails on hadoop-1
 ---

 Key: HIVE-3802
 URL: https://issues.apache.org/jira/browse/HIVE-3802
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.11.0

 Attachments: HIVE-3802.patch


 This test is marked as flaky and disabled for all versions, but hadoop-1 was 
 missed in that list.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3146) Support external hive tables whose data are stored in Azure blob store/Azure Storage Volumes (ASV)

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547945#comment-13547945
 ] 

Hudson commented on HIVE-3146:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3146 : Support external hive tables whose data are stored in Azure 
blob store/Azure Storage Volumes (ASV) (Kanna Karanam via Ashutosh Chauhan) 
(Revision 1356524)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1356524
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java


 Support external hive tables whose data are stored in Azure blob store/Azure 
 Storage Volumes (ASV)
 --

 Key: HIVE-3146
 URL: https://issues.apache.org/jira/browse/HIVE-3146
 Project: Hive
  Issue Type: Sub-task
  Components: Windows
Affects Versions: 0.10.0
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: HIVE-3146.1.patch.txt, HIVE-3146.2.patch.txt, 
 HIVE-3146.3.patch.txt


 Support external hive tables whose data are stored in Azure blob store/Azure 
 Storage Volumes (ASV)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3048) Collect_set Aggregate does unnecessary check for value.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547946#comment-13547946
 ] 

Hudson commented on HIVE-3048:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3048 : Collect_set Aggregate does unnecessary check for value.
(Ed Capriolo via Ashutosh Chauhan) (Revision 1354079)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354079
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFCollectSet.java


 Collect_set Aggregate does unnecessary check for value.
 --

 Key: HIVE-3048
 URL: https://issues.apache.org/jira/browse/HIVE-3048
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.8.1
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Fix For: 0.10.0

 Attachments: HIVE-3048.patch.1.txt


 Sets already de-duplicate for free; there is no need for an existence check.
 {noformat}
  private void putIntoSet(Object p, MkArrayAggregationBuffer myagg) {
    if (myagg.container.contains(p))
      return;
    Object pCopy = ObjectInspectorUtils.copyToStandardObject(p,
        this.inputOI);
    myagg.container.add(pCopy);
  }
 {noformat}
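The contains() check above duplicates what java.util.Set.add already guarantees. A minimal, runnable Java sketch of the simplified logic (hypothetical class names, with a plain identity copy standing in for ObjectInspectorUtils.copyToStandardObject):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for collect_set's MkArrayAggregationBuffer.
public class CollectSetSketch {
    final Set<Object> container = new HashSet<>();

    // Simplified putIntoSet: no explicit contains() check, because
    // Set.add() already refuses duplicates -- the point of HIVE-3048.
    void putIntoSet(Object p) {
        container.add(copyToStandardObject(p));
    }

    // Stand-in for ObjectInspectorUtils.copyToStandardObject; an
    // identity copy is enough for this sketch.
    static Object copyToStandardObject(Object p) {
        return p;
    }

    public static void main(String[] args) {
        CollectSetSketch agg = new CollectSetSketch();
        agg.putIntoSet(1081);
        agg.putIntoSet(1081);  // duplicate silently ignored by the set
        agg.putIntoSet(42);
        System.out.println(agg.container.size()); // prints 2
    }
}
```

One design note: dropping the check means the copy now runs even for duplicate values, but it avoids doing two hash lookups (contains then add) on every row.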

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3809) Concurrency issue in RCFile: multiple threads can use the same decompressor

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547947#comment-13547947
 ] 

Hudson commented on HIVE-3809:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3809 : Concurrency issue in RCFile: multiple threads can use the same 
decompressor (Mikhail Bautin via Ashutosh Chauhan) (Revision 1426524)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1426524
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java


 Concurrency issue in RCFile: multiple threads can use the same decompressor
 ---

 Key: HIVE-3809
 URL: https://issues.apache.org/jira/browse/HIVE-3809
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
Assignee: Mikhail Bautin
Priority: Critical
 Fix For: 0.11.0

 Attachments: 
 0001-HIVE-3809-Concurrency-issue-in-RCFile-multiple-threa.patch, 
 0001-HIVE-3809-Decompressors-should-only-be-returned-to-t.patch, D7419.1.patch


 RCFile is not thread-safe, even if each reader is only used by one thread as 
 intended, because it is possible to return decompressors to the pool multiple 
 times by calling close on the reader multiple times. Then, different threads 
 can pick up the same decompressor twice from the pool, resulting in 
 decompression failures.
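A common fix for this double-release pattern is to make the reader's close() idempotent, so the decompressor is returned to the pool at most once no matter how many times close() is called. The sketch below illustrates that guard with hypothetical classes; it is not the actual RCFile patch.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical pool and reader showing the double-return hazard and the
// idempotent-close guard; not the real RCFile/CodecPool classes.
public class DecompressorPoolSketch {
    static final ConcurrentLinkedQueue<Object> POOL =
            new ConcurrentLinkedQueue<>();

    static class Reader {
        private Object decompressor = new Object();
        private boolean closed = false;

        // Idempotent close: the decompressor goes back to the pool at
        // most once, even if callers invoke close() repeatedly.
        synchronized void close() {
            if (closed) {
                return;  // second and later closes are no-ops
            }
            closed = true;
            POOL.offer(decompressor);
            decompressor = null;
        }
    }

    public static void main(String[] args) {
        Reader r = new Reader();
        r.close();
        r.close();  // would double-insert into POOL without the guard
        System.out.println(POOL.size()); // prints 1, not 2
    }
}
```

Without the guard, two threads could each take the same pooled decompressor and corrupt each other's streams, which matches the failure described above.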

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3801) testCliDriver_loadpart_err fails on hadoop-1

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547942#comment-13547942
 ] 

Hudson commented on HIVE-3801:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3801 : testCliDriver_loadpart_err fails on hadoop-1 (Gunther 
Hagleitner via Ashutosh Chauhan) (Revision 1426263)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1426263
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/loadpart_err.q


 testCliDriver_loadpart_err fails on hadoop-1
 

 Key: HIVE-3801
 URL: https://issues.apache.org/jira/browse/HIVE-3801
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.11.0

 Attachments: HIVE-3801.patch


 This test is marked as flaky and disabled for all versions, but hadoop-1 was 
 missed in that list.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3049) setup classpath for templates correctly for eclipse

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547948#comment-13547948
 ] 

Hudson commented on HIVE-3049:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3049 setup classpath for templates correctly for eclipse
(Shuai Ding via namit) (Revision 1342785)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342785
Files : 
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/eclipse-templates/HiveCLI.launchtemplate
* /hive/trunk/eclipse-templates/TestCliDriver.launchtemplate
* /hive/trunk/eclipse-templates/TestEmbeddedHiveMetaStore.launchtemplate
* /hive/trunk/eclipse-templates/TestHBaseCliDriver.launchtemplate
* /hive/trunk/eclipse-templates/TestHive.launchtemplate
* /hive/trunk/eclipse-templates/TestHiveMetaStoreChecker.launchtemplate
* /hive/trunk/eclipse-templates/TestJdbc.launchtemplate
* /hive/trunk/eclipse-templates/TestMTQueries.launchtemplate
* /hive/trunk/eclipse-templates/TestRemoteHiveMetaStore.launchtemplate
* /hive/trunk/eclipse-templates/TestTruncate.launchtemplate


 setup classpath for templates correctly for eclipse
 ---

 Key: HIVE-3049
 URL: https://issues.apache.org/jira/browse/HIVE-3049
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Shuai Ding
 Fix For: 0.10.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3806) Ptest failing due to Argument list too long errors

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547949#comment-13547949
 ] 

Hudson commented on HIVE-3806:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3806 Ptest failing due to Argument list too long errors
(Bhushan Mandhani via namit) (Revision 1422621)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1422621
Files : 
* /hive/trunk/testutils/ptest/hivetest.py


 Ptest failing due to Argument list too long errors
 

 Key: HIVE-3806
 URL: https://issues.apache.org/jira/browse/HIVE-3806
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Reporter: Bhushan Mandhani
Assignee: Bhushan Mandhani
Priority: Minor
 Fix For: 0.11.0

 Attachments: HIVE-3806.1.patch.txt


 ptest creates a really huge shell command to delete from each test host those 
 .q files that it should not be running. For TestCliDriver, the command has 
 become long enough that it is over the threshold allowed by the shell. We 
 should rewrite it so that the same semantics are captured in a shorter command.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3140) Comment indenting is broken for describe in CLI

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547950#comment-13547950
 ] 

Hudson commented on HIVE-3140:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3140 Comment indenting is broken for describe in CLI
(Zhenxiao Luo via namit) (Revision 1418405)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418405
Files : 
* /hive/trunk/contrib/src/test/results/clientpositive/fileformat_base64.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_s3.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_queries.q.out
* /hive/trunk/hwi/src/test/org/apache/hadoop/hive/hwi/TestHWISessionManager.java
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/MetaDataFormatUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/TextMetaDataFormatter.java
* /hive/trunk/ql/src/test/queries/clientpositive/describe_comment_indent.q
* /hive/trunk/ql/src/test/results/clientnegative/desc_failure2.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/drop_partition_filter_failure2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_part_no_drop.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_tbl2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_tbl3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_tbl4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_tbl5.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_tbl_no_drop.q.out
* /hive/trunk/ql/src/test/results/clientnegative/set_hiveconf_validation0.q.out
* /hive/trunk/ql/src/test/results/clientnegative/set_hiveconf_validation1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_index.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_merge_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_merge_stats.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/alter_partition_format_loc.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_table_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_view_rename.q.out
* /hive/trunk/ql/src/test/results/clientpositive/archive_corrupt.q.out
* /hive/trunk/ql/src/test/results/clientpositive/authorization_7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/autogen_colalias.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_change_schema.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_evolved_schemas.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_joins.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_sanity_test.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_schema_error_message.q.out
* /hive/trunk/ql/src/test/results/clientpositive/avro_schema_literal.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ba_table1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ba_table2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ba_table_union.q.out
* /hive/trunk/ql/src/test/results/clientpositive/binary_table_bincolserde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/binary_table_colserde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucket_groupby.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/columnarserde_create_shortcut.q.out
* /hive/trunk/ql/src/test/results/clientpositive/combine3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/convert_enum_to_string.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_default_prop.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_escape.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/create_insert_outputformat.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_like_view.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_nested_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_view.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_view_partitioned.q.out
* /hive/trunk/ql/src/test/results/clientpositive/database.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ddltime.q.out
* /hive/trunk/ql/src/test/results/clientpositive/describe_comment_indent.q.out
* /hive/trunk/ql/src/test/results/clientpositive/describe_syntax.q.out
* 

[jira] [Commented] (HIVE-2698) Enable Hadoop-1.0.0 in Hive

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547951#comment-13547951
 ] 

Hudson commented on HIVE-2698:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2698 [jira] Enable Hadoop-1.0.0 in Hive
(Enis Söztutar via Carl Steinbach)

Summary:
third version of the patch

Hadoop-1.0.0 was recently released and is, AFAIK, API-compatible with the 0.20S
release.

Test Plan: EMPTY

Reviewers: JIRA, cwsteinbach

Reviewed By: cwsteinbach

CC: cwsteinbach, enis

Differential Revision: https://reviews.facebook.net/D1389 (Revision 1236023)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1236023
Files : 
* /hive/trunk/build.properties


 Enable Hadoop-1.0.0 in Hive
 ---

 Key: HIVE-2698
 URL: https://issues.apache.org/jira/browse/HIVE-2698
 Project: Hive
  Issue Type: New Feature
  Components: Security, Shims
Affects Versions: 0.9.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: hadoop, hadoop-1.0, jars
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2698.D1389.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2698.D1389.2.patch, HIVE-2698.D1389.2.patch, 
 HIVE-2698_v1.patch, HIVE-2698_v2.patch, HIVE-2698_v3.patch


 Hadoop-1.0.0 was recently released and is, AFAIK, API-compatible with the 
 0.20S release. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3029) Update ShimLoader to work with Hadoop 2.x

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547952#comment-13547952
 ] 

Hudson commented on HIVE-3029:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3029. Update ShimLoader to work with Hadoop 2.x (Carl Steinbach via 
cws) (Revision 1374101)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1374101
Files : 
* /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/ShimLoader.java


 Update ShimLoader to work with Hadoop 2.x
 -

 Key: HIVE-3029
 URL: https://issues.apache.org/jira/browse/HIVE-3029
 Project: Hive
  Issue Type: Bug
  Components: Shims
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-3029.D3255.1.patch, 
 HIVE-3029.2.patch.txt, HIVE-3029.D3255.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3028) Fix javadoc again

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547953#comment-13547953
 ] 

Hudson commented on HIVE-3028:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3028 : Fix javadoc again (Owen Omalley via Ashutosh Chauhan) (Revision 
1340507)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1340507
Files : 
* /hive/trunk/build.xml


 Fix javadoc again
 -

 Key: HIVE-3028
 URL: https://issues.apache.org/jira/browse/HIVE-3028
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.10.0

 Attachments: hive-3028.patch


 HIVE-2646 broke the javadoc, because javadoc needs the Hadoop jars on the 
 classpath.
 {quote}
   [javadoc] Building index for all classes...
   [javadoc] Generating 
 /Users/owen/work/eclipse/hive/build/dist/docs/api/stylesheet.css...
   [javadoc] 3 errors
   [javadoc] 168 warnings
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-967) Implement show create table

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547954#comment-13547954
 ] 

Hudson commented on HIVE-967:
-

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-967. Implement show create table (Feng Lu via kevinwilfong) 
(Revision 1398896)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1398896
Files : 
* /hive/trunk/build.xml
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/DDLWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ShowCreateTableDesc.java
* 
/hive/trunk/ql/src/test/queries/clientnegative/show_create_table_does_not_exist.q
* /hive/trunk/ql/src/test/queries/clientnegative/show_create_table_index.q
* /hive/trunk/ql/src/test/queries/clientpositive/show_create_table_alter.q
* /hive/trunk/ql/src/test/queries/clientpositive/show_create_table_db_table.q
* /hive/trunk/ql/src/test/queries/clientpositive/show_create_table_delimited.q
* /hive/trunk/ql/src/test/queries/clientpositive/show_create_table_partitioned.q
* /hive/trunk/ql/src/test/queries/clientpositive/show_create_table_serde.q
* /hive/trunk/ql/src/test/queries/clientpositive/show_create_table_view.q
* 
/hive/trunk/ql/src/test/results/clientnegative/show_create_table_does_not_exist.q.out
* /hive/trunk/ql/src/test/results/clientnegative/show_create_table_index.q.out
* /hive/trunk/ql/src/test/results/clientpositive/show_create_table_alter.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/show_create_table_db_table.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/show_create_table_delimited.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/show_create_table_partitioned.q.out
* /hive/trunk/ql/src/test/results/clientpositive/show_create_table_serde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/show_create_table_view.q.out


 Implement show create table
 -

 Key: HIVE-967
 URL: https://issues.apache.org/jira/browse/HIVE-967
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Query Processor
Reporter: Adam Kramer
Assignee: Feng Lu
 Fix For: 0.10.0

 Attachments: HIVE-967.2.patch.txt, HIVE-967.3.patch.txt, 
 HIVE-967.4.patch.txt, HIVE-967.5.patch.txt, HIVE-967.6.patch.txt, 
 HIVE-967.patch.txt, HiveShowCreateTable.jar, show_create.txt


 SHOW CREATE TABLE would be very useful in cases where you are trying to 
 figure out the partitioning and/or bucketing scheme for a table. Perhaps this 
 could be implemented by having new tables automatically SET PROPERTIES 
 (create_command='raw text of the create statement')?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3178) retry not honored in RetryingRawMetastore

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547955#comment-13547955
 ] 

Hudson commented on HIVE-3178:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3178. retry not honored in RetryingRawMetastore. (Namit Jain via 
kevinwilfong) (Revision 1353059)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353059
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingRawStore.java


 retry not honored in RetryingRawMetastore
 -

 Key: HIVE-3178
 URL: https://issues.apache.org/jira/browse/HIVE-3178
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0

 Attachments: hive.3178.1.patch


 The retrying metastore (RetryingRawStore) catches JDOException, but those 
 exceptions are always wrapped by reflection, so they are never caught and the 
 retry is not honored.
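The failure mode can be sketched outside Hive: a wrapper that calls the store through reflection never sees JDOException directly, only InvocationTargetException, so the retry logic has to unwrap getCause(). Everything below (FlakyStore, invokeWithRetry, and the local JDOException stand-in) is illustrative only, not Hive's RetryingRawStore code:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

class RetrySketch {
    // Stand-in for javax.jdo.JDOException, so the sketch is self-contained.
    static class JDOException extends RuntimeException {}

    // A store whose first two calls fail, like a metastore with a flaky
    // JDO connection.
    static class FlakyStore {
        int calls = 0;
        public void getTable() {
            calls++;
            if (calls < 3) throw new JDOException();
        }
    }

    // When the store is invoked via reflection, the JDOException surfaces as
    // the *cause* of an InvocationTargetException; a catch (JDOException e)
    // around invoke() never matches, so the retry must inspect getCause().
    static int invokeWithRetry(Object target, String methodName, int maxRetries) {
        try {
            Method m = target.getClass().getMethod(methodName);
            for (int attempt = 1; ; attempt++) {
                try {
                    m.invoke(target);
                    return attempt;                 // success: report attempts used
                } catch (InvocationTargetException e) {
                    if (e.getCause() instanceof JDOException && attempt < maxRetries) {
                        continue;                   // retry only works via getCause()
                    }
                    throw new RuntimeException(e.getCause());
                }
            }
        } catch (NoSuchMethodException | IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeWithRetry(new FlakyStore(), "getTable", 5));  // prints 3
    }
}
```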

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3030) escape more chars for script operator

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547956#comment-13547956
 ] 

Hudson commented on HIVE-3030:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3030 escape more chars for script operator (Namit Jain via Siying 
Dong) (Revision 1341589)

 Result = ABORTED
sdong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1341589
Files : 
* /hive/trunk/bin/hive
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/data/scripts/doubleescapedtab.py
* /hive/trunk/data/scripts/escapedcarriagereturn.py
* /hive/trunk/data/scripts/escapednewline.py
* /hive/trunk/data/scripts/escapedtab.py
* /hive/trunk/data/scripts/newline.py
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TextRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TextRecordWriter.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java
* /hive/trunk/ql/src/test/queries/clientpositive/newline.q
* /hive/trunk/ql/src/test/results/clientpositive/newline.q.out


 escape more chars for script operator
 -

 Key: HIVE-3030
 URL: https://issues.apache.org/jira/browse/HIVE-3030
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0


 Only the newline character was being escaped.
 The same escaping needs to be applied to carriage returns and tabs.
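The escaping the issue asks for can be sketched as a pair of helpers. Rows sent to a script operator are newline-delimited, so literal newlines, carriage returns, and tabs inside a value must be escaped on the way out and unescaped on the way back. This is an illustration of the idea, not Hive's HiveUtils implementation:

```java
class EscapeSketch {
    // Escape backslash, newline, carriage return, and tab so a record can
    // travel through a newline-delimited stream without breaking row
    // boundaries.
    static String escape(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '\\': out.append("\\\\"); break;
                case '\n': out.append("\\n");  break;
                case '\r': out.append("\\r");  break;
                case '\t': out.append("\\t");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    // Inverse of escape(): turn \n, \r, \t, and \\ back into real characters.
    static String unescape(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '\\' && i + 1 < s.length()) {
                char next = s.charAt(++i);
                switch (next) {
                    case 'n': out.append('\n'); break;
                    case 'r': out.append('\r'); break;
                    case 't': out.append('\t'); break;
                    default:  out.append(next);   // handles \\ and unknown escapes
                }
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String row = "a\tb\r\nc";
        String escaped = escape(row);
        System.out.println(escaped.contains("\n"));          // prints false
        System.out.println(unescape(escaped).equals(row));   // prints true
    }
}
```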

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3031) hive docs target does not work

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547957#comment-13547957
 ] 

Hudson commented on HIVE-3031:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3031 : hive docs target does not work (Sushanth Sowmyan via Ashutosh 
Chauhan) (Revision 1340348)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1340348
Files : 
* /hive/trunk/build.xml
* /hive/trunk/ivy.xml


 hive docs target does not work
 --

 Key: HIVE-3031
 URL: https://issues.apache.org/jira/browse/HIVE-3031
 Project: Hive
  Issue Type: Bug
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
  Labels: build, docs
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-3031.D3279.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-3031.D3279.2.patch


 Running ant docs does not work for two reasons:
 a) ant, when called from the docs target, doesn't know what to do with ivy, 
 presumably because the ivy-init-antlib target hasn't been called.
 b) The velocity jar is not pulled in by ivy, since no dependency for it is 
 declared in ivy.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2418) replace or translate function in hive

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547960#comment-13547960
 ] 

Hudson commented on HIVE-2418:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2418 Translate/Replace UDF (Mark Grover via egc) (Revision 1346933)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1346933
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTranslate.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_translate.q
* /hive/trunk/ql/src/test/results/clientpositive/show_functions.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_translate.q.out


 replace or translate function in hive
 -

 Key: HIVE-2418
 URL: https://issues.apache.org/jira/browse/HIVE-2418
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.9.0
 Environment: hive-0.7.0
Reporter: kranthikiran
Assignee: Mark Grover
  Labels: cql
 Fix For: 0.10.0

 Attachments: hive-2418.1.patch.txt, udf_translate_v1.patch, 
 udf_translate_v2_with_1_negative_test.patch, 
 udf_translate_v2_with_3_negative_tests.patch, 
 udf_translate_v3_with_1_negative_test.patch, 
 udf_translate_v3_with_3_negative_tests.patch

   Original Estimate: 96h
  Remaining Estimate: 96h

 replace or translate function in hive
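For context, the usual SQL TRANSLATE semantics this feature request refers to can be sketched as follows: each character of the input found in `from` is replaced by the character at the same position in `to`, and characters of `from` beyond the length of `to` are deleted. The `translate` helper below is illustrative, not the GenericUDFTranslate source:

```java
import java.util.HashMap;
import java.util.Map;

class TranslateSketch {
    static String translate(String input, String from, String to) {
        Map<Character, Character> map = new HashMap<>();
        for (int i = 0; i < from.length(); i++) {
            if (!map.containsKey(from.charAt(i))) {       // first occurrence wins
                // null mapping means "delete this character"
                map.put(from.charAt(i), i < to.length() ? to.charAt(i) : null);
            }
        }
        StringBuilder out = new StringBuilder();
        for (char c : input.toCharArray()) {
            if (map.containsKey(c)) {
                Character r = map.get(c);
                if (r != null) out.append(r.charValue()); // mapped character
                // else: dropped (from longer than to)
            } else {
                out.append(c);                            // untouched character
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // a -> 1, b -> 2, c deleted, d unchanged
        System.out.println(translate("abcd", "abc", "12"));  // prints 12d
    }
}
```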

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3019) Add JUnit to list of test dependencies managed by Ivy

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547962#comment-13547962
 ] 

Hudson commented on HIVE-3019:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3019 [jira] Add JUnit to list of test dependencies managed by Ivy

Summary: HIVE-3019. Add JUnit to list of test dependencies managed by Ivy

Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D3171 (Revision 1351467)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1351467
Files : 
* /hive/trunk/cli/ivy.xml
* /hive/trunk/contrib/ivy.xml
* /hive/trunk/hbase-handler/ivy.xml
* /hive/trunk/hwi/ivy.xml
* /hive/trunk/jdbc/ivy.xml
* /hive/trunk/metastore/ivy.xml
* /hive/trunk/pdk/ivy.xml
* /hive/trunk/ql/ivy.xml
* /hive/trunk/serde/ivy.xml
* /hive/trunk/service/ivy.xml
* /hive/trunk/shims/ivy.xml


 Add JUnit to list of test dependencies managed by Ivy
 -

 Key: HIVE-3019
 URL: https://issues.apache.org/jira/browse/HIVE-3019
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-3019.D3171.1.patch, 
 HIVE-3019.D3171.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-1653) Ability to enforce correct stats

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547964#comment-13547964
 ] 

Hudson commented on HIVE-1653:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-1653. Ability to enforce correct stats. (njain via kevinwilfong) 
(Revision 1366103)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1366103
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/StatsTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/FileSinkDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/StatsWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/stats
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/stats/DummyStatsAggregator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/stats/DummyStatsPublisher.java
* /hive/trunk/ql/src/test/queries/clientnegative/stats_aggregator_error_1.q
* /hive/trunk/ql/src/test/queries/clientnegative/stats_aggregator_error_2.q
* /hive/trunk/ql/src/test/queries/clientnegative/stats_publisher_error_1.q
* /hive/trunk/ql/src/test/queries/clientnegative/stats_publisher_error_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/stats_aggregator_error_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/stats_publisher_error_1.q
* /hive/trunk/ql/src/test/results/clientnegative/stats_aggregator_error_1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/stats_aggregator_error_2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/stats_publisher_error_1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/stats_publisher_error_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/stats_aggregator_error_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/stats_publisher_error_1.q.out


 Ability to enforce correct stats
 

 Key: HIVE-1653
 URL: https://issues.apache.org/jira/browse/HIVE-1653
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0


 This is a follow-up for https://issues.apache.org/jira/browse/HIVE-1361.
 If one of the mappers/reducers cannot publish stats, it may lead to wrong 
 aggregated stats.
 There should be a way to avoid this - at the least, a configuration variable 
 which fails the task if stats cannot be published.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2589) Newly created partition should inherit properties from table

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547965#comment-13547965
 ] 

Hudson commented on HIVE-2589:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2589 : Newly created partition should inherit properties from table 
(Ashutosh Chauhan) (Revision 1234235)
HIVE-2719. Revert HIVE-2589 (He Yongqiang via cws) (Revision 1232766)
HIVE-2589: Newly created partition should inherit properties from table 
(Ashutosh Chauhan) (Revision 1230390)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1234235
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props.q
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_empty.q
* 
/hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_with_star.q
* /hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_empty.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_with_star.q.out

cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1232766
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props.q
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_empty.q
* 
/hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_with_star.q
* /hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_empty.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_with_star.q.out

hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1230390
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props.q
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_empty.q
* 
/hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_with_star.q
* /hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_empty.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_with_star.q.out


 Newly created partition should inherit properties from table
 

 Key: HIVE-2589
 URL: https://issues.apache.org/jira/browse/HIVE-2589
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.8.1, 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2589.D1335.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2589.D1335.2.patch, hive-2589_1.patch, 
 hive-2589_2.patch, hive-2589_3.patch, hive-2589_4.patch, 
 hive-2589_branch8.patch, hive-2589.patch, hive-2589.patch


 This will make all the info contained in table properties available to 
 partitions. 
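The inheritance being proposed can be sketched as a small helper: copy into the new partition those table properties whose keys appear in a comma-separated config list, with "*" meaning inherit everything (mirroring the part_inherit_tbl_props test names above). The helper is illustrative, not HiveMetaStore code:

```java
import java.util.HashMap;
import java.util.Map;

class InheritPropsSketch {
    // Copy selected table properties into a freshly created partition's
    // property map, driven by a comma-separated list of keys; "*" inherits
    // all properties, and an empty list inherits none.
    static Map<String, String> inherit(Map<String, String> tableProps,
                                       String keysToInherit) {
        Map<String, String> partProps = new HashMap<>();
        if (keysToInherit == null || keysToInherit.trim().isEmpty()) {
            return partProps;                  // nothing inherited
        }
        if (keysToInherit.trim().equals("*")) {
            partProps.putAll(tableProps);      // wildcard: inherit everything
            return partProps;
        }
        for (String key : keysToInherit.split(",")) {
            String k = key.trim();
            if (tableProps.containsKey(k)) {
                partProps.put(k, tableProps.get(k));
            }
        }
        return partProps;
    }

    public static void main(String[] args) {
        Map<String, String> tbl = new HashMap<>();
        tbl.put("creator", "etl");
        tbl.put("retention", "30d");
        System.out.println(inherit(tbl, "*").size());      // prints 2
        System.out.println(inherit(tbl, "retention"));     // prints {retention=30d}
    }
}
```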

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2021) Add a configuration property that sets the variable substitution max depth

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547966#comment-13547966
 ] 

Hudson commented on HIVE-2021:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2021 : Add a configuration property that sets the variable 
substitution max depth (Edward Capriolo via Ashutosh Chauhan) (Revision 1353808)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353808
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/VariableSubstitution.java


 Add a configuration property that sets the variable substitution max depth
 --

 Key: HIVE-2021
 URL: https://issues.apache.org/jira/browse/HIVE-2021
 Project: Hive
  Issue Type: Improvement
  Components: Configuration, Query Processor
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Edward Capriolo
  Labels: configuration-addition
 Fix For: 0.10.0

 Attachments: hive-2021.patch.txt.1


 The VariableSubstitution class contains a hardcoded MAX_SUBST=40 value which 
 defines the maximum number of variable references that are allowed to appear 
 in a single Hive statement. This value should be configurable via hiveconf.
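A depth-bounded substitution loop of the kind described can be sketched like this. It is illustrative only; Hive's VariableSubstitution differs in detail, and the `maxDepth` parameter stands in for the proposed configuration property:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class SubstSketch {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)}");

    // Expand ${name} references pass by pass, stopping when a pass changes
    // nothing, and failing once a configurable depth is exceeded instead of
    // a hardcoded MAX_SUBST = 40.
    static String substitute(String expr, Map<String, String> vars, int maxDepth) {
        String current = expr;
        for (int depth = 0; depth <= maxDepth; depth++) {
            Matcher m = VAR.matcher(current);
            StringBuffer sb = new StringBuffer();
            while (m.find()) {
                // Unknown variables are left as-is, e.g. ${unset}.
                String value = vars.getOrDefault(m.group(1), m.group(0));
                m.appendReplacement(sb, Matcher.quoteReplacement(value));
            }
            m.appendTail(sb);
            String next = sb.toString();
            if (next.equals(current)) {
                return current;            // fixed point reached, fully expanded
            }
            current = next;
        }
        throw new IllegalStateException("variable substitution depth exceeded " + maxDepth);
    }

    public static void main(String[] args) {
        Map<String, String> vars = new HashMap<>();
        vars.put("a", "${b}");
        vars.put("b", "x");
        System.out.println(substitute("pre-${a}", vars, 10));  // prints pre-x
    }
}
```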

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3168) LazyBinaryObjectInspector.getPrimitiveJavaObject copies beyond length of underlying BytesWritable

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547967#comment-13547967
 ] 

Hudson commented on HIVE-3168:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3168: LazyBinaryObjectInspector.getPrimitiveJavaObject copies beyond 
length of underlying BytesWritable (Thejas Nair via Ashutosh Chauhan) (Revision 
1360812)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1360812
Files : 
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyUtils.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyBinaryObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaBinaryObjectInspector.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/WritableBinaryObjectInspector.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/lazybinary/TestLazyBinarySerDe.java


 LazyBinaryObjectInspector.getPrimitiveJavaObject copies beyond length of 
 underlying BytesWritable
 -

 Key: HIVE-3168
 URL: https://issues.apache.org/jira/browse/HIVE-3168
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.10.0

 Attachments: HIVE-3168.1.patch, HIVE-3168.2.patch, HIVE-3168.3.patch


 LazyBinaryObjectInspector.getPrimitiveJavaObject copies the full capacity of 
 the LazyBinary's underlying BytesWritable object, which can be greater than 
 the size of the actual contents. 
 This leads to additional characters at the end of the returned ByteArrayRef. 
 When the LazyBinary object gets re-used, there can be remnants of the latter 
 portion of the previous entry. 
 This was not seen while reading through Hive queries, which I think is 
 because a copy elsewhere seems to create LazyBinary with length == capacity 
 (probably the LazyBinary copy constructor). It was seen when MapReduce or Pig 
 used HCatalog to read the data.
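The capacity-versus-length trap can be reproduced without Hadoop using a minimal stand-in for BytesWritable. GrowableBytes and both copy helpers below are illustrative, not Hive or Hadoop code:

```java
import java.util.Arrays;

class BytesSketch {
    // Minimal stand-in for Hadoop's BytesWritable: the backing array
    // ("capacity") can be larger than the valid length, and the buffer is
    // reused across records.
    static class GrowableBytes {
        private byte[] bytes = new byte[0];
        private int length = 0;

        void set(byte[] src) {
            if (src.length > bytes.length) {
                bytes = new byte[src.length];          // grow, never shrink
            }
            System.arraycopy(src, 0, bytes, 0, src.length);
            length = src.length;
        }
        byte[] getBytes() { return bytes; }            // may hold stale trailing bytes
        int getLength()   { return length; }
    }

    // Buggy copy, as described in the issue: copies the full capacity,
    // picking up remnants of a previous, longer record.
    static byte[] copyBuggy(GrowableBytes b) {
        return b.getBytes().clone();
    }

    // Fixed copy: honor getLength() so only the valid prefix is returned.
    static byte[] copyFixed(GrowableBytes b) {
        return Arrays.copyOf(b.getBytes(), b.getLength());
    }

    public static void main(String[] args) {
        GrowableBytes b = new GrowableBytes();
        b.set(new byte[]{1, 2, 3, 4});
        b.set(new byte[]{9});                 // reuse: capacity 4, length 1
        System.out.println(copyBuggy(b).length);  // prints 4 (stale bytes kept)
        System.out.println(copyFixed(b).length);  // prints 1
    }
}
```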

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3161) drop the temporary function at end of autogen_colalias.q

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547969#comment-13547969
 ] 

Hudson commented on HIVE-3161:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3161. A minor test update
(Namit Jain via Carl Steinbach)

Summary: The correct long term fix is HIVE-3160

Test Plan: manual

Differential Revision: https://reviews.facebook.net/D3723 (Revision 1352719)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1352719
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/autogen_colalias.q
* /hive/trunk/ql/src/test/results/clientpositive/autogen_colalias.q.out


 drop the temporary function at end of autogen_colalias.q
 

 Key: HIVE-3161
 URL: https://issues.apache.org/jira/browse/HIVE-3161
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0


 This should not be needed once HIVE-3160 is fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3022) Add hive.exec.rcfile.use.explicit.header to hive-default.xml.template

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547970#comment-13547970
 ] 

Hudson commented on HIVE-3022:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3022 Add hive.exec.rcfile.use.explicit.header to 
hive-default.xml.template
(Kevin Wilfong via namit) (Revision 1338311)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1338311
Files : 
* /hive/trunk/conf/hive-default.xml.template


 Add hive.exec.rcfile.use.explicit.header to hive-default.xml.template
 -

 Key: HIVE-3022
 URL: https://issues.apache.org/jira/browse/HIVE-3022
 Project: Hive
  Issue Type: Task
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Trivial
 Fix For: 0.10.0

 Attachments: HIVE-3022.1.patch.txt


 I forgot to do this as part of HIVE-3018

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2691) Specify location of log4j configuration files via configuration properties

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547972#comment-13547972
 ] 

Hudson commented on HIVE-2691:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2691 : Specify location of log4j configuration files via configuration 
properties (Zhenxiao Luo via Ashutosh Chauhan) (Revision 1418858)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418858
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/LogUtils.java
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/common/src/test/org/apache/hadoop/hive/conf/TestHiveConf.java
* /hive/trunk/common/src/test/org/apache/hadoop/hive/conf/TestHiveLogging.java
* /hive/trunk/common/src/test/resources/hive-exec-log4j-test.properties
* /hive/trunk/common/src/test/resources/hive-log4j-test.properties
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/history/TestHiveHistory.java


 Specify location of log4j configuration files via configuration properties
 --

 Key: HIVE-2691
 URL: https://issues.apache.org/jira/browse/HIVE-2691
 Project: Hive
  Issue Type: New Feature
  Components: Configuration, Logging
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
 Fix For: 0.11.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1131.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.4.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.5.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.6.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D2667.1.patch, HIVE-2691.1.patch.txt, 
 HIVE-2691.2.patch.txt, HIVE-2691.D2667.1.patch


 Oozie needs to be able to override the default location of the log4j 
 configuration
 files from the Hive command line, e.g:
 {noformat}
 hive -hiveconf hive.log4j.file=/home/carl/hive-log4j.properties -hiveconf 
 hive.log4j.exec.file=/home/carl/hive-exec-log4j.properties
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-1719) Move RegexSerDe out of hive-contrib and over to hive-serde

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547971#comment-13547971
 ] 

Hudson commented on HIVE-1719:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-1719 [jira] Move RegexSerDe out of hive-contrib and over to hive-serde
(Shreepadma Venugopalan via Carl Steinbach)

Summary:
Regex Serde Changes

RegexSerDe is as much a part of the standard Hive distribution as the other 
SerDes
currently in hive-serde. I think we should move it over to the hive-serde 
module so that
users don't have to go to the added effort of manually registering the contrib 
jar before
using it.

Test Plan: EMPTY

Reviewers: JIRA, cwsteinbach

Reviewed By: cwsteinbach

Differential Revision: https://reviews.facebook.net/D3249 (Revision 1340256)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1340256
Files : 
* /hive/trunk/contrib/src/test/queries/clientnegative/serde_regex.q
* /hive/trunk/contrib/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/contrib/src/test/results/clientnegative/serde_regex.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_regex.q.out
* /hive/trunk/ql/src/test/queries/clientnegative/serde_regex.q
* /hive/trunk/ql/src/test/queries/clientnegative/serde_regex2.q
* /hive/trunk/ql/src/test/queries/clientnegative/serde_regex3.q
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/results/clientnegative/serde_regex.q.out
* /hive/trunk/ql/src/test/results/clientnegative/serde_regex2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/serde_regex3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/serde_regex.q.out
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java


 Move RegexSerDe out of hive-contrib and over to hive-serde
 --

 Key: HIVE-1719
 URL: https://issues.apache.org/jira/browse/HIVE-1719
 Project: Hive
  Issue Type: Task
  Components: Serializers/Deserializers
Reporter: Carl Steinbach
Assignee: Shreepadma Venugopalan
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-1719.D3051.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-1719.D3051.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-1719.D3141.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-1719.D3249.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-1719.D3249.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-1719.D3249.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-1719.D3249.4.patch, HIVE-1719.3.patch, 
 HIVE-1719.D3249.1.patch


 RegexSerDe is as much a part of the standard Hive distribution as the other 
 SerDes
 currently in hive-serde. I think we should move it over to the hive-serde 
 module so that
 users don't have to go to the added effort of manually registering the 
 contrib jar before
 using it.
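Conceptually, RegexSerDe matches each input line against a configured regex and turns each capture group into one string column. The sketch below illustrates that mapping with a hypothetical helper; it is not the hive-serde source, and here a non-matching line simply yields null:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class RegexRowSketch {
    // Deserialize one text line into string columns: capture group i of the
    // row pattern becomes column i. Lines that don't match the pattern
    // return null.
    static String[] deserialize(Pattern rowPattern, String line) {
        Matcher m = rowPattern.matcher(line);
        if (!m.matches()) {
            return null;
        }
        String[] columns = new String[m.groupCount()];
        for (int i = 0; i < columns.length; i++) {
            columns[i] = m.group(i + 1);   // group 0 is the whole line
        }
        return columns;
    }

    public static void main(String[] args) {
        // e.g. a simple "host - status" access-log style line
        Pattern p = Pattern.compile("(\\S+) - (\\d+)");
        String[] cols = deserialize(p, "example.org - 200");
        System.out.println(cols[0] + " | " + cols[1]);  // prints example.org | 200
    }
}
```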

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2690) a bug in 'alter table concatenate' that causes filenames getting double url encoded

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547973#comment-13547973
 ] 

Hudson commented on HIVE-2690:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2690 a bug in 'alter table concatenate' that causes filenames getting
double url encoded (He Yongqiang via namit) (Revision 1227151)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1227151
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/alter_merge_2.q
* /hive/trunk/ql/src/test/results/clientpositive/alter_merge_2.q.out


 a bug in 'alter table concatenate' that causes filenames getting double url 
 encoded
 ---

 Key: HIVE-2690
 URL: https://issues.apache.org/jira/browse/HIVE-2690
 Project: Hive
  Issue Type: Bug
Reporter: He Yongqiang
Assignee: He Yongqiang
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2690.D1095.1.patch, 
 HIVE-2690.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2694) Add FORMAT UDF

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547976#comment-13547976
 ] 

Hudson commented on HIVE-2694:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2694. Add FORMAT UDF (Zhenxiao Luo via cws) (Revision 1348976)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348976
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFormatNumber.java
* /hive/trunk/ql/src/test/queries/clientnegative/udf_format_number_wrong1.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_format_number_wrong2.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_format_number_wrong3.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_format_number_wrong4.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_format_number_wrong5.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_format_number_wrong6.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_format_number_wrong7.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_format_number.q
* /hive/trunk/ql/src/test/results/clientnegative/udf_format_number_wrong1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_format_number_wrong2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_format_number_wrong3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_format_number_wrong4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_format_number_wrong5.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_format_number_wrong6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_format_number_wrong7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/show_functions.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_format_number.q.out


 Add FORMAT UDF
 --

 Key: HIVE-2694
 URL: https://issues.apache.org/jira/browse/HIVE-2694
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2694.D1149.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2694.D1149.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2694.D1149.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2694.D2673.1.patch, HIVE-2694.1.patch.txt, 
 HIVE-2694.2.patch.txt, HIVE-2694.3.patch.txt, HIVE-2694.4.patch.txt, 
 HIVE-2694.D2673.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-1977) DESCRIBE TABLE syntax doesn't support specifying a database qualified table name

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547977#comment-13547977
 ] 

Hudson commented on HIVE-1977:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-1977. DESCRIBE TABLE syntax doesn't support specifying a database 
qualified table name (Zhenxiao Luo via cws) (Revision 1406338)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406338
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/DescTableDesc.java
* /hive/trunk/ql/src/test/queries/clientnegative/desc_failure3.q
* /hive/trunk/ql/src/test/queries/clientpositive/describe_syntax.q
* /hive/trunk/ql/src/test/results/clientnegative/desc_failure3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/describe_syntax.q.out


 DESCRIBE TABLE syntax doesn't support specifying a database qualified table 
 name
 

 Key: HIVE-1977
 URL: https://issues.apache.org/jira/browse/HIVE-1977
 Project: Hive
  Issue Type: Bug
  Components: Database/Schema, Query Processor, SQL
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-1977.1.patch.txt, HIVE-1977.2.patch.txt, 
 HIVE-1977.3.patch.txt, HIVE-1977.4.patch.txt, HIVE-1977.5.patch.txt, 
 HIVE-1977.6.patch.txt


 The syntax for DESCRIBE is broken. It should be:
 {code}
 DESCRIBE [EXTENDED] [database DOT]table [column]
 {code}
 but is actually
 {code}
 DESCRIBE [EXTENDED] table[DOT col_name]
 {code}
 Ref: http://dev.mysql.com/doc/refman/5.0/en/describe.html
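 The ambiguity the corrected grammar has to handle — whether {{a.b}} means database.table or table.column — can be sketched with a toy resolver. This is an illustration only; the function, its signature, and the default-database fallback are assumptions for the example, not Hive's DDLSemanticAnalyzer logic:

```python
def resolve_describe_target(name, known_tables, default_db="default"):
    """Toy resolver for DESCRIBE [db DOT]table [column].

    A dotted name like 'a.b' is ambiguous between database.table and
    table.column, so a known-tables lookup disambiguates. known_tables
    is a set of (db, table) pairs; all names here are illustrative.
    """
    parts = name.split(".")
    if len(parts) == 1:
        return (default_db, parts[0], None)
    first, second = parts[0], parts[1]
    if (first, second) in known_tables:
        # 'a.b' names a real table in database 'a' -> database.table
        db, table = first, second
        column = parts[2] if len(parts) > 2 else None
    else:
        # otherwise treat it as table.column in the default database
        db, table = default_db, first
        column = second
    return (db, table, column)
```

Under this sketch, {{sales.orders}} resolves to a qualified table when {{("sales", "orders")}} is known, while {{orders.amount}} falls back to table.column in the default database.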



[jira] [Commented] (HIVE-3160) All temporary functions should be dropped at the end of each .q file

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547974#comment-13547974
 ] 

Hudson commented on HIVE-3160:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3161. A minor test update
(Namit Jain via Carl Steinbach)

Summary: The correct long term fix is HIVE-3160

Test Plan: manual

Differential Revision: https://reviews.facebook.net/D3723 (Revision 1352719)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1352719
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/autogen_colalias.q
* /hive/trunk/ql/src/test/results/clientpositive/autogen_colalias.q.out


 All temporary functions should be dropped at the end of each .q file
 

 Key: HIVE-3160
 URL: https://issues.apache.org/jira/browse/HIVE-3160
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: Namit Jain
Assignee: Srinivas Vemuri

 Currently, only the tables and databases are dropped.
 A test like show_functions.q can get intermittent errors.



[jira] [Commented] (HIVE-2372) java.io.IOException: error=7, Argument list too long

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547980#comment-13547980
 ] 

Hudson commented on HIVE-2372:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2372 Argument list too long when streaming (Sergey Tryuber via egc) 
(Revision 1342841)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342841
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestOperators.java


 java.io.IOException: error=7, Argument list too long
 

 Key: HIVE-2372
 URL: https://issues.apache.org/jira/browse/HIVE-2372
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Sergey Tryuber
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-2372.1.patch.txt, HIVE-2372.2.patch.txt


 I execute a huge query on a table with a lot of 2-level partitions. There is 
 a perl reducer in my query. Maps worked ok, but every reducer fails with the 
 following exception:
 2011-08-11 04:58:29,865 INFO org.apache.hadoop.hive.ql.exec.ScriptOperator: 
 Executing [/usr/bin/perl, reducer.pl, my_argument]
 2011-08-11 04:58:29,866 INFO org.apache.hadoop.hive.ql.exec.ScriptOperator: 
 tablename=null
 2011-08-11 04:58:29,866 INFO org.apache.hadoop.hive.ql.exec.ScriptOperator: 
 partname=null
 2011-08-11 04:58:29,866 INFO org.apache.hadoop.hive.ql.exec.ScriptOperator: 
 alias=null
 2011-08-11 04:58:29,935 FATAL ExecReducer: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
 processing row (tag=0) 
 {key:{reducesinkkey0:129390185139228,reducesinkkey1:8AF163CA6F},value:{_col0:8AF163CA6F,_col1:2011-07-27
  
 22:48:52,_col2:129390185139228,_col3:2006,_col4:4100,_col5:10017388=6,_col6:1063,_col7:NULL,_col8:address.com,_col9:NULL,_col10:NULL},alias:0}
   at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:256)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:468)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:416)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
   at org.apache.hadoop.mapred.Child.main(Child.java:262)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Cannot 
 initialize ScriptOperator
   at 
 org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:320)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
   at 
 org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
   ... 7 more
 Caused by: java.io.IOException: Cannot run program /usr/bin/perl: 
 java.io.IOException: error=7, Argument list too long
   at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
   at 
 org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:279)
   ... 15 more
 Caused by: java.io.IOException: java.io.IOException: error=7, Argument list 
 too long
   at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
   at java.lang.ProcessImpl.start(ProcessImpl.java:65)
   at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
   ... 16 more
 It seems to me that I found the cause. ScriptOperator.java puts a lot of 
 configs as environment variables into the child reduce process. One of the 
 variables is mapred.input.dir, which in my case is more than 150KB, because 
 there is a huge number of input directories in it. In short, the problem is 
 that Linux (up to kernel version 2.6.23) limits the total size of a child 
 process's environment variables to 132KB. That could be solved by upgrading 
 the kernel, but each string in the environment is still limited to 132KB, so 
 such a huge variable doesn't work even on my home computer (2.6.32). You can 
 read 
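
 The mitigation this points toward — keeping each variable handed to the child process under the kernel's per-string limit — can be sketched as follows. This is a minimal illustration; the 128KB threshold and the helper name are assumptions for the example, not Hive's actual ScriptOperator configuration:

```python
def sanitize_env(env, limit=128 * 1024):
    """Truncate oversized environment values before spawning a child
    script, so the exec call does not fail with E2BIG (error=7,
    'Argument list too long'). Truncation vs. dropping the variable
    entirely is a policy choice; this sketch truncates."""
    safe = {}
    for key, value in env.items():
        safe[key] = value[:limit] if len(value) > limit else value
    return safe
```

 A caller would run {{sanitize_env}} over the job configuration map right before building the child's environment, leaving normal-sized variables untouched.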

[jira] [Commented] (HIVE-2246) Dedupe tables' column schemas from partitions in the metastore db

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547981#comment-13547981
 ] 

Hudson commented on HIVE-2246:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3424. Error by upgrading a Hive 0.7.0 database to 0.8.0 
(008-HIVE-2246.mysql.sql) (Alexander Alten-Lorenz via cws) (Revision 1380483)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380483
Files : 
* /hive/trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql


 Dedupe tables' column schemas from partitions in the metastore db
 -

 Key: HIVE-2246
 URL: https://issues.apache.org/jira/browse/HIVE-2246
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sohan Jain
Assignee: Sohan Jain
 Fix For: 0.8.0

 Attachments: HIVE-2246.2.patch, HIVE-2246.3.patch, HIVE-2246.4.patch, 
 HIVE-2246.8.patch


 Note: this patch proposes a schema change, and is therefore incompatible with 
 the current metastore.
 We can re-organize the JDO models to reduce space usage to keep the metastore 
 scalable for the future.  Currently, partitions are the fastest growing 
 objects in the metastore, and the metastore keeps a separate copy of the 
 columns list for each partition.  We can normalize the metastore db by 
 decoupling Columns from Storage Descriptors and not storing duplicate lists 
 of the columns for each partition. 
 An idea is to create an additional level of indirection with a Column 
 Descriptor that has a list of columns.  A table has a reference to its 
 latest Column Descriptor (note: a table may have more than one Column 
 Descriptor in the case of schema evolution).  Partitions and Indexes can 
 reference the same Column Descriptors as their parent table.
 Currently, the COLUMNS table in the metastore has roughly (number of 
 partitions + number of tables) * (average number of columns per table) rows.  
 We can reduce this to (number of tables) * (average number of columns per 
 table) rows, while incurring a small cost proportional to the number of 
 tables to store the Column Descriptors.
 Please see the latest review board for additional implementation details.
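
 The row-count arithmetic can be made concrete with illustrative numbers. These figures are assumptions for the example, not taken from the issue or from any real metastore:

```python
# Illustrative metastore sizes (assumptions for this example).
tables = 1_000
partitions_per_table = 500
cols_per_table = 50

# Before: each table AND each partition stores its own copy of the
# column list in the COLUMNS table.
rows_before = (tables * partitions_per_table + tables) * cols_per_table

# After: partitions reference their table's Column Descriptor, so only
# one column list per table remains, plus one descriptor row per table.
rows_after = tables * cols_per_table + tables

print(rows_before)  # 25050000
print(rows_after)   # 51000
```

 Even with these modest numbers, the dedup shrinks the column rows by roughly a factor of the average partition count per table.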



[jira] [Commented] (HIVE-2206) add a new optimizer for query correlation discovery and optimization

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547982#comment-13547982
 ] 

Hudson commented on HIVE-2206:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2206:add a new optimizer for query correlation discovery and 
optimization (Yin Huai via He Yongqiang) (Revision 1392105)

 Result = ABORTED
heyongqiang : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392105
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/ql/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/OperatorType.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/BaseReduceSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CorrelationCompositeOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CorrelationLocalSimulativeReduceSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CorrelationReducerDispatchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecReducer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/CorrelationOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/CorrelationOptimizerUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/BaseReduceSinkDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CorrelationCompositeDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CorrelationLocalSimulativeReduceSinkDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CorrelationReducerDispatchDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ReduceSinkDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestExecDriver.java
* /hive/trunk/ql/src/test/queries/clientpositive/correlationoptimizer1.q
* /hive/trunk/ql/src/test/queries/clientpositive/correlationoptimizer2.q
* /hive/trunk/ql/src/test/queries/clientpositive/correlationoptimizer3.q
* /hive/trunk/ql/src/test/queries/clientpositive/correlationoptimizer4.q
* /hive/trunk/ql/src/test/queries/clientpositive/correlationoptimizer5.q
* /hive/trunk/ql/src/test/results/clientpositive/correlationoptimizer1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/correlationoptimizer2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/correlationoptimizer3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/correlationoptimizer4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/correlationoptimizer5.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml


 add a new optimizer for query correlation discovery and optimization
 

 Key: HIVE-2206
 URL: https://issues.apache.org/jira/browse/HIVE-2206
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: He Yongqiang
Assignee: Yin Huai
 Attachments: HIVE-2206.10-r1384442.patch.txt, 
 HIVE-2206.11-r1385084.patch.txt, HIVE-2206.12-r1386996.patch.txt, 
 HIVE-2206.13-r1389072.patch.txt, HIVE-2206.14-r1389704.patch.txt, 
 HIVE-2206.15-r1392491.patch.txt, HIVE-2206.16-r1399936.patch.txt, 
 HIVE-2206.17-r1404933.patch.txt, HIVE-2206.18-r1407720.patch.txt, 
 HIVE-2206.19-r1410581.patch.txt, HIVE-2206.1.patch.txt, 
 HIVE-2206.2.patch.txt, HIVE-2206.3.patch.txt, HIVE-2206.4.patch.txt, 
 HIVE-2206.5-1.patch.txt, HIVE-2206.5.patch.txt, HIVE-2206.6.patch.txt, 
 HIVE-2206.7.patch.txt, HIVE-2206.8.r1224646.patch.txt, 
 HIVE-2206.8-r1237253.patch.txt, testQueries.2.q, YSmartPatchForHive.patch


 This issue proposes a new logical optimizer called Correlation 

[jira] [Commented] (HIVE-2203) Extend concat_ws() UDF to support arrays of strings

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547983#comment-13547983
 ] 

Hudson commented on HIVE-2203:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2203. Extend concat_ws() UDF to support arrays of strings (Zhenxiao 
Luo via cws) (Revision 1234150)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1234150
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcatWS.java
* /hive/trunk/ql/src/test/queries/clientnegative/udf_concat_ws_wrong1.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_concat_ws_wrong2.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_concat_ws_wrong3.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_concat_ws.q
* /hive/trunk/ql/src/test/results/clientnegative/udf_concat_ws_wrong1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_concat_ws_wrong2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_concat_ws_wrong3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_concat_ws.q.out


 Extend concat_ws() UDF to support arrays of strings
 ---

 Key: HIVE-2203
 URL: https://issues.apache.org/jira/browse/HIVE-2203
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
Priority: Minor
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2203.D1065.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2203.D1071.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2203.D1113.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2203.D1119.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2203.D1137.1.patch, HIVE-2203.D1137.1.patch


 concat_ws() should support the following type of input parameters:
 concat_ws(string separator, array<string>)
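
 The extended semantics can be approximated in a few lines. This is a sketch of the behavior described, not the GenericUDFConcatWS implementation, and the NULL-skipping shown here is a simplifying assumption:

```python
def concat_ws(separator, *args):
    """Approximate concat_ws(sep, ...) over strings and arrays of
    strings: arrays are flattened and None (NULL) elements skipped."""
    parts = []
    for arg in args:
        if arg is None:
            continue
        if isinstance(arg, list):
            parts.extend(item for item in arg if item is not None)
        else:
            parts.append(arg)
    return separator.join(parts)
```

 For example, {{concat_ws(",", ["a", "b", "c"])}} yields {{"a,b,c"}}, and mixed string/array arguments join into one separated string.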



[jira] [Commented] (HIVE-1367) cluster by multiple columns does not work if parenthesis is present

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547984#comment-13547984
 ] 

Hudson commented on HIVE-1367:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-1367 cluster by multiple columns does not work if parenthesis is 
present
(Zhenxiao Luo via namit) (Revision 1392091)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1392091
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/test/queries/clientpositive/parenthesis_star_by.q
* /hive/trunk/ql/src/test/results/clientpositive/parenthesis_star_by.q.out


 cluster by multiple columns does not work if parenthesis is present
 ---

 Key: HIVE-1367
 URL: https://issues.apache.org/jira/browse/HIVE-1367
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-1367.1.patch.txt


 The following query:
 select ...  from src cluster by (key, value)
 throws a compile error:
 whereas the query
 select ...  from src cluster by key, value
 works fine



[jira] [Commented] (HIVE-1362) Column level scalar valued statistics on Tables and Partitions

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547985#comment-13547985
 ] 

Hudson commented on HIVE-1362:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3686. Fix compile errors introduced by the interaction of HIVE-1362 
and HIVE-3524. (Shreepadma Venugopalan via kevinwilfong) (Revision 1406783)
HIVE-1362. Column level scalar valued statistics on Tables and Partitions 
(Shreepadma Venugopalan via cws) (Revision 1406465)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406783
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java

cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1406465
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/data/files/UserVisits.dat
* /hive/trunk/data/files/binary.txt
* /hive/trunk/data/files/bool.txt
* /hive/trunk/data/files/double.txt
* /hive/trunk/data/files/employee.dat
* /hive/trunk/data/files/employee2.dat
* /hive/trunk/data/files/employee_part.txt
* /hive/trunk/data/files/int.txt
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore_server.skeleton.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/BinaryColumnStatsData.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/BooleanColumnStatsData.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatistics.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatisticsData.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatisticsDesc.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ColumnStatisticsObj.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/DoubleColumnStatsData.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/EnvironmentContext.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/InvalidInputException.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/LongColumnStatsData.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Schema.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/StringColumnStatsData.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
* 
/hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore/hive_metastore_types.php
* 
/hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore-remote
* 
/hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb
* /hive/trunk/metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java
* 
/hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MPartitionColumnStatistics.java
* 
/hive/trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MTableColumnStatistics.java
* /hive/trunk/metastore/src/model/package.jdo
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/if/queryplan.thrift
* /hive/trunk/ql/ivy.xml
* /hive/trunk/ql/src/gen/thrift/gen-cpp/queryplan_types.cpp
* /hive/trunk/ql/src/gen/thrift/gen-cpp/queryplan_types.h
* 

[jira] [Commented] (HIVE-2498) Group by operator does not estimate size of Timestamp Binary data correctly

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547986#comment-13547986
 ] 

Hudson commented on HIVE-2498:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2498 Group by operator does not estimate size of Timestamp & Binary 
data correctly. Ashutosh Chauhan (via egc) (Revision 1357839)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1357839
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java


 Group by operator does not estimate size of Timestamp & Binary data correctly
 -

 Key: HIVE-2498
 URL: https://issues.apache.org/jira/browse/HIVE-2498
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1, 0.9.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2498.D1185.1.patch, 
 hive-2498_1.patch, hive-2498_2.patch, hive-2498.patch


 It currently falls through to the default case and returns a constant value, 
 whereas we can do better by getting the actual size at runtime.
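
 The proposed behavior can be sketched as follows. The per-type byte counts here are illustrative assumptions, not Hive's actual GroupByOperator accounting:

```python
import datetime

def estimated_key_size(value):
    """Per-type size estimation for hash aggregation: measure
    variable-length binary values at runtime, use a fixed width for
    timestamps, and fall back to a constant only for unknown types."""
    if isinstance(value, (bytes, bytearray)):
        return 16 + len(value)   # object overhead + actual payload bytes
    if isinstance(value, datetime.datetime):
        return 32                # fixed-width timestamp estimate
    return 64                    # conservative default for other types
```

 Measuring the binary payload at runtime keeps the in-memory hash table's size estimate from drifting far off for large byte arrays.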



[jira] [Commented] (HIVE-2490) Add reset operation and average time attribute to Metrics MBean.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547987#comment-13547987
 ] 

Hudson commented on HIVE-2490:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2490: Add reset operation and average time attribute to Metrics MBean. 
(kevinwilfong via hashutosh) (Revision 1293352)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1293352
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/metrics/Metrics.java
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/metrics/MetricsMBeanImpl.java


 Add reset operation and average time attribute to Metrics MBean.
 

 Key: HIVE-2490
 URL: https://issues.apache.org/jira/browse/HIVE-2490
 Project: Hive
  Issue Type: New Feature
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: HIVE-2490.1.patch.txt


 We should add a reset operation to the Metrics MBean, which will set all the 
 counters to 0.
 Note: Deleting the counters from the map of attributes was not suggested 
 because that could break any scripts that get the list of attributes from the 
 bean and then the values of each attribute.  Also, 0 is unlikely to be an 
 actual value for any counter, and it will not break the increment 
 functionality. 
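
 The design point above — zero the counters rather than delete them, so scripts that list attributes and then read each value keep working — can be sketched as follows. The class and method names are illustrative, not the MetricsMBeanImpl API:

```python
class MetricsBean:
    """Sketch of a metrics holder whose reset zeroes counters in place,
    preserving the attribute list for downstream monitoring scripts."""

    def __init__(self):
        self._attrs = {}

    def increment(self, name, delta=1):
        self._attrs[name] = self._attrs.get(name, 0) + delta

    def get(self, name):
        return self._attrs.get(name)

    def attribute_names(self):
        return sorted(self._attrs)

    def reset(self):
        for name in self._attrs:
            self._attrs[name] = 0   # zero the value, keep the key
```

 After {{reset()}}, {{attribute_names()}} returns the same list as before, so anything enumerating the bean's attributes is unaffected.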



[jira] [Commented] (HIVE-3520) ivysettings.xml does not let you override .m2/repository

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547989#comment-13547989
 ] 

Hudson commented on HIVE-3520:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3520 : ivysettings.xml does not let you override .m2/repository (Raja 
Aluri via Ashutosh Chauhan) (Revision 1410143)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410143
Files : 
* /hive/trunk/ivy/ivysettings.xml


 ivysettings.xml does not let you override .m2/repository
 

 Key: HIVE-3520
 URL: https://issues.apache.org/jira/browse/HIVE-3520
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Giridharan Kesavan
Assignee: Raja Aluri
 Fix For: 0.10.0

 Attachments: HIVE-3520.patch


 ivysettings.xml does not let you override .m2/repository. In other words, the 
 repo.dir Ivy setting should be an overridable property.



[jira] [Commented] (HIVE-3522) Make separator for Entity name configurable

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547988#comment-13547988
 ] 

Hudson commented on HIVE-3522:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3522. Make separator for Entity name configurable. (Raghotham Murthy 
via kevinwilfong) (Revision 1395668)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1395668
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/hooks/Entity.java


 Make separator for Entity name configurable
 ---

 Key: HIVE-3522
 URL: https://issues.apache.org/jira/browse/HIVE-3522
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Raghotham Murthy
Assignee: Raghotham Murthy
Priority: Trivial
 Fix For: 0.10.0

 Attachments: hive-3522.1.patch, hive-3522.2.patch, hive-3522.3.patch


 Right now it's hard-coded to '@'



[jira] [Commented] (HIVE-3319) Fix the “TestHiveHistory”, “TestHiveConf”, “TestExecDriver” unit tests on Windows by fixing the path related issues.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547990#comment-13547990
 ] 

Hudson commented on HIVE-3319:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3319 : Fix the TestHiveHistory, TestHiveConf, TestExecDriver unit 
tests on Windows by fixing the path related issues. (Kanna Karanam via Ashutosh 
Chauhan) (Revision 1368330)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368330
Files : 
* /hive/trunk/common/src/test/org/apache/hadoop/hive/conf/TestHiveConf.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestExecDriver.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/history/TestHiveHistory.java


 Fix the “TestHiveHistory”, “TestHiveConf”,  “TestExecDriver” unit tests on 
 Windows by fixing the path related issues.
 --

 Key: HIVE-3319
 URL: https://issues.apache.org/jira/browse/HIVE-3319
 Project: Hive
  Issue Type: Sub-task
  Components: Tests, Windows
Affects Versions: 0.10.0, 0.9.1
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: HIVE-3319.1.patch.txt


 TestHiveConf: the location of the “hive-site.xml” file on Unix differs from 
 Windows, so the paths need to be compared after generating an OS-independent 
 URI path.
 TestHiveHistory: generate the testFileDirectory from a URI object instead of 
 constructing it manually, so it works on Unix as well as on Windows.
 TestExecDriver: 
 1) Generate the testFileDirectory from a URI object instead of constructing 
 it manually, so it works on Unix as well as on Windows.
 2) Change the absolute path of the “Cat” utility to a relative path so it 
 works on Windows with the CygWin tools in the Windows PATH.
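
 The URI-based approach described above can be sketched with stdlib classes 
 only; this is an illustration, not the actual test code, and the directory 
 name is hypothetical:

```java
import java.io.File;
import java.net.URI;

public class UriPathDemo {
    // Build a local directory from a URI instead of concatenating
    // "/"-separated strings, so the same code yields a valid path on
    // both Unix and Windows.
    static File toLocalDir(URI base, String child) {
        return new File(base.resolve(child));
    }

    public static void main(String[] args) {
        // A directory URI (File.toURI() appends a trailing "/" for dirs).
        URI base = new File(System.getProperty("java.io.tmpdir")).toURI();
        File testDir = toLocalDir(base, "hivetest/");
        System.out.println(testDir.isAbsolute());
    }
}
```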

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3317) Fix “TestDosToUnix” unit tests on Windows by closing the leaking file handle in DosToUnix.java.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547991#comment-13547991
 ] 

Hudson commented on HIVE-3317:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3317 Fix “TestDosToUnix” unit tests on Windows by closing the leaking 
file handle in DosToUnix.java. (Kanna Karanam via Ashutosh Chauhan) (Revision 
1367838)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367838
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/util/DosToUnix.java


 Fix “TestDosToUnix” unit tests on Windows by closing the leaking file handle 
 in DosToUnix.java.
 ---

 Key: HIVE-3317
 URL: https://issues.apache.org/jira/browse/HIVE-3317
 Project: Hive
  Issue Type: Sub-task
  Components: Import/Export, Windows
Affects Versions: 0.10.0, 0.9.1
 Environment: Windows
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: HIVE-3317.1.patch.txt, HIVE-3317.2.patch.txt


 Windows cannot delete files while open file handles remain on them, so the 
 handles must be closed properly after validation completes in the DosToUnix 
 utilities.
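
 The fix amounts to closing the stream once validation is done; a minimal 
 self-contained sketch using try-with-resources (the method name and logic 
 are illustrative, not the actual DosToUnix code):

```java
import java.io.*;
import java.nio.file.*;

public class CloseHandleDemo {
    // Windows cannot delete a file while a stream still holds it open,
    // so the stream must be closed in a finally block or, as here, via
    // try-with-resources.
    static boolean hasDosLineEndings(File f) throws IOException {
        try (InputStream in = new FileInputStream(f)) {
            int c;
            while ((c = in.read()) != -1) {
                if (c == '\r') return true; // found a CR: DOS line ending
            }
            return false;
        } // stream closed here, so File.delete() can succeed on Windows
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("dos2unix", ".sh");
        Files.write(p, "echo hi\r\n".getBytes());
        System.out.println(hasDosLineEndings(p.toFile())); // true
        System.out.println(p.toFile().delete());           // true
    }
}
```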

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3314) Extract global limit configuration to optimizer

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547992#comment-13547992
 ] 

Hudson commented on HIVE-3314:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3314 Extract global limit configuration to optimizer
(Navis via namit) (Revision 1367405)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367405
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GlobalLimitOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/GlobalLimitCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java


 Extract global limit configuration to optimizer
 ---

 Key: HIVE-3314
 URL: https://issues.apache.org/jira/browse/HIVE-3314
 Project: Hive
  Issue Type: Task
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.10.0

 Attachments: HIVE-3314.1.patch.txt


 SemanticAnalyzer is growing bigger and bigger. If some code can be separated 
 cleanly, it would be better to do that for simplicity.
 This was part of HIVE-2925; separating it into its own issue was suggested at 
 https://issues.apache.org/jira/browse/HIVE-2925?focusedCommentId=13423754&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13423754

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3315) Propagates filters which are on the join condition transitively

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547993#comment-13547993
 ] 

Hudson commented on HIVE-3315:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3315 Propagates filters which are on the join condition transitively
(Navis via namit) (Revision 1391108)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391108
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/PredicateTransitivePropagate.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_nullsafe.q
* /hive/trunk/ql/src/test/results/clientpositive/auto_join11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join16.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join27.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join28.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join29.q.out
* /hive/trunk/ql/src/test/results/clientpositive/filter_join_breaktask.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join16.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join40.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join_nullsafe.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/mapjoin_filter_on_outerjoin.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ppd_gby_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ppd_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ppd_join2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ppd_join3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/regex_col.q.out
* /hive/trunk/ql/src/test/results/clientpositive/skewjoin.q.out


 Propagates filters which are on the join condition transitively 
 

 Key: HIVE-3315
 URL: https://issues.apache.org/jira/browse/HIVE-3315
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3315.1.patch.txt, HIVE-3315.2.patch.txt, 
 HIVE-3315.3.patch.txt, HIVE-3315.D4497.5.patch, HIVE-3315.D4497.6.patch, 
 HIVE-3315.D4497.7.patch


 explain select src1.key from src src1 join src src2 on src1.key=src2.key and 
 src1.key > 100;
 In this case, the filter on the join condition, src1.key > 100, can be 
 propagated transitively to src2 as src2.key > 100. 
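
 The rewrite can be modeled as substituting column names through the join 
 equality; a toy Java sketch of the idea (the data structures and names here 
 are hypothetical, not the optimizer's):

```java
import java.util.*;

public class TransitiveFilterDemo {
    // Given an equi-join condition a=b and a filter on a, derive the
    // same filter on b. This models only the substitution step of the
    // transitive-propagation rewrite.
    static List<String> propagate(Map<String, String> joinEq,
                                  Map<String, String> filters) {
        List<String> derived = new ArrayList<>();
        for (Map.Entry<String, String> f : filters.entrySet()) {
            String other = joinEq.get(f.getKey());
            if (other != null) {
                derived.add(other + " " + f.getValue()); // rewrite onto the other side
            }
        }
        return derived;
    }

    public static void main(String[] args) {
        Map<String, String> eq = Map.of("src1.key", "src2.key");
        Map<String, String> filters = Map.of("src1.key", "> 100");
        System.out.println(propagate(eq, filters)); // [src2.key > 100]
    }
}
```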

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3519) partition to directory comparison in CombineHiveInputFormat needs to accept partitions dir without scheme

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547994#comment-13547994
 ] 

Hudson commented on HIVE-3519:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3519 : partition to directory comparison in CombineHiveInputFormat 
needs to accept partitions dir without scheme (Thejas Nair via Ashutosh 
Chauhan) (Revision 1401935)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1401935
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveFileFormatUtils.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/TestSymlinkTextInputFormat.java


 partition to directory comparison in CombineHiveInputFormat needs to accept 
 partitions dir without scheme
 -

 Key: HIVE-3519
 URL: https://issues.apache.org/jira/browse/HIVE-3519
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.10.0

 Attachments: HIVE-3519.1.patch, HIVE-3519.2.patch


 TestSymlinkTextInputFormat.testCombine throws following exception. The test 
 case is just printing out the stacktrace when that happens instead of failing.
 {code}
 java.io.IOException: cannot find dir = 
 file:/Users/thejas/hive-trunk/ql/TestSymlinkTextInputFormat/datadir1/combinefile1_1
  in pathToPartitionInfo: 
 [/Users/thejas/hive-trunk/ql/TestSymlinkTextInputFormat/datadir2/combinefile2_1,
  
 /Users/thejas/hive-trunk/ql/TestSymlinkTextInputFormat/datadir1/combinefile1_1]
 at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:288)
 at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getPartitionDescFromPathRecursively(HiveFileFormatUtils.java:256)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:289)
 at 
 org.apache.hadoop.hive.ql.io.TestSymlinkTextInputFormat.testCombine(TestSymlinkTextInputFormat.java:186)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at junit.framework.TestCase.runTest(TestCase.java:168)
 at junit.framework.TestCase.runBare(TestCase.java:134)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at junit.framework.TestSuite.runTest(TestSuite.java:232)
 at junit.framework.TestSuite.run(TestSuite.java:227)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
 at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3518) QTestUtil side-effects

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547995#comment-13547995
 ] 

Hudson commented on HIVE-3518:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3518 QTestUtil side-effects
(Navis via namit) (Revision 1397843)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397843
Files : 
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java


 QTestUtil side-effects
 --

 Key: HIVE-3518
 URL: https://issues.apache.org/jira/browse/HIVE-3518
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure, Tests
Reporter: Ivan Gorbachev
Assignee: Navis
 Fix For: 0.10.0

 Attachments: HIVE-3518.D5865.1.patch, HIVE-3518.D5865.2.patch, 
 metadata_export_drop.q


 It seems that QTestUtil has side-effects. This test 
 ([^metadata_export_drop.q]) causes failures in other tests during the cleanup stage:
 {quote}
 Exception: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
 Relative path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
 path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
 at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:845)
 at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:821)
 at org.apache.hadoop.hive.ql.QTestUtil.cleanUp(QTestUtil.java:445)
 at org.apache.hadoop.hive.ql.QTestUtil.shutdown(QTestUtil.java:300)
 at org.apache.hadoop.hive.cli.TestCliDriver.tearDown(TestCliDriver.java:87)
 at junit.framework.TestCase.runBare(TestCase.java:140)
 at junit.framework.TestResult$1.protect(TestResult.java:110)
 at junit.framework.TestResult.runProtected(TestResult.java:128)
 at junit.framework.TestResult.run(TestResult.java:113)
 at junit.framework.TestCase.run(TestCase.java:124)
 at junit.framework.TestSuite.runTest(TestSuite.java:232)
 at junit.framework.TestSuite.run(TestSuite.java:227)
 at 
 org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
 at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
 at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
 Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
 Relative path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
 at org.apache.hadoop.fs.Path.initialize(Path.java:140)
 at org.apache.hadoop.fs.Path.<init>(Path.java:132)
 at 
 org.apache.hadoop.fs.ProxyFileSystem.swizzleParamPath(ProxyFileSystem.java:56)
 at org.apache.hadoop.fs.ProxyFileSystem.mkdirs(ProxyFileSystem.java:214)
 at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
 at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1120)
 at 
 org.apache.hadoop.hive.ql.parse.MetaDataExportListener.export_meta_data(MetaDataExportListener.java:81)
 at 
 org.apache.hadoop.hive.ql.parse.MetaDataExportListener.onEvent(MetaDataExportListener.java:106)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1024)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table(HiveMetaStore.java:1185)
 at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:566)
 at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:839)
 ... 17 more
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
 at java.net.URI.checkPath(URI.java:1787)
 at java.net.URI.<init>(URI.java:735)
 at org.apache.hadoop.fs.Path.initialize(Path.java:137)
 ... 28 more
 {quote}
 Resetting 'hive.metastore.pre.event.listeners' to an empty string solves the 
 issue. During debugging I figured out this property wasn't cleaned for other 
 tests after it was set in metadata_export_drop.q.
 How to reproduce:
 {code} ant test -Dtestcase=TestCliDriver -Dqfile=metadata_export_drop.q,some 
 test.q{code}
 where some test.q means any test that contains a CREATE statement. For 
 example, sample10.q

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HIVE-3323) enum to string conversions

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547996#comment-13547996
 ] 

Hudson commented on HIVE-3323:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3323 : enum to string conversions (Travis Crawford via Ashutosh 
Chauhan) (Revision 1382211)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1382211
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/convert_enum_to_string.q
* /hive/trunk/ql/src/test/results/clientpositive/convert_enum_to_string.q.out
* /hive/trunk/serde/if/test/megastruct.thrift
* 
/hive/trunk/serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MegaStruct.java
* 
/hive/trunk/serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MiniStruct.java
* 
/hive/trunk/serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde2/thrift/test/MyEnum.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorFactory.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaStringObjectInspector.java


 enum to string conversions
 --

 Key: HIVE-3323
 URL: https://issues.apache.org/jira/browse/HIVE-3323
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.10.0
Reporter: Travis Crawford
Assignee: Travis Crawford
 Fix For: 0.10.0

 Attachments: HIVE-3323_enum_to_string.1.patch, 
 HIVE-3323_enum_to_string.2.patch, HIVE-3323_enum_to_string.3.patch, 
 HIVE-3323_enum_to_string.4.patch, HIVE-3323_enum_to_string.5.patch, 
 HIVE-3323_enum_to_string.6.patch, HIVE-3323_enum_to_string.8.patch


 When using serde-reported schemas with the ThriftDeserializer, Enum fields 
 are presented as {{struct<value:int>}}.
 Many users expect to work with the string values, which is both easier and 
 more meaningful as the string value communicates what is represented.
 Hive should provide a mechanism to optionally convert enum values to strings.
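
 A plain-Java sketch of the conversion idea, using an ordinary enum as a 
 stand-in for a Thrift-generated one (MyEnum and the helper are illustrative, 
 not Hive's actual ObjectInspector code):

```java
public class EnumToStringDemo {
    // A plain Java enum standing in for a Thrift-generated one.
    enum MyEnum { VAL1, VAL2 }

    // Present the enum field as its string name, the representation
    // most users expect instead of struct<value:int>.
    static String asString(MyEnum e) {
        return e == null ? null : e.name();
    }

    public static void main(String[] args) {
        System.out.println(asString(MyEnum.VAL1)); // VAL1
        System.out.println(asString(null));        // null
    }
}
```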

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3515) metadata_export_drop.q causes failure of other tests

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547997#comment-13547997
 ] 

Hudson commented on HIVE-3515:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3515 metadata_export_drop.q causes failure of other tests
(Ivan Gorbachev via namit) (Revision 1391848)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391848
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/metadata_export_drop.q


 metadata_export_drop.q causes failure of other tests
 

 Key: HIVE-3515
 URL: https://issues.apache.org/jira/browse/HIVE-3515
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Ivan Gorbachev
Assignee: Ivan Gorbachev
 Fix For: 0.10.0

 Attachments: jira-3515.1.patch


 metadata_export_drop.q causes failures in other tests during the cleanup stage.
 {quote}
 Exception: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
 Relative path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
 path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
   at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:845)
   at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:821)
   at org.apache.hadoop.hive.ql.QTestUtil.cleanUp(QTestUtil.java:445)
   at org.apache.hadoop.hive.ql.QTestUtil.shutdown(QTestUtil.java:300)
   at 
 org.apache.hadoop.hive.cli.TestCliDriver.tearDown(TestCliDriver.java:87)
   at junit.framework.TestCase.runBare(TestCase.java:140)
   at junit.framework.TestResult$1.protect(TestResult.java:110)
   at junit.framework.TestResult.runProtected(TestResult.java:128)
   at junit.framework.TestResult.run(TestResult.java:113)
   at junit.framework.TestCase.run(TestCase.java:124)
   at junit.framework.TestSuite.runTest(TestSuite.java:232)
   at junit.framework.TestSuite.run(TestSuite.java:227)
   at 
 org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
 Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: 
 Relative path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
   at org.apache.hadoop.fs.Path.initialize(Path.java:140)
   at org.apache.hadoop.fs.Path.<init>(Path.java:132)
   at 
 org.apache.hadoop.fs.ProxyFileSystem.swizzleParamPath(ProxyFileSystem.java:56)
   at org.apache.hadoop.fs.ProxyFileSystem.mkdirs(ProxyFileSystem.java:214)
   at 
 org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1120)
   at 
 org.apache.hadoop.hive.ql.parse.MetaDataExportListener.export_meta_data(MetaDataExportListener.java:81)
   at 
 org.apache.hadoop.hive.ql.parse.MetaDataExportListener.onEvent(MetaDataExportListener.java:106)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1024)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table(HiveMetaStore.java:1185)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:566)
   at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:839)
   ... 17 more
 Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
 file:../build/ql/test/data/exports/HIVE-3427/src.2012-09-28-11-38-17
   at java.net.URI.checkPath(URI.java:1787)
   at java.net.URI.<init>(URI.java:735)
   at org.apache.hadoop.fs.Path.initialize(Path.java:137)
   ... 28 more
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3320) Handle “CRLF” line endings to avoid the extra spacing in generated test outputs in Windows. (Utilities.Java :: readColumn)

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547998#comment-13547998
 ] 

Hudson commented on HIVE-3320:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3320 : Handle CRLF line endings to avoid the extra spacing in 
generated test outputs in Windows. (Utilities.Java :: readColumn) (Kanna 
Karanam via Ashutosh Chauhan) (Revision 1368624)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1368624
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java


 Handle “CRLF” line endings to avoid the extra spacing in generated test 
 outputs in Windows. (Utilities.Java :: readColumn)
 --

 Key: HIVE-3320
 URL: https://issues.apache.org/jira/browse/HIVE-3320
 Project: Hive
  Issue Type: Sub-task
  Components: CLI, Tests, Windows
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: bucket1.q.out.patch.txt, HIVE-3320.1.patch.txt


 The existing functionality in Hive looks for the “LF” character to extract 
 lines from generated output before writing to the streaming object. On 
 Windows, however, line endings are “CRLF”, so a stray “CR” character is left 
 before each line is written to the streaming object. This CR introduces extra 
 empty lines in the generated test output and causes several test cases to 
 fail on Windows.
 Attached the generated output for one of the unit test failures with extra 
 space. (bucket1.q)
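
 The handling described above boils down to stripping a trailing CR before 
 each extracted line is emitted; a minimal sketch (the helper name is 
 hypothetical, not the actual Utilities.readColumn code):

```java
public class CrLfDemo {
    // Drop a trailing carriage return so Windows "\r\n" endings do not
    // leave a stray '\r' (which shows up as extra empty lines) in the
    // generated test output.
    static String stripTrailingCr(String line) {
        if (line.endsWith("\r")) {
            return line.substring(0, line.length() - 1);
        }
        return line;
    }

    public static void main(String[] args) {
        System.out.println(stripTrailingCr("value\r")); // value
        System.out.println(stripTrailingCr("value"));   // value (unchanged)
    }
}
```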

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3514) Refactor Partition Pruner so that logic can be reused.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547999#comment-13547999
 ] 

Hudson commented on HIVE-3514:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3514 Refactor Partition Pruner so that logic can be reused.
(Gang Tim Liu via namit) (Revision 1394358)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1394358
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/PrunerExpressionOperatorFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/PrunerOperatorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/PrunerUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/ExprProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/OpProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java


 Refactor Partition Pruner so that logic can be reused.
 --

 Key: HIVE-3514
 URL: https://issues.apache.org/jira/browse/HIVE-3514
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Gang Tim Liu
Assignee: Gang Tim Liu
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3514.patch, HIVE-3514.patch.2, HIVE-3514.patch.3, 
 HIVE-3514.patch.4, HIVE-3514.patch.5


 Partition Pruner has logic that can be reused, such as:
 1. walking the operator tree
 2. walking the operation tree
 3. creating the pruning predicate
 The first candidate is the list bucketing pruner.
 Some considerations:
 1. refactor for the general use case, not just list bucketing
 2. avoid over-refactoring by focusing on the pieces targeted for reuse

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3512) Log client IP address with command in metastore's startFunction method

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548000#comment-13548000
 ] 

Hudson commented on HIVE-3512:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3512 Log client IP address with command in metastore's startFunction 
method
(Kevin Wilfong via namit) (Revision 1391397)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1391397
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java


 Log client IP address with command in metastore's startFunction method
 --

 Key: HIVE-3512
 URL: https://issues.apache.org/jira/browse/HIVE-3512
 Project: Hive
  Issue Type: Improvement
  Components: Logging, Metastore
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3512.1.patch.txt


 We have the client IP address for metastore commands available in the 
 HMSHandler.  Determining the source of commands (reads in particular) would 
 be much easier if the IP address were logged with the command.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3533) ZooKeeperHiveLockManager does not respect the option to keep locks alive even after the current session has closed

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548002#comment-13548002
 ] 

Hudson commented on HIVE-3533:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3533 ZooKeeperHiveLockManager does not respect the option to keep 
locks alive even after
the current session has closed (Matt Martin via namit) (Revision 1395026)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1395026
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java


 ZooKeeperHiveLockManager does not respect the option to keep locks alive even 
 after the current session has closed
 --

 Key: HIVE-3533
 URL: https://issues.apache.org/jira/browse/HIVE-3533
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.9.0
Reporter: Matt Martin
Assignee: Matt Martin
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3533.1.patch.txt


 The HiveLockManager interface defines the following method:
 public List<HiveLock> lock(List<HiveLockObj> objs,
   boolean keepAlive) throws LockException;
 ZooKeeperHiveLockManager implements HiveLockManager, but the current 
 implementation of the lock method never actually references the keepAlive 
 parameter.  As a result, all of the locks acquired by the lock method are 
 ephemeral.  In other words, ZooKeeper-based locks only exist as long as the 
 underlying ZooKeeper session exists.  As soon as the ZooKeeper session ends, 
 any ZooKeeper-based locks are automatically released.
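
 The keepAlive contract can be illustrated with a toy in-memory model: 
 closing the session drops ephemeral locks but preserves keepAlive ones. This 
 sketch only models the intended semantics; the real fix would presumably 
 choose persistent versus ephemeral ZooKeeper nodes based on the flag.

```java
import java.util.*;

public class KeepAliveLockDemo {
    // Toy session: locks marked keepAlive survive session close, the
    // rest are released with the session (the ephemeral behavior).
    static class Session {
        final Map<String, Boolean> locks = new HashMap<>(); // name -> keepAlive
        final Set<String> persistent;
        Session(Set<String> persistent) { this.persistent = persistent; }

        void lock(String name, boolean keepAlive) {
            locks.put(name, keepAlive);
        }

        void close() {
            for (Map.Entry<String, Boolean> e : locks.entrySet()) {
                if (e.getValue()) persistent.add(e.getKey()); // keepAlive survives
            }
            locks.clear(); // ephemeral locks vanish with the session
        }
    }

    public static void main(String[] args) {
        Set<String> persistent = new HashSet<>();
        Session s = new Session(persistent);
        s.lock("tbl_a", true);   // keepAlive
        s.lock("tbl_b", false);  // ephemeral
        s.close();
        System.out.println(persistent); // only tbl_a remains
    }
}
```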

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3531) Simple lock manager for dedicated hive server

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548001#comment-13548001
 ] 

Hudson commented on HIVE-3531:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3531 [jira] Simple lock manager for dedicated hive server
(Navis Ryu via Carl Steinbach)

Summary:
DPAL-1906 Implement simple lock manager for hive server

In many cases, we use the Hive server as the sole proxy for executing all 
queries. For that, the current default lock manager based on ZooKeeper seems a 
little heavy; a simple in-memory lock manager could be enough.

Test Plan: TestDedicatedLockManager

Reviewers: JIRA, cwsteinbach

Reviewed By: cwsteinbach

Differential Revision: https://reviews.facebook.net/D5871 (Revision 1414590)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1414590
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/EmbeddedLockManager.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveLockObject.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/lockmgr
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestEmbeddedLockManager.java


 Simple lock manager for dedicated hive server
 -

 Key: HIVE-3531
 URL: https://issues.apache.org/jira/browse/HIVE-3531
 Project: Hive
  Issue Type: Improvement
  Components: Locking, Server Infrastructure
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.11.0

 Attachments: HIVE-3531.D5871.1.patch, HIVE-3531.D5871.2.patch, 
 HIVE-3531.D5871.3.patch


 In many cases, we use the Hive server as the sole proxy for executing all
 queries. For that purpose, the current default ZooKeeper-based lock manager
 seems a little heavy; a simple in-memory lock manager could be enough.
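The kind of in-memory lock manager described can be sketched as follows (a hypothetical simplification in Python, not the actual EmbeddedLockManager code; class and method names are illustrative):

```python
import threading
from collections import defaultdict

class InMemoryLockManager:
    """Toy single-process lock manager with shared/exclusive modes."""

    def __init__(self):
        self._guard = threading.Lock()
        self._holders = defaultdict(set)   # resource -> {(lock_id, mode)}

    def lock(self, resource, lock_id, shared=True):
        with self._guard:
            holders = self._holders[resource]
            # An exclusive request fails if anyone holds the resource;
            # a shared request fails only if an exclusive holder exists.
            if not shared and holders:
                return False
            if shared and any(mode == "exclusive" for _, mode in holders):
                return False
            holders.add((lock_id, "shared" if shared else "exclusive"))
            return True

    def unlock(self, resource, lock_id):
        with self._guard:
            self._holders[resource] = {
                h for h in self._holders[resource] if h[0] != lock_id
            }

m = InMemoryLockManager()
print(m.lock("db.tbl", "q1", shared=True))    # True
print(m.lock("db.tbl", "q2", shared=True))    # True: shared locks coexist
print(m.lock("db.tbl", "q3", shared=False))   # False: readers hold it
```

Because all state lives in one process, this only works when a single dedicated server issues every lock, which is exactly the deployment the issue assumes.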

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3327) Remove the Unix specific absolute path of “Cat” utility in several .q files to make them run on Windows with CygWin in path.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548003#comment-13548003
 ] 

Hudson commented on HIVE-3327:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3327 : Remove the Unix specific absolute path of "Cat" utility in several 
.q files to make them run on Windows with CygWin in path. (Kanna Karanam via 
Ashutosh Chauhan) (Revision 1369895)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1369895
Files : 
* /hive/trunk/contrib/src/test/queries/clientpositive/serde_typedbytes.q
* /hive/trunk/contrib/src/test/queries/clientpositive/serde_typedbytes2.q
* /hive/trunk/contrib/src/test/queries/clientpositive/serde_typedbytes3.q
* /hive/trunk/contrib/src/test/queries/clientpositive/serde_typedbytes4.q
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes2.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes3.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes4.q.out
* /hive/trunk/ql/src/test/queries/clientnegative/clusterbydistributeby.q
* /hive/trunk/ql/src/test/queries/clientnegative/clusterbyorderby.q
* /hive/trunk/ql/src/test/queries/clientnegative/clusterbysortby.q
* /hive/trunk/ql/src/test/queries/clientnegative/orderbysortby.q
* /hive/trunk/ql/src/test/queries/clientpositive/input14.q
* /hive/trunk/ql/src/test/queries/clientpositive/input14_limit.q
* /hive/trunk/ql/src/test/queries/clientpositive/input17.q
* /hive/trunk/ql/src/test/queries/clientpositive/input18.q
* /hive/trunk/ql/src/test/queries/clientpositive/input34.q
* /hive/trunk/ql/src/test/queries/clientpositive/input35.q
* /hive/trunk/ql/src/test/queries/clientpositive/input36.q
* /hive/trunk/ql/src/test/queries/clientpositive/input38.q
* /hive/trunk/ql/src/test/queries/clientpositive/input5.q
* /hive/trunk/ql/src/test/queries/clientpositive/mapreduce1.q
* /hive/trunk/ql/src/test/queries/clientpositive/mapreduce2.q
* /hive/trunk/ql/src/test/queries/clientpositive/mapreduce3.q
* /hive/trunk/ql/src/test/queries/clientpositive/mapreduce4.q
* /hive/trunk/ql/src/test/queries/clientpositive/mapreduce7.q
* /hive/trunk/ql/src/test/queries/clientpositive/mapreduce8.q
* /hive/trunk/ql/src/test/queries/clientpositive/newline.q
* /hive/trunk/ql/src/test/queries/clientpositive/nullscript.q
* /hive/trunk/ql/src/test/queries/clientpositive/partcols1.q
* /hive/trunk/ql/src/test/queries/clientpositive/ppd_transform.q
* /hive/trunk/ql/src/test/queries/clientpositive/query_with_semi.q
* /hive/trunk/ql/src/test/queries/clientpositive/regexp_extract.q
* /hive/trunk/ql/src/test/queries/clientpositive/select_transform_hint.q
* /hive/trunk/ql/src/test/queries/clientpositive/transform_ppr1.q
* /hive/trunk/ql/src/test/queries/clientpositive/transform_ppr2.q
* /hive/trunk/ql/src/test/results/clientpositive/input14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input14_limit.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input17.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input18.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input34.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input35.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input36.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input38.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapreduce1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapreduce2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapreduce3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapreduce4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapreduce7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapreduce8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/newline.q.out
* /hive/trunk/ql/src/test/results/clientpositive/nullscript.q.out
* /hive/trunk/ql/src/test/results/clientpositive/partcols1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ppd_transform.q.out
* /hive/trunk/ql/src/test/results/clientpositive/query_with_semi.q.out
* /hive/trunk/ql/src/test/results/clientpositive/regexp_extract.q.out
* /hive/trunk/ql/src/test/results/clientpositive/select_transform_hint.q.out
* /hive/trunk/ql/src/test/results/clientpositive/transform_ppr1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/transform_ppr2.q.out


 Remove the Unix specific absolute path of “Cat” utility in several .q files 
 to make them run on Windows with CygWin in path.
 

 Key: HIVE-3327
 URL: https://issues.apache.org/jira/browse/HIVE-3327
 

[jira] [Commented] (HIVE-3529) Incorrect partition bucket/sort metadata when overwriting partition with different metadata from table

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548004#comment-13548004
 ] 

Hudson commented on HIVE-3529:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3529 Incorrect partition bucket/sort metadata when overwriting 
partition with different metadata from table
(Kevin Wilfong via namit) (Revision 1403363)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403363
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/alter_numbuckets_partitioned_table2.q
* /hive/trunk/ql/src/test/queries/clientpositive/alter_table_serde2.q
* 
/hive/trunk/ql/src/test/results/clientpositive/alter_numbuckets_partitioned_table.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/alter_numbuckets_partitioned_table2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_table_serde2.q.out


 Incorrect partition bucket/sort metadata when overwriting partition with 
 different metadata from table
 --

 Key: HIVE-3529
 URL: https://issues.apache.org/jira/browse/HIVE-3529
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.10.0

 Attachments: HIVE-3529.1.patch.txt


 If a partition has bucket/sort metadata set, you then alter the table to have 
 different bucket/sort metadata, and you insert overwrite the partition with 
 hive.enforce.bucketing=true and/or hive.enforce.sorting=true, the partition 
 data will be bucketed/sorted according to the table's metadata, but the 
 partition will retain its old metadata.
 This mismatch can produce wrong query results.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3523) Hive info logging is broken

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548006#comment-13548006
 ] 

Hudson commented on HIVE-3523:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3523. Hive info logging is broken (Carl Steinbach via cws) (Revision 
1399929)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399929
Files : 
* /hive/trunk/common/src/java/conf/hive-log4j.properties
* /hive/trunk/ql/src/java/conf/hive-exec-log4j.properties


 Hive info logging is broken
 ---

 Key: HIVE-3523
 URL: https://issues.apache.org/jira/browse/HIVE-3523
 Project: Hive
  Issue Type: Bug
  Components: Logging
Affects Versions: 0.10.0
Reporter: Shreepadma Venugopalan
Assignee: Carl Steinbach
 Fix For: 0.10.0

 Attachments: HIVE-3523.1.patch.txt, HIVE-3523.D5811.1.patch


 Hive Info logging is broken on trunk. hive -hiveconf 
 hive.root.logger=INFO,console doesn't print the output of LOG.info statements 
 to the console. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3525) Avro Maps with Nullable Values fail with NPE

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548007#comment-13548007
 ] 

Hudson commented on HIVE-3525:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3525. Avro Maps with Nullable Values fail with NPE (Sean Busbey via 
cws) (Revision 1399935)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1399935
Files : 
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerializer.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroDeserializer.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroObjectInspectorGenerator.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/avro/TestAvroSerializer.java


 Avro Maps with Nullable Values fail with NPE
 

 Key: HIVE-3525
 URL: https://issues.apache.org/jira/browse/HIVE-3525
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 0.10.0

 Attachments: HIVE-3525.1.patch.txt, HIVE-3525.2.patch.txt


 When working against current trunk@1393794, using a backing Avro schema that 
 has a Map field with nullable values causes a NPE on deserialization when the 
 map contains a null value.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3400) Add Retries to Hive MetaStore Connections

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548008#comment-13548008
 ] 

Hudson commented on HIVE-3400:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3400 : Add Retries to Hive MetaStore Connections (Bhushan Mandhani via 
Ashutosh Chauhan) (Revision 1418190)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418190
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingRawStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMarkPartitionRemote.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreAuthorization.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEndFunctionListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEventListener.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java


 Add Retries to Hive MetaStore Connections
 -

 Key: HIVE-3400
 URL: https://issues.apache.org/jira/browse/HIVE-3400
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Bhushan Mandhani
Assignee: Bhushan Mandhani
Priority: Minor
  Labels: metastore
 Fix For: 0.10.0

 Attachments: HIVE-3400.1.patch.txt, HIVE-3400.2.patch.txt, 
 HIVE-3400.3.patch.txt


 Currently, when using Thrift to access the MetaStore, if the Thrift host dies 
 there is no mechanism to reconnect to another host, even if the MetaStore 
 URIs variable in the conf contains multiple hosts. Hive should retry and 
 reconnect rather than throwing a communication link error.
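The retry-with-failover behavior described can be sketched as follows (a Python illustration under assumed names; Hive's actual logic lives in RetryingMetaStoreClient, and `open_conn`/`fake_open` here are hypothetical):

```python
def connect_with_retries(uris, open_conn, retries=3):
    """Try each URI in order, cycling through the list up to `retries`
    rounds, and return the first connection that succeeds."""
    last_err = None
    for _ in range(retries):
        for uri in uris:
            try:
                return open_conn(uri)
            except ConnectionError as e:
                last_err = e         # remember the failure, try the next URI
    raise last_err                   # every host failed in every round

# Demo: the first host is down, the second succeeds.
def fake_open(uri):
    if uri == "thrift://host1:9083":
        raise ConnectionError("host1 down")
    return f"connected:{uri}"

print(connect_with_retries(
    ["thrift://host1:9083", "thrift://host2:9083"], fake_open))
# connected:thrift://host2:9083
```

The key point is that a single host failure is absorbed by moving on to the next URI instead of surfacing as a communication link error.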

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3401) Diversify grammar for split sampling

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548009#comment-13548009
 ] 

Hudson commented on HIVE-3401:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3787 Regression introduced from HIVE-3401
(Navis via namit) (Revision 1423289)
HIVE-3401 Diversify grammar for split sampling
(Navis via namit) (Revision 1419365)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1423289
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientnegative/split_sample_wrong_format2.q
* /hive/trunk/ql/src/test/queries/clientpositive/split_sample.q
* /hive/trunk/ql/src/test/results/clientnegative/split_sample_out_of_range.q.out
* /hive/trunk/ql/src/test/results/clientnegative/split_sample_wrong_format.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/split_sample_wrong_format2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/split_sample.q.out

namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1419365
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapRedTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SplitSample.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/split_sample.q
* /hive/trunk/ql/src/test/results/clientnegative/split_sample_out_of_range.q.out
* /hive/trunk/ql/src/test/results/clientnegative/split_sample_wrong_format.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/nonmr_fetch.q.out
* /hive/trunk/ql/src/test/results/clientpositive/plan_json.q.out
* /hive/trunk/ql/src/test/results/clientpositive/split_sample.q.out


 Diversify grammar for split sampling
 

 Key: HIVE-3401
 URL: https://issues.apache.org/jira/browse/HIVE-3401
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.11.0

 Attachments: HIVE-3401.D4821.2.patch, HIVE-3401.D4821.3.patch, 
 HIVE-3401.D4821.4.patch, HIVE-3401.D4821.5.patch, HIVE-3401.D4821.6.patch, 
 HIVE-3401.D4821.7.patch


 Current split sampling only supports grammar like TABLESAMPLE(n PERCENT). But 
 some users want to specify just the size of the input. It can be calculated 
 with a few commands, but it seems better to support additional grammar such 
 as TABLESAMPLE(500M). 
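Parsing such a size specification is straightforward; the sketch below shows one way it could work (the suffix set is an assumption for illustration — the real grammar lives in Hive.g and SplitSample.java):

```python
def parse_sample_size(spec):
    """Parse a size spec like '500M' into a byte count.

    Accepts a bare integer (bytes) or an integer with a
    K / M / G suffix, interpreted as binary multiples.
    """
    units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30}
    spec = spec.strip().upper()
    if spec and spec[-1] in units:
        return int(spec[:-1]) * units[spec[-1]]
    return int(spec)   # plain byte count, no suffix

print(parse_sample_size("500M"))   # 524288000
```

Once the requested byte count is known, the planner can stop adding input splits when their cumulative size reaches it, mirroring how a percentage-based sample bounds the input.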

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3302) Race condition in query plan for merging at the end of a query

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548010#comment-13548010
 ] 

Hudson commented on HIVE-3302:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3302 Race condition in query plan for merging at the end of a query
(Kevin Wilfong via namit) (Revision 1369375)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1369375
Files : 
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes2.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes3.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes5.q.out
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ConditionalResolverMergeFiles.java
* /hive/trunk/ql/src/test/queries/clientpositive/merge_dynamic_partition5.q
* /hive/trunk/ql/src/test/results/clientpositive/binary_output_format.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin_negative.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin_negative2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/case_sensitivity.q.out
* /hive/trunk/ql/src/test/results/clientpositive/cast1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_file_format.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/index_auto_mult_tables_compact.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_multiple.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_partitioned.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_update.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_compression.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input34.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input35.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input36.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input38.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_dynamicserde.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_testsequencefile.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_testxpath.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_testxpath2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/insert1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/insert_into4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/insert_into5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/insert_into6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join25.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join26.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join27.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join28.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join29.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join32.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join34.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join35.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join36.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join37.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join39.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join_map_ppr.q.out
* /hive/trunk/ql/src/test/results/clientpositive/lineage1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/merge1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/merge2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/merge3.q.out
* 

[jira] [Commented] (HIVE-3301) Fix quote printing bug in mapreduce_stack_trace.q testcase failure when running hive on hadoop23

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548011#comment-13548011
 ] 

Hudson commented on HIVE-3301:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3301 : Fix quote printing bug in mapreduce_stack_trace.q testcase 
failure when running hive on hadoop23 (Zhenxiao Luo via Ashutosh Chauhan) 
(Revision 1366233)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1366233
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/errors/TaskLogProcessor.java
* 
/hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* 
/hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


 Fix quote printing bug in mapreduce_stack_trace.q testcase failure when 
 running hive on hadoop23
 

 Key: HIVE-3301
 URL: https://issues.apache.org/jira/browse/HIVE-3301
 Project: Hive
  Issue Type: Bug
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-3301.1.patch.txt, HIVE-3301.2.patch.txt, 
 HIVE-3301.3.patch.txt


 When running Hive on Hadoop 0.23, mapreduce_stack_trace.q fails due to a 
 quote-printing bug: the double-quote character is printed as the HTML entity 
 '&quot;' (ampersand + 'quot' + semicolon) instead of the expected quote sign.
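The entity-vs-character mismatch can be reproduced with Python's stdlib html module (an illustration of the symptom only, not the Java fix in TaskLogProcessor):

```python
import html

# The symptom: a double quote reaches the output as its HTML entity.
escaped = html.escape('"attempt failed"')
print(escaped)                 # &quot;attempt failed&quot;

# The fix direction: unescape entities before printing the stack trace.
print(html.unescape(escaped))  # "attempt failed"
```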

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3300) LOAD DATA INPATH fails if a hdfs file with same name is added to table

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548012#comment-13548012
 ] 

Hudson commented on HIVE-3300:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3300 LOAD DATA INPATH fails if a hdfs file with same name is added to 
table
(Navis via namit) (Revision 1429686)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1429686
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/test/queries/clientpositive/load_fs2.q
* /hive/trunk/ql/src/test/results/clientpositive/load_fs2.q.out


 LOAD DATA INPATH fails if a hdfs file with same name is added to table
 --

 Key: HIVE-3300
 URL: https://issues.apache.org/jira/browse/HIVE-3300
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Affects Versions: 0.10.0
 Environment: ubuntu linux, hadoop 1.0.3, hive 0.9
Reporter: Bejoy KS
Assignee: Navis
 Fix For: 0.11.0

 Attachments: HIVE-3300.1.patch.txt, HIVE-3300.D4383.3.patch, 
 HIVE-3300.D4383.4.patch


 If we load data from the local fs into a Hive table using 'LOAD DATA LOCAL 
 INPATH' and a file with the same name already exists in the table's location, 
 the new file is suffixed with *_copy_1.
 But if we do 'LOAD DATA INPATH' for a file already in hdfs, no such rename 
 happens; only a move task is triggered. Since a file with the same name 
 exists in the same hdfs location, the hadoop fs move operation throws an 
 error.
 hive&gt; LOAD DATA INPATH '/userdata/bejoy/site.txt' INTO TABLE test.site;
 Loading data to table test.site
 Failed with exception null
 FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.MoveTask
 hive&gt;
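The _copy_N naming rule applied on the LOCAL path can be sketched as follows (a hypothetical reconstruction of the behavior for illustration, not Hive's actual code):

```python
import os

def copy_suffixed_name(dest_dir_files, filename):
    """Return a non-conflicting name for `filename` in the destination:
    unchanged if free, otherwise base_copy_1, base_copy_2, ... before
    the extension, picking the first suffix not already taken."""
    if filename not in dest_dir_files:
        return filename
    base, ext = os.path.splitext(filename)
    n = 1
    while f"{base}_copy_{n}{ext}" in dest_dir_files:
        n += 1
    return f"{base}_copy_{n}{ext}"

existing = {"site.txt", "site_copy_1.txt"}
print(copy_suffixed_name(existing, "site.txt"))   # site_copy_2.txt
```

Applying the same rule on the HDFS path, instead of a bare move, would avoid the MoveTask failure shown above.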

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3202) Add hive command for resetting hive confs

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548013#comment-13548013
 ] 

Hudson commented on HIVE-3202:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3202 Add reset command for resetting configuration. Navis Ryu (via 
egc) (Revision 1362329)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1362329
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/CommandProcessorFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/processors/ResetProcessor.java
* /hive/trunk/ql/src/test/queries/clientpositive/reset_conf.q
* /hive/trunk/ql/src/test/results/clientpositive/reset_conf.q.out


 Add hive command for resetting hive confs
 -

 Key: HIVE-3202
 URL: https://issues.apache.org/jira/browse/HIVE-3202
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.10.0

 Attachments: HIVE-3202.patch.txt


 For optimization purposes we set various configs per query. That is 
 worthwhile, but all those configs must be reset before the next query; 
 a simple reset command would make this less painful.
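The reset behavior described amounts to dropping every per-session override in one step, which can be sketched as (a toy Python model, not Hive's ResetProcessor; the class and config key are illustrative):

```python
class SessionConf:
    """Toy session configuration: per-query overrides layered on defaults."""

    def __init__(self, defaults):
        self._defaults = dict(defaults)
        self._overrides = {}

    def set(self, key, value):
        self._overrides[key] = value

    def get(self, key):
        # Overrides shadow defaults until a reset.
        return self._overrides.get(key, self._defaults[key])

    def reset(self):
        # Drop every per-query override in one step.
        self._overrides.clear()

conf = SessionConf({"hive.exec.parallel": "false"})
conf.set("hive.exec.parallel", "true")
conf.reset()
print(conf.get("hive.exec.parallel"))   # false
```

Keeping overrides separate from defaults is what makes a one-command reset possible, instead of having to remember and undo each `set` individually.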

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3410) All operators's conf should inherit from a common class

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548014#comment-13548014
 ] 

Hudson commented on HIVE-3410:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3410 All operators's conf should inherit from a common class
(Namit via Carl) (Revision 1378659)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378659
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/QueryPlan.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecMapper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapRedTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapredLocalTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SkewJoinHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Task.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TerminalOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveFileFormatUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/lib/DefaultGraphWalker.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketMapJoinOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMROperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRProcContext.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRRedSink1.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRRedSink2.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRRedSink3.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRRedSink4.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRUnion1.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GlobalLimitOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GroupByOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/JoinReorder.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkDeDuplication.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedMergeBucketMapJoinOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteGBUsingIndex.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndex.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndexCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/ExprProcCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/ExprProcFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/LineageCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/lineage/OpProcFactory.java
* 

[jira] [Commented] (HIVE-3411) Filter predicates on outer join overlapped on single alias is not handled properly

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548015#comment-13548015
 ] 

Hudson commented on HIVE-3411:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3411 Filter predicates on outer join overlapped on single alias is not 
handled properly
(Navis via namit) (Revision 1390010)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390010
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/AbstractMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SkewJoinHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/HashTableSinkDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/JoinDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/join_filters_overlap.q
* /hive/trunk/ql/src/test/results/clientpositive/auto_join29.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join_filters_overlap.q.out
* /hive/trunk/ql/src/test/results/clientpositive/louter_join_ppr.q.out
* /hive/trunk/ql/src/test/results/clientpositive/outer_join_ppr.q.out
* /hive/trunk/ql/src/test/results/clientpositive/router_join_ppr.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union22.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/join1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml


 Filter predicates on outer join overlapped on single alias is not handled 
 properly
 --

 Key: HIVE-3411
 URL: https://issues.apache.org/jira/browse/HIVE-3411
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
 Environment: ubuntu 10.10
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3411.1.patch.txt, HIVE-3411.2.patch.txt, 
 HIVE-3411.D5169.5.patch, HIVE-3411.D5169.6.patch


 Currently, filter predicates on an outer join are evaluated in the join operator (or 
 HashSink for MapJoin) and the result is tagged onto the end of each value (as 
 a boolean), which is used when joining values. But when predicates overlap 
 on a single alias, all of them are evaluated as one AND conjunction, which 
 produces an invalid result. 
 For example, with table a containing the values
 {noformat}
 100 40
 100 50
 100 60
 {noformat}
 The query below has overlapped predicates on alias b, so all the 
 values of b are tagged with true (filtered):
 {noformat}
 select * from a right outer join a b on (a.key=b.key AND a.value=50 AND b.value=50)
   left outer join a c on (b.key=c.key AND b.value=60 AND c.value=60);
 NULL  NULL  100  40  NULL  NULL
 NULL  NULL  100  50  NULL  NULL
 NULL  NULL  100  60  NULL  NULL
 -- Join predicate
 Join Operator
   condition map:
Right Outer Join0 to 1
Left Outer Join1 to 2
   condition expressions:
 0 {VALUE._col0} {VALUE._col1}
 1 {VALUE._col0} {VALUE._col1}
 2 {VALUE._col0} {VALUE._col1}
   filter predicates:
 0 
 1 {(VALUE._col1 = 50)} {(VALUE._col1 = 60)}
 2 
 {noformat}
 but this should be 
 {noformat}
 NULL  NULL  100  40  NULL  NULL
 100   50    100  50  NULL  NULL
 NULL  NULL  100  60  100   60
 {noformat}
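The per-alias tagging described above can be illustrated with a small sketch. This is plain Python, not Hive's actual Java code; the predicate values follow the example query (b.value=50 for the first join, b.value=60 for the second), and all names are illustrative:

```python
# Sketch: each row of alias b should carry one boolean "filtered" tag per
# join it feeds, instead of a single tag that ANDs every predicate together.

def tag_row(value, predicates):
    """Evaluate each join's filter predicate independently."""
    return [pred(value) for pred in predicates]

# Predicates on alias b from the example query: b.value=50 and b.value=60.
preds = [lambda v: v == 50, lambda v: v == 60]

# Buggy behavior: one tag per row, computed as the AND of all predicates.
# No value of b satisfies both, so every row is marked filtered everywhere:
buggy = [all(p(v) for p in preds) for v in (40, 50, 60)]

# Correct behavior: one tag per join, so value 50 survives the first join
# and value 60 survives the second:
correct = [tag_row(v, preds) for v in (40, 50, 60)]

print(buggy)    # [False, False, False]
print(correct)  # [[False, False], [True, False], [False, True]]
```

The per-join tags reproduce the "should be" result rows above: the b row with value 50 joins a, and the b row with value 60 joins c.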

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3412) Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548016#comment-13548016
 ] 

Hudson commented on HIVE-3412:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3412. Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 
2.2.0-alpha (Zhenxiao Luo via cws) (Revision 1380479)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380479
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/repair.q
* /hive/trunk/ql/src/test/queries/clientpositive/repair_hadoop23.q
* /hive/trunk/ql/src/test/results/clientpositive/repair.q.out
* /hive/trunk/ql/src/test/results/clientpositive/repair_hadoop23.q.out


 Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha
 -

 Key: HIVE-3412
 URL: https://issues.apache.org/jira/browse/HIVE-3412
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-3412.1.patch.txt, HIVE-3412.2.patch.txt


 TestCliDriver.repair fails on the following Hadoop versions:
 0.23.3, 3.0.0, 2.2.0-alpha
 repair.q fails with dfs -mkdir:
 [junit] mkdir: `../build/ql/test/data/warehouse/repairtable/p1=a/p2=a': No 
 such file or directory
 The problem is that HADOOP-8551 changed the HDFS shell syntax 
 for mkdir:
 https://issues.apache.org/jira/browse/HADOOP-8551
 all dfs -mkdir commands should now provide -p in order to execute without 
 error.
 This is an intentional change in HDFS. And HADOOP-8551 will be included in 
 0.23.3, 3.0.0, 2.2.0-alpha versions.
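The behavior change can be sketched with a local-filesystem analogy, where Python's os.mkdir and os.makedirs stand in for `dfs -mkdir` and `dfs -mkdir -p`. The paths below are throwaway temp paths, not the HDFS paths from repair.q:

```python
import os
import tempfile

# After HADOOP-8551, a plain mkdir fails when parent directories are
# missing; only the -p form creates them. os.mkdir/os.makedirs behave
# the same way on a local filesystem.
base = tempfile.mkdtemp()
nested = os.path.join(base, "warehouse", "repairtable", "p1=a", "p2=a")

try:
    os.mkdir(nested)           # like `dfs -mkdir` after HADOOP-8551
    created_without_p = True
except FileNotFoundError:      # parents don't exist yet
    created_without_p = False

os.makedirs(nested)            # like `dfs -mkdir -p`
print(created_without_p, os.path.isdir(nested))  # False True
```

The actual fix for the test is correspondingly small: the dfs -mkdir invocations in repair.q gain a -p so they succeed on Hadoop versions that include HADOOP-8551.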

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3204) Windows: Fix the unit tests which contains “!cmd” commands (Unix shell commands)

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548017#comment-13548017
 ] 

Hudson commented on HIVE-3204:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3204: Windows: Fix the unit tests which contain "!cmd" commands (Unix 
shell commands) (Kanna Karanam via Ashutosh Chauhan) (Revision 1360825)

 Result = ABORTED
hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1360825
Files : 
* /hive/trunk/ql/src/test/queries/clientnegative/exim_00_unsupported_schema.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_01_nonpart_over_loaded.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_02_all_part_over_overlap.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_03_nonpart_noncompat_colschema.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_04_nonpart_noncompat_colnumber.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_05_nonpart_noncompat_coltype.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_06_nonpart_noncompat_storage.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_07_nonpart_noncompat_ifof.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_08_nonpart_noncompat_serde.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_09_nonpart_noncompat_serdeparam.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_10_nonpart_noncompat_bucketing.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_11_nonpart_noncompat_sorting.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_13_nonnative_import.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_14_nonpart_part.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_15_part_nonpart.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_16_part_noncompat_schema.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_17_part_spec_underspec.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_18_part_spec_missing.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_19_external_over_existing.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_20_managed_location_over_existing.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_21_part_managed_external.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_22_export_authfail.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_23_import_exist_authfail.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_24_import_part_authfail.q
* /hive/trunk/ql/src/test/queries/clientnegative/exim_25_import_nonexist_authfail.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_00_nonpart_empty.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_01_nonpart.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_02_00_part_empty.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_02_part.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_03_nonpart_over_compat.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_04_all_part.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_04_evolved_parts.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_05_some_part.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_06_one_part.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_07_all_part_over_nonoverlap.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_08_nonpart_rename.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_09_part_spec_nonoverlap.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_10_external_managed.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_11_managed_external.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_12_external_location.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_13_managed_location.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_14_managed_location_over_existing.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_15_external_part.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_16_part_external.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_17_part_managed.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_18_part_external.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_19_00_part_external_location.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_19_part_external_location.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_20_part_managed_location.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_21_export_authsuccess.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_22_import_exist_authsuccess.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_23_import_part_authsuccess.q
* /hive/trunk/ql/src/test/queries/clientpositive/exim_24_import_nonexist_authsuccess.q
* /hive/trunk/ql/src/test/queries/clientpositive/insertexternal1.q
* /hive/trunk/ql/src/test/queries/clientpositive/multi_insert.q
* 

[jira] [Commented] (HIVE-3205) Bucketed mapjoin on partitioned table which has no partition throws NPE

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548018#comment-13548018
 ] 

Hudson commented on HIVE-3205:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3205 Bucketed mapjoin on partitioned table which has no partition 
throws NPE
(Navis via namit) (Revision 1363639)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1363639
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapredLocalTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketMapJoinOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/PrunedPartitionList.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/FetchWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredLocalWork.java
* /hive/trunk/ql/src/test/queries/clientpositive/bucketmapjoin1.q
* /hive/trunk/ql/src/test/queries/clientpositive/smb_mapjoin9.q
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/smb_mapjoin9.q.out


 Bucketed mapjoin on partitioned table which has no partition throws NPE
 ---

 Key: HIVE-3205
 URL: https://issues.apache.org/jira/browse/HIVE-3205
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
 Environment: ubuntu 10.04
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.10.0


 {code}
 create table hive_test_smb_bucket1 (key int, value string) partitioned by (ds string) clustered by (key) sorted by (key) into 2 buckets;
 create table hive_test_smb_bucket2 (key int, value string) partitioned by (ds string) clustered by (key) sorted by (key) into 2 buckets;
 set hive.optimize.bucketmapjoin = true;
 set hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 SELECT /* + MAPJOIN(b) */ b.key as k1, b.value, b.ds, a.key as k2
 FROM hive_test_smb_bucket1 a JOIN hive_test_smb_bucket2 b
 ON a.key = b.key WHERE a.ds = '2010-10-15' and b.ds='2010-10-15' and b.key IS NOT NULL;
 {code}
 throws NPE
 {noformat}
 2012-06-28 08:59:13,459 ERROR ql.Driver (SessionState.java:printError(400)) - FAILED: NullPointerException null
 java.lang.NullPointerException
   at org.apache.hadoop.hive.ql.optimizer.BucketMapJoinOptimizer$BucketMapjoinOptProc.process(BucketMapJoinOptimizer.java:269)
   at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
   at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
   at org.apache.hadoop.hive.ql.optimizer.BucketMapJoinOptimizer.transform(BucketMapJoinOptimizer.java:100)
   at org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:87)
   at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7564)
   at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:245)
   at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:50)
   at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:245)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
   at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:744)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:607)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   

[jira] [Commented] (HIVE-3207) FileUtils.tar does not close input files

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548020#comment-13548020
 ] 

Hudson commented on HIVE-3207:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3207 : FileUtils.tar does not close input files (Navis Ryu via 
Ashutosh Chauhan) (Revision 1356108)

 Result = ABORTED
hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1356108
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/FileUtils.java


 FileUtils.tar does not close input files
 

 Key: HIVE-3207
 URL: https://issues.apache.org/jira/browse/HIVE-3207
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.10.0

 Attachments: HIVE-3207.1.patch.txt


 It should close input files too. I missed this in HIVE-3206. (sorry)
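The idea behind the fix — close every input file handle, not just the archive stream — can be sketched in Python with the tarfile module (Hive's actual FileUtils.tar is Java; the file names and directory layout below are illustrative):

```python
import os
import tarfile
import tempfile

# Sketch: when building a tar archive, each input file opened for an entry
# must be closed, not only the archive stream. Context managers do both.
src_dir = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    with open(os.path.join(src_dir, name), "w") as f:
        f.write("data")

archive = os.path.join(src_dir, "out.tar")
with tarfile.open(archive, "w") as tar:           # archive closed on exit
    for name in ("a.txt", "b.txt"):
        path = os.path.join(src_dir, name)
        with open(path, "rb") as f:               # input file closed per entry
            info = tar.gettarinfo(path, arcname=name)
            tar.addfile(info, f)

with tarfile.open(archive) as tar:
    names = sorted(tar.getnames())
print(names)  # ['a.txt', 'b.txt']
```

Leaving input handles open, as the pre-fix Java code did, leaks file descriptors even though the archive itself is written correctly.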

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3206) FileUtils.tar assumes wrong directory in some cases

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548019#comment-13548019
 ] 

Hudson commented on HIVE-3206:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3206 FileUtils.tar assumes wrong directory in some cases. Navis Ryu 
(via egc) (Revision 1354816)

 Result = ABORTED
ecapriolo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354816
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/FileUtils.java


 FileUtils.tar assumes wrong directory in some cases
 ---

 Key: HIVE-3206
 URL: https://issues.apache.org/jira/browse/HIVE-3206
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
 Fix For: 0.10.0

 Attachments: hive-3206.1.patch.txt


 Bucket mapjoin throws exception archiving stored hashtables. 
 {noformat}
 hive> set hive.optimize.bucketmapjoin = true;
 hive> select /*+mapjoin(a)*/ a.key, a.value, b.value
  from srcbucket_mapjoin_part a join srcbucket_mapjoin_part_2 b
  on a.key=b.key;
 Total MapReduce jobs = 1
 12/06/28 12:36:18 WARN conf.HiveConf: DEPRECATED: Ignoring hive-default.xml found on the CLASSPATH at /home/navis/hive/conf/hive-default.xml
 Execution log at: /tmp/navis/navis_20120628123636_5298a863-605c-4b98-bbb3-0a132c85c5a3.log
 2012-06-28 12:36:18   Starting to launch local task to process map join;  maximum memory = 932118528
 2012-06-28 12:36:18   Processing rows:153 Hashtable size: 153 Memory usage:   1771376 rate:   0.002
 2012-06-28 12:36:18   Dump the hashtable into file: file:/tmp/navis/hive_2012-06-28_12-36-17_003_3016196240171705142/-local-10002/HashTable-Stage-1/MapJoin-a-00-srcbucket22.txt.hashtable
 2012-06-28 12:36:18   Upload 1 File to: file:/tmp/navis/hive_2012-06-28_12-36-17_003_3016196240171705142/-local-10002/HashTable-Stage-1/MapJoin-a-00-srcbucket22.txt.hashtable File size: 9644
 2012-06-28 12:36:19   Processing rows:309 Hashtable size: 156 Memory usage:   1844568 rate:   0.002
 2012-06-28 12:36:19   Dump the hashtable into file: file:/tmp/navis/hive_2012-06-28_12-36-17_003_3016196240171705142/-local-10002/HashTable-Stage-1/MapJoin-a-00-srcbucket23.txt.hashtable
 2012-06-28 12:36:19   Upload 1 File to: file:/tmp/navis/hive_2012-06-28_12-36-17_003_3016196240171705142/-local-10002/HashTable-Stage-1/MapJoin-a-00-srcbucket23.txt.hashtable File size: 10023
 2012-06-28 12:36:19   End of local task; Time Taken: 0.773 sec.
 Execution completed successfully
 Mapred Local Task Succeeded . Convert the Join into MapJoin
 Mapred Local Task Succeeded . Convert the Join into MapJoin
 Launching Job 1 out of 1
 Number of reduce tasks is set to 0 since there's no reduce operator
 java.io.IOException: This archives contains unclosed entries.
   at org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.finish(TarArchiveOutputStream.java:214)
   at org.apache.hadoop.hive.common.FileUtils.tar(FileUtils.java:276)
   at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:391)
   at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:137)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
   at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1324)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1110)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:944)
   at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:744)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:607)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
 Job Submission failed with exception 'java.io.IOException(This archives contains unclosed entries.)'
 java.lang.IllegalArgumentException: Can not create a Path from an empty string
   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:82)
   at org.apache.hadoop.fs.Path.<init>(Path.java:90)
   at 
 

[jira] [Commented] (HIVE-3303) Fix error code inconsistency bug in mapreduce_stack_trace.q and mapreduce_stack_trace_turnoff.q when running hive on hadoop23

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548021#comment-13548021
 ] 

Hudson commented on HIVE-3303:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3303: Fix error code inconsistency bug in mapreduce_stack_trace.q and 
mapreduce_stack_trace_turnoff.q when running hive on hadoop23 (Zhenxiao Luo via 
Ashutosh Chauhan) (Revision 1367413)

 Result = ABORTED
hashutosh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367413
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace.q
* /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_hadoop20.q
* /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_turnoff.q
* /hive/trunk/ql/src/test/queries/clientnegative/mapreduce_stack_trace_turnoff_hadoop20.q
* /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace.q.out
* /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_hadoop20.q.out
* /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff.q.out
* /hive/trunk/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff_hadoop20.q.out


 Fix error code inconsistency bug in mapreduce_stack_trace.q and 
 mapreduce_stack_trace_turnoff.q when running hive on hadoop23
 -

 Key: HIVE-3303
 URL: https://issues.apache.org/jira/browse/HIVE-3303
 Project: Hive
  Issue Type: Bug
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-3303.1.patch.txt


 When running Hive on hadoop23, mapreduce_stack_trace.q and 
 mapreduce_stack_trace_turnoff.q have inconsistent error code diffs:
 [junit] diff -a /home/cloudera/Code/hive/build/ql/test/logs/clientnegative/mapreduce_stack_trace.q.out /home/cloudera/Code/hive/ql/src/test/results/clientnegative/mapreduce_stack_trace.q.out
 [junit] < FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
 [junit] > FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask. Unable to initialize custom script.
 [junit] diff -a /home/cloudera/Code/hive/build/ql/test/logs/clientnegative/mapreduce_stack_trace_turnoff.q.out /home/cloudera/Code/hive/ql/src/test/results/clientnegative/mapreduce_stack_trace_turnoff.q.out
 [junit] 5c5
 [junit] < FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
 [junit] ---
 [junit] > FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask. Unable to initialize custom script
 The message behind error code 2 (which indicates "unable to initialize custom script") 
 could not be retrieved. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3306) SMBJoin/BucketMapJoin should be allowed only when join key expression is exactly matches with sort/cluster key

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548023#comment-13548023
 ] 

Hudson commented on HIVE-3306:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3306 SMBJoin/BucketMapJoin should be allowed only when join key 
expression is exactly matches
with sort/cluster key (Navis via namit) (Revision 1381669)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1381669
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketMapJoinOptimizer.java
* /hive/trunk/ql/src/test/queries/clientpositive/bucket_map_join_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/bucket_map_join_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/bucketmapjoin_negative3.q
* /hive/trunk/ql/src/test/results/clientpositive/bucket_map_join_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucket_map_join_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin_negative3.q.out


 SMBJoin/BucketMapJoin should be allowed only when join key expression is 
 exactly matches with sort/cluster key
 --

 Key: HIVE-3306
 URL: https://issues.apache.org/jira/browse/HIVE-3306
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3306.1.patch.txt


 CREATE TABLE bucket_small (key int, value string) CLUSTERED BY (key) SORTED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;
 load data local inpath '/home/navis/apache/oss-hive/data/files/srcsortbucket1outof4.txt' INTO TABLE bucket_small;
 load data local inpath '/home/navis/apache/oss-hive/data/files/srcsortbucket2outof4.txt' INTO TABLE bucket_small;
 CREATE TABLE bucket_big (key int, value string) CLUSTERED BY (key) SORTED BY (key) INTO 4 BUCKETS STORED AS TEXTFILE;
 load data local inpath '/home/navis/apache/oss-hive/data/files/srcsortbucket1outof4.txt' INTO TABLE bucket_big;
 load data local inpath '/home/navis/apache/oss-hive/data/files/srcsortbucket2outof4.txt' INTO TABLE bucket_big;
 load data local inpath '/home/navis/apache/oss-hive/data/files/srcsortbucket3outof4.txt' INTO TABLE bucket_big;
 load data local inpath '/home/navis/apache/oss-hive/data/files/srcsortbucket4outof4.txt' INTO TABLE bucket_big;
 select count(*) FROM bucket_small a JOIN bucket_big b ON a.key + a.key = b.key;
 select /* + MAPJOIN(a) */ count(*) FROM bucket_small a JOIN bucket_big b ON a.key + a.key = b.key;
 Both queries return 116. But with BucketMapJoin or SMBJoin, the query returns 61. This 
 should not be allowed because hash(a.key) != hash(a.key + a.key). 
 Bucket context should be utilized only when the join expression exactly matches 
 the sort/cluster key.
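Why the mismatch drops rows can be shown with a hypothetical model (plain Python, not Hive's code; `k % 2` stands in for Hive's bucketing hash, and the key values are illustrative):

```python
# Rows are routed to buckets by hash(cluster_key) % NBUCKETS, so a
# bucket-local join on a derived expression like a.key + a.key can miss
# pairs whose values land in different buckets.

NBUCKETS = 2

def bucket(k):
    return k % NBUCKETS  # stand-in for hash(key) % number_of_buckets

a_keys = [1, 2, 3]
b_keys = [2, 4, 6]  # every a row has a match under a.key + a.key = b.key

# A full (unbucketed) join finds all three matching pairs:
full = {(ak, bk) for ak in a_keys for bk in b_keys if ak + ak == bk}

# A bucket-local join only compares rows in coinciding buckets, so the
# pairs (1, 2) and (3, 6) are silently dropped:
bucketed = {(ak, bk) for ak in a_keys for bk in b_keys
            if ak + ak == bk and bucket(ak) == bucket(bk)}

print(sorted(full))      # [(1, 2), (2, 4), (3, 6)]
print(sorted(bucketed))  # [(2, 4)]
```

This mirrors the reported symptom: the bucketed plan silently returns a subset (61 rows instead of 116), which is why the optimizer must refuse the bucket context unless the join expression is exactly the clustering key.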

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3304) sort merge join should work if both the tables are sorted in descending order

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548022#comment-13548022
 ] 

Hudson commented on HIVE-3304:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3304. sort merge join should work if both the tables are sorted in 
descending order. (njain via kevinwilfong) (Revision 1369879)

 Result = ABORTED
kevinwilfong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1369879
Files : 
* /hive/trunk/data/files/SortCol1Col2.txt
* /hive/trunk/data/files/SortCol2Col1.txt
* /hive/trunk/data/files/SortDescCol1Col2.txt
* /hive/trunk/data/files/SortDescCol2Col1.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedMergeBucketMapJoinOptimizer.java
* /hive/trunk/ql/src/test/queries/clientpositive/bucket_map_join_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/bucket_map_join_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/sort_merge_join_desc_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/sort_merge_join_desc_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/sort_merge_join_desc_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/sort_merge_join_desc_4.q
* /hive/trunk/ql/src/test/results/clientpositive/bucket_map_join_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucket_map_join_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_4.q.out


 sort merge join should work if both the tables are sorted in descending order
 -

 Key: HIVE-3304
 URL: https://issues.apache.org/jira/browse/HIVE-3304
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0

 Attachments: hive.3304.1.patch, hive.3304.2.patch, hive.3304.3.patch, 
 hive.3304.4.patch


 Currently, sort merge join only works if both the tables are sorted in
 ascending order



[jira] [Commented] (HIVE-3505) log4j template has logging threshold that hides all audit logs

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548025#comment-13548025
 ] 

Hudson commented on HIVE-3505:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3505. log4j template has logging threshold that hides all audit logs 
(Sean Mackrory via cws) (Revision 1390278)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390278
Files : 
* /hive/trunk/common/src/java/conf/hive-log4j.properties
* /hive/trunk/ql/src/java/conf/hive-exec-log4j.properties


 log4j template has logging threshold that hides all audit logs
 --

 Key: HIVE-3505
 URL: https://issues.apache.org/jira/browse/HIVE-3505
 Project: Hive
  Issue Type: Bug
  Components: Logging
Reporter: Sean Mackrory
Assignee: Sean Mackrory
 Fix For: 0.10.0

 Attachments: HIVE-3505.patch.1, HIVE-3505.patch.2, HIVE-3505.patch.3, 
 HIVE-3505.patch.4


 With the template for log4j configuration provided in the tarball, audit 
 logging is hidden (it's logged as INFO). By making the log threshold a 
 parameter, this information remains hidden when using the CLI (which is 
 desired) but can be overridden when starting services to enable audit-logging.
 (This is primarily so that Hive is more functional out-of-the-box as 
 installed by Apache Bigtop).
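
A hedged sketch of what such a parameterized threshold can look like in a log4j 1.x properties template (the property names here are illustrative and not necessarily the exact ones committed):

```properties
# Illustrative log4j template fragment: instead of a hard-coded threshold,
# expose it as a property with a quiet default. A service can then re-enable
# audit logs (emitted at INFO) at startup with -Dhive.log.threshold=INFO,
# while the CLI keeps the quiet default.
hive.log.threshold=WARN
log4j.threshold=${hive.log.threshold}
```

log4j 1.x substitutes `${...}` from system properties when the configuration is loaded, which is what makes the startup-time override possible.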



[jira] [Commented] (HIVE-3311) Convert runtime exceptions to semantic exceptions for validation of alter table commands

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548024#comment-13548024
 ] 

Hudson commented on HIVE-3311:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3311 Convert runtime exceptions to semantic exceptions for 
validation of alter table commands (Sambavi Muthukrishnan via namit) (Revision 
1367035)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367035
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/test/results/clientnegative/alter_non_native.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure8.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure9.q.out


 Convert runtime exceptions to semantic exceptions for validation of alter 
 table commands
 

 Key: HIVE-3311
 URL: https://issues.apache.org/jira/browse/HIVE-3311
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
Reporter: Sambavi Muthukrishnan
Assignee: Sambavi Muthukrishnan
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3311.2.patch, HIVE-3311.3.patch

   Original Estimate: 4h
  Remaining Estimate: 4h

 validateAlterTableType in DDLTask.java performs several checks to ensure that 
 the alter table/view commands are correct (the operation matches the table 
 type, the command matches the table type).
 This JIRA tracks moving these to semantic exceptions.
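
The idea can be sketched as follows; class and method names are hypothetical, not Hive source, and the exception is simplified:

```java
// Hypothetical sketch: the same validation raised as a SemanticException during
// query analysis instead of a RuntimeException inside the executing DDL task,
// so the CLI reports the problem before any work starts.
class SemanticException extends Exception {
    SemanticException(String msg) { super(msg); }
}

public class AlterTableValidator {
    // Reject an ALTER command whose kind does not match the object type.
    static void validateAlterTableType(boolean targetIsView, boolean commandIsAlterView)
            throws SemanticException {
        if (targetIsView != commandIsAlterView) {
            throw new SemanticException("Cannot use ALTER "
                    + (commandIsAlterView ? "VIEW" : "TABLE") + " on a "
                    + (targetIsView ? "view" : "table"));
        }
    }

    public static void main(String[] args) {
        try {
            validateAlterTableType(true, false); // ALTER TABLE against a view
        } catch (SemanticException e) {
            System.out.println("SemanticException: " + e.getMessage());
        }
    }
}
```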



[jira] [Commented] (HIVE-3310) [Regression] TestMTQueries test is failing on trunk

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548026#comment-13548026
 ] 

Hudson commented on HIVE-3310:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3310 [Regression] TestMTQueries test is failing on trunk
(Navis via namit) (Revision 1367406)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1367406
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java


 [Regression] TestMTQueries test is failing on trunk
 ---

 Key: HIVE-3310
 URL: https://issues.apache.org/jira/browse/HIVE-3310
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Ashutosh Chauhan
Assignee: Navis
 Fix For: 0.10.0

 Attachments: HIVE-3310.1.patch.txt


 Hudson reported https://builds.apache.org/job/Hive-trunk-h0.21/1571/ this as 
 a regression. Previous build was clean 
 https://builds.apache.org/job/Hive-trunk-h0.21/1570/ 



[jira] [Commented] (HIVE-3406) Yet better error message in CLI on invalid column name

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548029#comment-13548029
 ] 

Hudson commented on HIVE-3406:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3406 Yet better error message in CLI on invalid column name
(Navis via namit) (Revision 1377314)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1377314
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/RowResolver.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/results/clientnegative/clustern4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/semijoin1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/semijoin2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/semijoin3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/semijoin4.q.out


 Yet better error message in CLI on invalid column name
 --

 Key: HIVE-3406
 URL: https://issues.apache.org/jira/browse/HIVE-3406
 Project: Hive
  Issue Type: Improvement
  Components: Diagnosability
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.10.0

 Attachments: HIVE-3406.1.patch.txt


 HIVE-2256 appended column names to the error message for an invalid column 
 reference, but these are internal names, not the aliases by which a column 
 can be referenced. For example, the query in clustern4.q (negative)
 {code}
 SELECT x.key as k1, x.value FROM SRC x CLUSTER BY key;
 {code}
 fails with the error message,
 {code}
 FAILED: SemanticException [Error 10004]: Line 2:50 Invalid table alias or 
 column reference 'key': (possible column names are: _col0, _col1)
 {code}
 But replacing 'key' with '_col0' or '_col1' does not make this query work. 
 The error message should be,
 {code}
 FAILED: SemanticException [Error 10004]: Line 2:50 Invalid table alias or 
 column reference 'key': (possible column names are: k1, x.value)
 {code}



[jira] [Commented] (HIVE-3409) Increase test.junit.timeout value

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548030#comment-13548030
 ] 

Hudson commented on HIVE-3409:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3409. Increase test.junit.timeout value (Carl Steinbach via cws) 
(Revision 1378470)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1378470
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/common/build.xml


 Increase test.junit.timeout value
 -

 Key: HIVE-3409
 URL: https://issues.apache.org/jira/browse/HIVE-3409
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.10.0

 Attachments: HIVE-3409.1.patch.txt






[jira] [Commented] (HIVE-3210) Support Bucketed mapjoin on partitioned table which has two or more partitions

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548031#comment-13548031
 ] 

Hudson commented on HIVE-3210:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3210 Support Bucketed mapjoin on partitioned table which has two or 
more partitions
(Navis via namit) (Revision 1362391)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1362391
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketMapJoinOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SortedMergeBucketMapJoinOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/PrunedPartitionList.java
* /hive/trunk/ql/src/test/queries/clientpositive/bucketmapjoin2.q
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin_negative2.q.out


 Support Bucketed mapjoin on partitioned table which has two or more partitions
 --

 Key: HIVE-3210
 URL: https://issues.apache.org/jira/browse/HIVE-3210
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Fix For: 0.10.0


 There seems to be no reason to prohibit a bucketed mapjoin on a table with 
 multiple partitions; it is even safer than doing a simple mapjoin.



[jira] [Commented] (HIVE-3564) hivetest.py: revision number and applied patch

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548032#comment-13548032
 ] 

Hudson commented on HIVE-3564:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3564 hivetest.py: revision number and applied patch
(Ivan Gorbachev via namit) (Revision 1397583)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1397583
Files : 
* /hive/trunk/testutils/ptest/hivetest.py


 hivetest.py: revision number and applied patch
 --

 Key: HIVE-3564
 URL: https://issues.apache.org/jira/browse/HIVE-3564
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Reporter: Ivan Gorbachev
Assignee: Ivan Gorbachev
 Fix For: 0.11.0

 Attachments: hive-3564.0.patch.txt


 A new option should be added to hivetest.py that shows the base revision 
 number and the applied patch.



[jira] [Commented] (HIVE-3560) Hive always prints a warning message when using remote metastore

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548034#comment-13548034
 ] 

Hudson commented on HIVE-3560:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3560 : Hive always prints a warning message when using remote 
metastore (Travis Crawford via Ashutosh Chauhan) (Revision 1409066)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409066
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java


 Hive always prints a warning message when using remote metastore
 

 Key: HIVE-3560
 URL: https://issues.apache.org/jira/browse/HIVE-3560
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Travis Crawford
Assignee: Travis Crawford
 Fix For: 0.10.0

 Attachments: HIVE-3560_logging_tweaks.1.patch, 
 HIVE-3560_logging_tweaks.2.patch


 This issue was discovered in HIVE-2585 and more details about why this issue 
 was filed are available there.
 Currently, if one sets {{hive.metastore.uris}}, the following warning will 
 always be displayed:
 {code}
 2012-07-24 15:23:58,647 [main] WARN org.apache.hadoop.hive.conf.HiveConf - 
 DEPRECATED: Configuration property hive.metastore.local no longer has any 
 effect. Make sure to provide a valid value for hive.metastore.uris if you are 
 connecting to a remote metastore.
 {code}
 The reason is {{javax.jdo.option.ConnectionURL}} has a default value and will 
 never be null. I set this property in {{hive-site.xml}} and walked through 
 the configuration loading in a debugger. If the value is not empty it takes 
 effect, and is ignored if empty.
 Since {{javax.jdo.option.ConnectionURL}} has a default and cannot be unset, 
 this warning will always be printed if someone sets {{hive.metastore.uris}}.
 Per the review comments, the error message was added to reduce user 
 confusion, and prevent surprises by using the wrong metastore (either 
 embedded or remote). In {{HiveMetaStoreClient.java}} we see a very clear info 
 message printed saying that a remote metastore is used.
 {code}
 LOG.info("Trying to connect to metastore with URI " + store);
 ...
 LOG.info("Connected to metastore.");
 {code}
 Since we clearly communicate to the user that a remote metastore at the given 
 URI is being used, we'll remove that message. Additionally, to further clarify 
 that a remote metastore is used, I'll make the following HiveMetaStoreClient 
 logging change:
 {code}
 LOG.debug("Trying to connect to remote HiveMetaStore: " + store);
 ...
 LOG.info("Connected to remote HiveMetaStore: " + store);
 {code}
 The change is that at debug level we print connection attempts, and we always 
 print which remote HiveMetaStore we actually connected to.



[jira] [Commented] (HIVE-3416) Fix TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when running Hive on hadoop23

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548036#comment-13548036
 ] 

Hudson commented on HIVE-3416:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3416 [jira] Fix 
TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when running Hive on 
hadoop23
(Zhenxiao Luo via Carl Steinbach)

Summary:
HIVE-3416: Fix TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when 
running Hive on hadoop23

TestAvroSerdeUtils determineSchemaCanReadSchemaFromHDFS is failing when running 
Hive on hadoop23:

$ant very-clean package -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23

$ant test -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23 -Dtestcase=TestAvroSerdeUtils

 <testcase classname="org.apache.hadoop.hive.serde2.avro.TestAvroSerdeUtils" 
name="determineSchemaCanReadSchemaFromHDFS" time="0.21">
<error message="org/apache/hadoop/net/StaticMapping" 
type="java.lang.NoClassDefFoundError">java.lang.NoClassDefFoundError: 
org/apache/hadoop/net/StaticMapping
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:534)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:489)
at org.apache.hadoop.hdfs.MiniDFSCluster.&lt;init&gt;(MiniDFSCluster.java:360)
at 
org.apache.hadoop.hive.serde2.avro.TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS(TestAvroSerdeUtils.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.net.StaticMapping
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
... 25 more
</error>
  </testcase>

Test Plan: EMPTY

Reviewers: JIRA

Differential Revision: https://reviews.facebook.net/D5025 (Revision 1380490)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380490
Files : 
* /hive/trunk/serde/ivy.xml
* /hive/trunk/shims/ivy.xml


 Fix TestAvroSerdeUtils.determineSchemaCanReadSchemaFromHDFS when running Hive 
 on hadoop23
 -

 Key: HIVE-3416
 URL: https://issues.apache.org/jira/browse/HIVE-3416
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-3416.1.patch.txt


 TestAvroSerdeUtils determineSchemaCanReadSchemaFromHDFS is failing when 
 running Hive on hadoop23:
 $ant very-clean package -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
 -Dhadoop.mr.rev=23
 $ant test -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
 -Dhadoop.mr.rev=23 -Dtestcase=TestAvroSerdeUtils
  testcase 

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548037#comment-13548037
 ] 

Hudson commented on HIVE-3413:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3413. Fix pdk.PluginTest on hadoop23 (Zhenxiao Luo via cws) (Revision 
1380478)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1380478
Files : 
* /hive/trunk/builtins/build.xml
* /hive/trunk/builtins/ivy.xml
* /hive/trunk/pdk/scripts/build-plugin.xml
* /hive/trunk/pdk/test-plugin/test/conf
* /hive/trunk/pdk/test-plugin/test/conf/log4j.properties


 Fix pdk.PluginTest on hadoop23
 --

 Key: HIVE-3413
 URL: https://issues.apache.org/jira/browse/HIVE-3413
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-3413.1.patch.txt, HIVE-3413.2.patch.txt, 
 HIVE-3413.3.patch.txt


 When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
 test:
 [junit] Running org.apache.hive.pdk.PluginTest
 [junit] Hive history 
 file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
 [junit] Total MapReduce jobs = 1
 [junit] Launching Job 1 out of 1
 [junit] Number of reduce tasks determined at compile time: 1
 [junit] In order to change the average load for a reducer (in bytes):
 [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
 [junit] In order to limit the maximum number of reducers:
 [junit]   set hive.exec.reducers.max=<number>
 [junit] In order to set a constant number of reducers:
 [junit]   set mapred.reduce.tasks=<number>
 [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is 
 deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the 
 log4j.properties files.
 [junit] Execution log at: 
 /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
 [junit] java.io.IOException: Cannot initialize Cluster. Please check your 
 configuration for mapreduce.framework.name and the correspond server 
 addresses.
 [junit] at 
 org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
 [junit] at org.apache.hadoop.mapreduce.Cluster.&lt;init&gt;(Cluster.java:85)
 [junit] at org.apache.hadoop.mapreduce.Cluster.&lt;init&gt;(Cluster.java:78)
 [junit] at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
 [junit] at 
 org.apache.hadoop.mapred.JobClient.&lt;init&gt;(JobClient.java:466)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
 [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [junit] at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 [junit] at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 [junit] at java.lang.reflect.Method.invoke(Method.java:616)
 [junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
 [junit] Job Submission failed with exception 'java.io.IOException(Cannot 
 initialize Cluster. Please check your configuration for 
 mapreduce.framework.name and the correspond server addresses.)'
 [junit] Execution failed with exit status: 1
 [junit] Obtaining error information
 [junit]
 [junit] Task failed!
 [junit] Task ID:
 [junit]   Stage-1
 [junit]
 [junit] Logs:
 [junit]
 [junit] /tmp/cloudera/hive.log
 [junit] FAILED: Execution Error, return code 1 from 
 org.apache.hadoop.hive.ql.exec.MapRedTask])
 [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
 With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
 Testsuite: org.apache.hive.pdk.PluginTest
 Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
 - Standard Error -
 GLOBAL SETUP:  Copying file: 
 file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
 Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
 Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
 Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
 org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
 Hive history 
 file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
 GLOBAL TEARDOWN:
 Hive history 
 file=/tmp/cloudera/hive_job_log_cloudera_201208281845_25225.txt
 OK
 Time taken: 6.874 seconds
 OK
 Time taken: 0.512 seconds
 -  ---
 Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris 
 took 4.428 sec
 

[jira] [Commented] (HIVE-3557) Access to external URLs in hivetest.py

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13548039#comment-13548039
 ] 

Hudson commented on HIVE-3557:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3557. Access to external URLs in hivetest.py. (Ivan Gorbachev via 
kevinwilfong) (Revision 1407692)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1407692
Files : 
* /hive/trunk/testutils/ptest/Ssh.py
* /hive/trunk/testutils/ptest/hivetest.py


 Access to external URLs in hivetest.py 
 ---

 Key: HIVE-3557
 URL: https://issues.apache.org/jira/browse/HIVE-3557
 Project: Hive
  Issue Type: Improvement
Reporter: Ivan Gorbachev
Assignee: Ivan Gorbachev
 Fix For: 0.10.0

 Attachments: jira-3557.0.patch, jira-3557.1.patch


 1. Migrate all non-HTTP URLs to HTTP.
 2. Add HTTP_PROXY support.


