[jira] [Commented] (HIVE-5056) MapJoinProcessor ignores order of values in removing RS

2013-09-09 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761640#comment-13761640
 ] 

Navis commented on HIVE-5056:
-

{noformat}
TS1 TS2
RS1 RS2
 JOIN

TS1=a1:a2
TS2=b1:b2
JOIN=T1.a1:T2.a2:T2.b2:T2.b1
{noformat}
For MapJoin, RS1/RS2 should be removed here. Expressions of JOIN 
(T1.a1:T2.a2:T2.b2:T2.b1) based on RS1/RS2 should be rebased on TS1/TS2.

Before the patch, the expressions for JOIN are remade by iterating the columns of 
TS1/TS2, and the old expressions are used only for checking existence, like this:
{noformat}
for (pos = 0; pos < newParentOps.size(); pos++) {
  Map<String, ExprNodeDesc> colExprMap = op.getColumnExprMap();
  for (Map.Entry<Byte, List<ExprNodeDesc>> entry : valueExprs.entrySet()) {
{noformat}

And that may change the order. In this case, T1.a1:T2.a2:T2.b2:T2.b1 is 
changed to T1.a1:T2.a2:T2.b1:T2.b2
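
A rough sketch of the order-preserving direction (illustration only, not the actual patch; rebaseInOrder, joinValues and rsColExprMap are made-up names):
{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hive.ql.plan.ExprNodeColumnDesc;
import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;

// Illustration only: walk the JOIN's value expressions for one parent in their
// original order and map each through the RS's column->expression map, so the
// original T1.a1:T2.a2:T2.b2:T2.b1 ordering is preserved after RS removal.
static List<ExprNodeDesc> rebaseInOrder(List<ExprNodeDesc> joinValues,
    Map<String, ExprNodeDesc> rsColExprMap) {
  List<ExprNodeDesc> rebased = new ArrayList<ExprNodeDesc>();
  for (ExprNodeDesc value : joinValues) {                    // original order
    ExprNodeDesc onParent = value instanceof ExprNodeColumnDesc
        ? rsColExprMap.get(((ExprNodeColumnDesc) value).getColumn())
        : null;
    rebased.add(onParent != null ? onParent : value);        // fall back if not a plain column
  }
  return rebased;
}
{noformat}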

 MapJoinProcessor ignores order of values in removing RS
 ---

 Key: HIVE-5056
 URL: https://issues.apache.org/jira/browse/HIVE-5056
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-5056.D12147.1.patch, HIVE-5056.D12147.2.patch


 http://www.mail-archive.com/user@hive.apache.org/msg09073.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5056) MapJoinProcessor ignores order of values in removing RS

2013-09-09 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-5056:
--

Attachment: HIVE-5056.D12147.3.patch

navis updated the revision HIVE-5056 [jira] MapJoinProcessor ignores order of 
values in removing RS.

  Rebased to trunk

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D12147

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D12147?vs=37563&id=39747#toc

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
  ql/src/test/queries/clientpositive/auto_join_reordering_values.q
  ql/src/test/results/clientpositive/auto_join_reordering_values.q.out

To: JIRA, navis


 MapJoinProcessor ignores order of values in removing RS
 ---

 Key: HIVE-5056
 URL: https://issues.apache.org/jira/browse/HIVE-5056
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-5056.D12147.1.patch, HIVE-5056.D12147.2.patch, 
 HIVE-5056.D12147.3.patch


 http://www.mail-archive.com/user@hive.apache.org/msg09073.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-5242) Trunk fails to compile

2013-09-09 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis resolved HIVE-5242.
-

Resolution: Fixed

Carl already fixed this without an issue number, so I'm closing this.

 Trunk fails to compile
 --

 Key: HIVE-5242
 URL: https://issues.apache.org/jira/browse/HIVE-5242
 Project: Hive
  Issue Type: Bug
Reporter: Ashutosh Chauhan
 Attachments: HIVE-5242.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5240) Column statistics on a partitioned column should fail early with proper error message

2013-09-09 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761646#comment-13761646
 ] 

Navis commented on HIVE-5240:
-

Should partition columns be ignored instead of throwing an exception?
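
If they are to be ignored, a minimal sketch of that option (illustration only; dropPartitionColumns is a made-up helper) would be to filter partition columns out of the requested list before building the column-stats plan:
{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.ql.metadata.Table;

// Illustration only: silently drop partition columns from the requested column
// list so "analyze ... compute statistics for columns" never computes stats on them.
static List<String> dropPartitionColumns(Table tbl, List<String> requested) {
  Set<String> partCols = new HashSet<String>();
  for (FieldSchema fs : tbl.getPartCols()) {        // partition keys of the table
    partCols.add(fs.getName().toLowerCase());
  }
  List<String> kept = new ArrayList<String>();
  for (String col : requested) {
    if (!partCols.contains(col.toLowerCase())) {
      kept.add(col);
    }
  }
  return kept;
}
{code}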

 Column statistics on a partitioned column should fail early with proper error 
 message
 -

 Key: HIVE-5240
 URL: https://issues.apache.org/jira/browse/HIVE-5240
 Project: Hive
  Issue Type: Bug
  Components: Statistics
Affects Versions: 0.12.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: statistics
 Fix For: 0.12.0

 Attachments: HIVE-5240.txt


 When computing column statistics on a partitioned table, if one of the 
 specified columns is the partition column, an IndexOutOfBoundsException is 
 thrown.
 The following analyze query throws an IndexOutOfBoundsException during the 
 semantic analysis phase:
 {code}hive> analyze table qlog_1m_part partition(year=5) compute statistics 
 for columns year,month,week,type;
 FAILED: IndexOutOfBoundsException Index: 1, Size: 0{code} 
 If the partition column is specified last, as below, then the same exception 
 is thrown at runtime:
 {code}hive> analyze table qlog_1m_part partition(year=5) compute statistics 
 for columns month,week,type,year;
 Hadoop job information for null: number of mappers: 0; number of reducers: 0
 2013-09-06 18:05:06,587 null map = 0%,  reduce = 100%
 Ended Job = job_local861862820_0001
 Execution completed successfully
 Mapred Local Task Succeeded . Convert the Join into MapJoin
 java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
   at java.util.LinkedList.entry(LinkedList.java:365)
   at java.util.LinkedList.get(LinkedList.java:315)
   at 
 org.apache.hadoop.hive.ql.exec.ColumnStatsTask.constructColumnStatsFromPackedRow(ColumnStatsTask.java:262)
   at 
 org.apache.hadoop.hive.ql.exec.ColumnStatsTask.persistPartitionStats(ColumnStatsTask.java:302)
   at 
 org.apache.hadoop.hive.ql.exec.ColumnStatsTask.execute(ColumnStatsTask.java:345)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1407)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1187)
   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1017)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:885)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
   at 
 org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Bootstrap in Hive

2013-09-09 Thread Navis류승우
I don't know anything about statistics, but in your case, duplicating
splits (x100?) by using a custom InputFormat might be much simpler.
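
Something along these lines might work (a rough sketch only, assuming the old
mapred API; the class name and copy count are made up):

import java.io.IOException;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

// Rough sketch: hand every split to the job COPIES times, so the same query
// effectively runs once per resample without planning it 100 times.
public class DuplicatingInputFormat extends TextInputFormat {
  private static final int COPIES = 100;   // assumed number of bootstrap resamples

  @Override
  public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
    InputSplit[] original = super.getSplits(job, numSplits);
    InputSplit[] duplicated = new InputSplit[original.length * COPIES];
    for (int i = 0; i < original.length; i++) {
      for (int c = 0; c < COPIES; c++) {
        duplicated[i * COPIES + c] = original[i];  // same split, processed COPIES times
      }
    }
    return duplicated;
  }
}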


2013/9/6 Sameer Agarwal samee...@cs.berkeley.edu

 Hi All,

 In order to support approximate queries in Hive and BlinkDB (
 http://blinkdb.org/), I am working towards implementing the bootstrap
 primitive (http://en.wikipedia.org/wiki/Bootstrapping_(statistics)) in
 Hive
 that can help us quantify the error incurred by a query Q when it
 operates on a small sample S of data. This method essentially requires
 launching the query Q simultaneously on a large number of samples of
 original data (typically >= 100).

 The downside to this is of course that we have to launch the same query 100
 times, but the upside is that each of these queries would be so small that it
 can be executed on a single machine. So, in order to do this efficiently,
 we would ideally like to execute 100 instances of the query simultaneously
 on the master and all available worker nodes. Furthermore, in order to
 avoid generating the query plan 100 times on the master, we can do either
 of the two things:

1. Generate the query plan once on the master, serialize it and ship it
to the worker nodes.
2. Enable the worker nodes to access the Metastore so that they can
generate the query plan on their own in parallel.

 Given that making the query plan serializable (1) would require a lot of
 refactoring of the current code, is (2) a viable option? Moreover, since
 (2) will increase the load on the existing Metastore by 100x, is there any
 other option?

 Thanks,
 Sameer

 --
 Sameer Agarwal
 Computer Science | AMP Lab | UC Berkeley
 http://cs.berkeley.edu/~sameerag



[jira] [Commented] (HIVE-5089) Non query PreparedStatements are always failing on remote HiveServer2

2013-09-09 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761652#comment-13761652
 ] 

Navis commented on HIVE-5089:
-

This seems to be HIVE-5060.

 Non query PreparedStatements are always failing on remote HiveServer2
 -

 Key: HIVE-5089
 URL: https://issues.apache.org/jira/browse/HIVE-5089
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0
Reporter: Julien Letrouit
 Fix For: 0.12.0


 This is reproducing the issue systematically:
 {noformat}
 import org.apache.hive.jdbc.HiveDriver;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 public class Main {
   public static void main(String[] args) throws Exception {
 DriverManager.registerDriver(new HiveDriver());
  Connection conn = DriverManager.getConnection("jdbc:hive2://someserver");
  PreparedStatement smt = conn.prepareStatement("SET hivevar:test=1");
 smt.execute(); // Exception here
 conn.close();
   }
 }
 {noformat}
 It is producing the following stacktrace:
 {noformat}
 Exception in thread "main" java.sql.SQLException: Could not create ResultSet: 
 null
   at 
 org.apache.hive.jdbc.HiveQueryResultSet.retrieveSchema(HiveQueryResultSet.java:183)
   at 
  org.apache.hive.jdbc.HiveQueryResultSet.<init>(HiveQueryResultSet.java:134)
   at 
 org.apache.hive.jdbc.HiveQueryResultSet$Builder.build(HiveQueryResultSet.java:122)
   at 
 org.apache.hive.jdbc.HivePreparedStatement.executeImmediate(HivePreparedStatement.java:194)
   at 
 org.apache.hive.jdbc.HivePreparedStatement.execute(HivePreparedStatement.java:137)
   at Main.main(Main.java:12)
 Caused by: org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:346)
   at 
 org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:423)
   at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:405)
   at 
 org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Client.recv_GetResultSetMetadata(TCLIService.java:466)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Client.GetResultSetMetadata(TCLIService.java:453)
   at 
 org.apache.hive.jdbc.HiveQueryResultSet.retrieveSchema(HiveQueryResultSet.java:154)
   ... 5 more
 {noformat}
 I tried to fix it; unfortunately, the standalone server used in unit tests does 
 not reproduce the issue. The following test, added to TestJdbcDriver2, passes:
 {noformat}
   public void testNonQueryPrepareStatement() throws Exception {
 try {
    PreparedStatement ps = con.prepareStatement("SET hivevar:test=1");
   boolean hasResultSet = ps.execute();
   assertTrue(hasResultSet);
   ps.close();
 } catch (Exception e) {
   e.printStackTrace();
   fail(e.toString());
 }
   }
 {noformat}
 Any guidance on how to reproduce it in tests would be appreciated.
 Impact: the data analysis tools we are using execute PreparedStatements. The 
 use of custom UDFs forces us to add 'ADD JAR ...' and 'CREATE TEMPORARY 
 FUNCTION ...' statements to our queries. Those statements fail when executed 
 as PreparedStatements.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761654#comment-13761654
 ] 

Viraj Bhat commented on HIVE-4331:
--

Hi Sushanth,
 Thanks for your offer to help. I went over the trunk code of HCat and 
modified HCat's storage handlers so that they are reflected in the 
org.apache.hcatalog set. There are some other things I observed in the trunk, 
such as:
1) Most of the pom files have 0.12.0-SNAPSHOT, so I changed them to reflect 
0.13.0-SNAPSHOT.
2) Had to modify HCatStorageHandler.java in the org.apache.hcatalog.mapreduce 
package, as it had the annotation "* @deprecated Use/modify {@link 
org.apache.hive.hcatalog.mapreduce.HCatStorageHandler} instead", to get past 
the compile, since we do not have this class in org.apache.hive.hcatalog.

Let me post the new patch and wait for the pre-commit build status.
Viraj 

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4354) Configurations on connection url for jdbc2 is not working

2013-09-09 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761656#comment-13761656
 ] 

Navis commented on HIVE-4354:
-

[~thejas] How about just reusing this issue? (after editing description)

 Configurations on connection url for jdbc2 is not working
 -

 Key: HIVE-4354
 URL: https://issues.apache.org/jira/browse/HIVE-4354
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-4354.D10239.1.patch


 In the jdbc2 connection URL, the query part is for hiveConf and the fragment 
 part is for session vars, but it's not working.
 {noformat}
 beeline> !connect jdbc:hive2://localhost:1#var1=value1;var2=value2 scott 
 tiger  
 scan complete in 2ms
 Connecting to jdbc:hive2://localhost:1#var1=value1;var2=value2
 Connected to: Hive (version 0.10.0)
 Driver: Hive (version 0.11.0-SNAPSHOT)
 Transaction isolation: TRANSACTION_REPEATABLE_READ
 0: jdbc:hive2://localhost:1#var1=value1> set var1;
 ++
 |set |
 ++
 | var1 is undefined  |
 ++
 1 row selected (0.245 seconds)
 {noformat}
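
For reference, the intended usage once this works (a sketch only; the host, port,
database and hive.exec.parallel key below are placeholders):
{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Jdbc2UrlExample {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // Query part (?key=value) should feed hiveConf; fragment part (#key=value)
    // should feed session vars.
    Connection conn = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default?hive.exec.parallel=true"
            + "#var1=value1;var2=value2", "scott", "tiger");
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("set var1");  // should report var1=value1 once fixed
    while (rs.next()) {
      System.out.println(rs.getString(1));
    }
    conn.close();
  }
}
{noformat}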

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4354) Configurations on connection url for jdbc2 is not working

2013-09-09 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4354:
--

Attachment: HIVE-4354.D10239.2.patch

navis updated the revision HIVE-4354 [jira] Configurations on connection url 
for jdbc2 is not working.

  Rebased to trunk

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D10239

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D10239?vs=32013&id=39753#toc

AFFECTED FILES
  jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
  jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
  service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java
  
service/src/java/org/apache/hive/service/cli/session/HiveSessionImplwithUGI.java
  service/src/java/org/apache/hive/service/cli/session/SessionManager.java
  
service/src/java/org/apache/hive/service/cli/thrift/EmbeddedThriftCLIService.java
  service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java
  service/src/java/org/apache/hive/service/server/HiveServer2.java
  service/src/test/org/apache/hive/service/cli/TestEmbeddedThriftCLIService.java

To: JIRA, navis


 Configurations on connection url for jdbc2 is not working
 -

 Key: HIVE-4354
 URL: https://issues.apache.org/jira/browse/HIVE-4354
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-4354.D10239.1.patch, HIVE-4354.D10239.2.patch


 In the jdbc2 connection URL, the query part is for hiveConf and the fragment 
 part is for session vars, but it's not working.
 {noformat}
 beeline> !connect jdbc:hive2://localhost:1#var1=value1;var2=value2 scott 
 tiger  
 scan complete in 2ms
 Connecting to jdbc:hive2://localhost:1#var1=value1;var2=value2
 Connected to: Hive (version 0.10.0)
 Driver: Hive (version 0.11.0-SNAPSHOT)
 Transaction isolation: TRANSACTION_REPEATABLE_READ
 0: jdbc:hive2://localhost:1#var1=value1> set var1;
 ++
 |set |
 ++
 | var1 is undefined  |
 ++
 1 row selected (0.245 seconds)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5220) Add option for removing intermediate directory for partition, which is empty

2013-09-09 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761659#comment-13761659
 ] 

Navis commented on HIVE-5220:
-

The failed tests seem unrelated to this patch. I confirmed those tests pass locally.

 Add option for removing intermediate directory for partition, which is empty
 

 Key: HIVE-5220
 URL: https://issues.apache.org/jira/browse/HIVE-5220
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-5220.D12729.1.patch


 For a deeply nested partitioned table, intermediate directories are not 
 removed even if no partitions remain in them after dropping partitions.
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=01/e=01
 /deep_part/c=09/d=01/e=02
 /deep_part/c=09/d=02
 /deep_part/c=09/d=02/e=01
 /deep_part/c=09/d=02/e=02
 {noformat}
 After removing the partition (c='09'), the directories remain like this:
 {noformat}
 /deep_part/c=09/d=01
 /deep_part/c=09/d=02
 {noformat}
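
A minimal sketch of the proposed option (illustration only; removeEmptyParents is
a made-up helper): after dropping a partition, walk up from its directory and
delete ancestors that have become empty, stopping at the table root.
{noformat}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustration only: remove now-empty intermediate directories between the
// dropped partition's directory and the table root.
static void removeEmptyParents(FileSystem fs, Path droppedPartDir, Path tableRoot)
    throws IOException {
  Path current = droppedPartDir.getParent();
  while (current != null && !current.equals(tableRoot)) {
    if (fs.exists(current) && fs.listStatus(current).length == 0) {
      fs.delete(current, false);   // non-recursive: only succeeds because it is empty
      current = current.getParent();
    } else {
      break;                       // stop at the first non-empty ancestor
    }
  }
}
{noformat}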

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5221) Issue in column type with data type as BINARY

2013-09-09 Thread Arun Vasu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Vasu updated HIVE-5221:


Priority: Critical  (was: Major)

 Issue in column type with data type as BINARY
 

 Key: HIVE-5221
 URL: https://issues.apache.org/jira/browse/HIVE-5221
 Project: Hive
  Issue Type: Bug
Reporter: Arun Vasu
Priority: Critical

 Hi,
 I am using Hive 0.10. When I create an external table with a column of type 
 BINARY, the query result on the table shows junk values for the column with 
 the binary datatype.
 Please find below the query I have used to create the table:
 CREATE EXTERNAL TABLE BOOL1(NB BOOLEAN,email STRING, bitfld BINARY)
  ROW FORMAT DELIMITED
FIELDS TERMINATED BY '^'
LINES TERMINATED BY '\n'
 STORED AS TEXTFILE
 LOCATION '/user/hivetables/testbinary';
 The query I have used is : select * from bool1
 The sample data in the hdfs file is:
 0^a...@abc.com^001
 1^a...@abc.com^010
  ^a...@abc.com^011
  ^a...@abc.com^100
 t^a...@abc.com^101
 f^a...@abc.com^110
 true^a...@abc.com^111
 false^a...@abc.com^001
 123^^01100010
 12344^^0111
 Please share your inputs if it is possible.
 Thanks,
 Arun

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Attachment: (was: HIVE-4331.patch)

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Status: Open  (was: Patch Available)

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Status: Patch Available  (was: Open)

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-5244) Concurrent query compilation in HiveServer2

2013-09-09 Thread Vaibhav Gumashta (JIRA)
Vaibhav Gumashta created HIVE-5244:
--

 Summary: Concurrent query compilation in HiveServer2
 Key: HIVE-5244
 URL: https://issues.apache.org/jira/browse/HIVE-5244
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta


Currently, in an HS2 process, multiple threads block on query compilation 
(check Driver#runInternal) due to coarse-grained locking at the Driver 
class/object level. This can be a severe performance bottleneck, especially 
for a remote server serving multiple clients.
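
One possible direction (a sketch only, not a design; PerSessionCompileLock is a
made-up name) is to narrow the lock scope from the whole process to the client
session:
{noformat}
import java.util.concurrent.locks.ReentrantLock;

// Sketch: one compile lock per client session, so compilations from different
// sessions no longer serialize against each other across the whole HS2 process.
public class PerSessionCompileLock {
  private final ReentrantLock lock = new ReentrantLock();

  public void runCompile(Runnable compileTask) {
    lock.lock();                   // still serializes compiles within a single session
    try {
      compileTask.run();
    } finally {
      lock.unlock();
    }
  }
}
{noformat}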

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Attachment: HIVE-4331.patch

Rebase due to HIVE-5233 and HIVE-5236 

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Status: Open  (was: Patch Available)

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2093) create/drop database should populate inputs/outputs and check concurrency and user permission

2013-09-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761676#comment-13761676
 ] 

Hive QA commented on HIVE-2093:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12602091/HIVE-2093.D12807.1.patch

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 3092 tests 
executed
*Failed tests:*
{noformat}
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testDropDatabaseFail1
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testShowDatabases
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testDropDatabaseFail2
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testDropTableFail4
org.apache.hive.hcatalog.fileformats.TestOrcDynamicPartitioned.testHCatDynamicPartitionedTable
org.apache.hive.hcatalog.mapreduce.TestHCatExternalDynamicPartitioned.testHCatDynamicPartitionedTable
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testShowDatabases
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testShowTablesFail
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testShowTablesFail
org.apache.hcatalog.pig.TestHCatLoaderComplexSchema.testMapWithComplexData
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testDropDatabaseFail1
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testDatabaseOps
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testTableOps
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testDescSwitchDatabaseFail
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_exim_19_00_part_external_location
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testCreateTableFail4
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testDropDatabaseFail2
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testCreateTableFail4
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testDropTableFail4
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testDatabaseOps
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testDescSwitchDatabaseFail
org.apache.hive.hcatalog.security.TestHdfsAuthorizationProvider.testCreateTableFail3
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testCreateTableFail3
org.apache.hcatalog.security.TestHdfsAuthorizationProvider.testTableOps
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver_cascade_dbdrop_hadoop20
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/666/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/666/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 25 tests failed
{noformat}

This message is automatically generated.

 create/drop database should populate inputs/outputs and check concurrency and 
 user permission
 -

 Key: HIVE-2093
 URL: https://issues.apache.org/jira/browse/HIVE-2093
 Project: Hive
  Issue Type: Bug
  Components: Authorization, Locking, Metastore, Security
Reporter: Namit Jain
Assignee: Navis
 Attachments: HIVE.2093.1.patch, HIVE.2093.2.patch, HIVE.2093.3.patch, 
 HIVE.2093.4.patch, HIVE.2093.5.patch, HIVE-2093.6.patch, 
 HIVE-2093.D12807.1.patch


 concurrency and authorization are needed for create/drop table. Also to make 
 concurrency work, it's better to have LOCK/UNLOCK DATABASE and SHOW LOCKS 
 DATABASE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Status: Patch Available  (was: Open)

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Attachment: (was: HIVE-4331.patch)

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Viraj Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Bhat updated HIVE-4331:
-

Attachment: HIVE-4331.patch

Rebase due to HIVE-5233 and HIVE-5236

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances of the HCat source code, which point to 
 HCatStorageHandler to use the HiveStorageHandler including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5071) Address thread safety issues with HiveHistoryUtil

2013-09-09 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-5071:


   Resolution: Fixed
Fix Version/s: (was: 0.12.0)
   0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, Thanks Teddy!

 Address thread safety issues with HiveHistoryUtil
 -

 Key: HIVE-5071
 URL: https://issues.apache.org/jira/browse/HIVE-5071
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Thiruvel Thirumoolan
Assignee: Teddy Choi
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5071.1.patch.txt


 HiveHistoryUtil.parseLine() is not thread safe, it could be used by multiple 
 clients of HWA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5056) MapJoinProcessor ignores order of values in removing RS

2013-09-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761726#comment-13761726
 ] 

Hive QA commented on HIVE-5056:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12602093/HIVE-5056.D12147.3.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 3087 tests executed
*Failed tests:*
{noformat}
org.apache.hcatalog.pig.TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag
org.apache.hive.hcatalog.mapreduce.TestHCatExternalHCatNonPartitioned.testHCatNonPartitionedTable
org.apache.hive.hcatalog.mapreduce.TestHCatExternalDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
org.apache.hive.hcatalog.pig.TestHCatStorer.testPartColsInData
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/667/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/667/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

 MapJoinProcessor ignores order of values in removing RS
 ---

 Key: HIVE-5056
 URL: https://issues.apache.org/jira/browse/HIVE-5056
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
 Attachments: HIVE-5056.D12147.1.patch, HIVE-5056.D12147.2.patch, 
 HIVE-5056.D12147.3.patch


 http://www.mail-archive.com/user@hive.apache.org/msg09073.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-5245) hive create table as select can not work(not support) with join on operator

2013-09-09 Thread jeff little (JIRA)
jeff little created HIVE-5245:
-

 Summary: hive create table as select can not work(not support) 
with join on operator
 Key: HIVE-5245
 URL: https://issues.apache.org/jira/browse/HIVE-5245
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: jeff little
 Fix For: 0.11.0


Hello everyone, recently I came across a Hive problem as below:
hive (test)> create table test_09 as
select a.* from test_01 a
inner join test_02 b
on (a.id=b.id);
Automatically selecting local only mode for query
Total MapReduce jobs = 2
setting HADOOP_USER_NAMEhadoop
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
2013-09-09 05:22:36 Starting to launch local task to process map join;  
maximum memory = 932118528
2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4   
Memory usage:   113068056   rate:   0.121
2013-09-09 05:22:37 Dump the hashtable into file: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
2013-09-09 05:22:37 Upload 1 File to: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 File size: 788
2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-09-09 17:22:41,807 null map = 0%,  reduce = 0%
2013-09-09 17:22:44,814 null map = 100%,  reduce = 0%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Stage-7 is filtered out by condition resolver.
OK
Time taken: 13.138 seconds
Problem:
I can't get the created table; this CTAS does not work, and the table is not 
created by this HQL statement at all. Who can explain this for me? Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5245) hive create table as select(CTAS) can not work(not support) with join on operator

2013-09-09 Thread jeff little (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jeff little updated HIVE-5245:
--

Summary: hive create table as select(CTAS) can not work(not support) with 
join on operator  (was: hive create table as select can not work(not support) 
with join on operator)

 hive create table as select(CTAS) can not work(not support) with join on 
 operator
 -

 Key: HIVE-5245
 URL: https://issues.apache.org/jira/browse/HIVE-5245
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: jeff little
  Labels: CTAS, hive
 Fix For: 0.11.0

   Original Estimate: 96h
  Remaining Estimate: 96h

 Hello everyone, recently I came across a Hive problem as below:
 hive (test)> create table test_09 as
 select a.* from test_01 a
 inner join test_02 b
 on (a.id=b.id);
 Automatically selecting local only mode for query
 Total MapReduce jobs = 2
 setting HADOOP_USER_NAMEhadoop
 13/09/09 17:22:36 WARN conf.Configuration: 
 file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
  attempt to override final parameter: mapred.system.dir;  Ignoring.
 13/09/09 17:22:36 WARN conf.Configuration: 
 file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
  attempt to override final parameter: mapred.local.dir;  Ignoring.
 Execution log at: /tmp/hadoop/.log
 2013-09-09 05:22:36 Starting to launch local task to process map join;
   maximum memory = 932118528
 2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4 
   Memory usage:   113068056   rate:   0.121
 2013-09-09 05:22:37 Dump the hashtable into file: 
 file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 2013-09-09 05:22:37 Upload 1 File to: 
 file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
  File size: 788
 2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
 Execution completed successfully
 Mapred Local Task Succeeded . Convert the Join into MapJoin
 Mapred Local Task Succeeded . Convert the Join into MapJoin
 Launching Job 1 out of 2
 Number of reduce tasks is set to 0 since there's no reduce operator
 13/09/09 17:22:38 WARN conf.Configuration: 
 file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
  attempt to override final parameter: mapred.system.dir;  Ignoring.
 13/09/09 17:22:38 WARN conf.Configuration: 
 file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
  attempt to override final parameter: mapred.local.dir;  Ignoring.
 Execution log at: /tmp/hadoop/.log
 Job running in-process (local Hadoop)
 Hadoop job information for null: number of mappers: 0; number of reducers: 0
 2013-09-09 17:22:41,807 null map = 0%,  reduce = 0%
 2013-09-09 17:22:44,814 null map = 100%,  reduce = 0%
 Ended Job = job_local_0001
 Execution completed successfully
 Mapred Local Task Succeeded . Convert the Join into MapJoin
 Stage-7 is filtered out by condition resolver.
 OK
 Time taken: 13.138 seconds
 Problem:
 I can't get the created table; this CTAS does not work, and the table is not 
 created by this HQL statement at all. Who can explain this for me? Thanks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5245) hive create table as select can not work(not support) with join on operator

2013-09-09 Thread jeff little (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jeff little updated HIVE-5245:
--

Description: 
Hello everyone, recently I came across a Hive problem as below:
hive (test)> create table test_09 as
select a.* from test_01 a
inner join test_02 b
on (a.id=b.id);
Automatically selecting local only mode for query
Total MapReduce jobs = 2
setting HADOOP_USER_NAMEhadoop
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
2013-09-09 05:22:36 Starting to launch local task to process map join;  
maximum memory = 932118528
2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4   
Memory usage:   113068056   rate:   0.121
2013-09-09 05:22:37 Dump the hashtable into file: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
2013-09-09 05:22:37 Upload 1 File to: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 File size: 788
2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-09-09 17:22:41,807 null map = 0%,  reduce = 0%
2013-09-09 17:22:44,814 null map = 100%,  reduce = 0%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Stage-7 is filtered out by condition resolver.
OK
Time taken: 13.138 seconds
Problem:
I can't get the created table; this CTAS does not work, and the table is not 
created by this HQL statement at all. Who can explain this for me? Thanks.

  was:
Hello everyone, recently I came across a Hive problem as below:
hive (test)> create table test_09 as
select a.* from test_01 a
inner join test_02 b
on (a.id=b.id);
Automatically selecting local only mode for query
Total MapReduce jobs = 2
setting HADOOP_USER_NAMEhadoop
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
2013-09-09 05:22:36 Starting to launch local task to process map join;  
maximum memory = 932118528
2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4   
Memory usage:   113068056   rate:   0.121
2013-09-09 05:22:37 Dump the hashtable into file: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
2013-09-09 05:22:37 Upload 1 File to: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 File size: 788
2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.

[jira] [Updated] (HIVE-5245) hive create table as select(CTAS) can not work(not support) with join on operator

2013-09-09 Thread jeff little (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jeff little updated HIVE-5245:
--

Description: 
Hello everyone, recently I came across a Hive problem, as shown below:
hive (test) create table test_09 as
select a.* from test_01 a
join test_02 b
on (a.id=b.id);
Automatically selecting local only mode for query
Total MapReduce jobs = 2
setting HADOOP_USER_NAMEhadoop
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
2013-09-09 05:22:36 Starting to launch local task to process map join;  
maximum memory = 932118528
2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4   
Memory usage:   113068056   rate:   0.121
2013-09-09 05:22:37 Dump the hashtable into file: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
2013-09-09 05:22:37 Upload 1 File to: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 File size: 788
2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-09-09 17:22:41,807 null map = 0%,  reduce = 0%
2013-09-09 17:22:44,814 null map = 100%,  reduce = 0%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Stage-7 is filtered out by condition resolver.
OK
Time taken: 13.138 seconds
Problem:
I cannot find the created table; the CTAS has no effect, and the table 
is never created by this HQL statement at all. Can anyone explain this? Thanks.

  was:
hello everyone, recently i came across one hive problem as below:
hive (test) create table test_09 as
select a.* from test_01 a
inner join test_02 b
on (a.id=b.id);
Automatically selecting local only mode for query
Total MapReduce jobs = 2
setting HADOOP_USER_NAMEhadoop
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
2013-09-09 05:22:36 Starting to launch local task to process map join;  
maximum memory = 932118528
2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4   
Memory usage:   113068056   rate:   0.121
2013-09-09 05:22:37 Dump the hashtable into file: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
2013-09-09 05:22:37 Upload 1 File to: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 File size: 788
2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.

[jira] [Updated] (HIVE-5245) hive create table as select(CTAS) can not work(not support) with join on operator

2013-09-09 Thread jeff little (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jeff little updated HIVE-5245:
--

Description: 
Hello everyone, recently I came across a Hive problem, as shown below:
hive (test) create table test_09 as
select a.* from test_01 a
join test_02 b
on (a.id=b.id);
Automatically selecting local only mode for query
Total MapReduce jobs = 2
setting HADOOP_USER_NAMEhadoop
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
2013-09-09 05:22:36 Starting to launch local task to process map join;  
maximum memory = 932118528
2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4   
Memory usage:   113068056   rate:   0.121
2013-09-09 05:22:37 Dump the hashtable into file: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
2013-09-09 05:22:37 Upload 1 File to: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 File size: 788
2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2013-09-09 17:22:41,807 null map = 0%,  reduce = 0%
2013-09-09 17:22:44,814 null map = 100%,  reduce = 0%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Stage-7 is filtered out by condition resolver.
OK
Time taken: 13.138 seconds
hive (test) select * from test_08;
FAILED: SemanticException [Error 10001]: Line 1:14 Table not found 'test_08'
hive (test)
Problem:
I cannot find the created table; the CTAS has no effect, and the table 
is never created by this HQL statement at all. Can anyone explain this? Thanks.

  was:
hello everyone, recently i came across one hive problem as below:
hive (test) create table test_09 as
select a.* from test_01 a
join test_02 b
on (a.id=b.id);
Automatically selecting local only mode for query
Total MapReduce jobs = 2
setting HADOOP_USER_NAMEhadoop
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:36 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10008/jobconf.xml:a
 attempt to override final parameter: mapred.local.dir;  Ignoring.
Execution log at: /tmp/hadoop/.log
2013-09-09 05:22:36 Starting to launch local task to process map join;  
maximum memory = 932118528
2013-09-09 05:22:37 Processing rows:4   Hashtable size: 4   
Memory usage:   113068056   rate:   0.121
2013-09-09 05:22:37 Dump the hashtable into file: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
2013-09-09 05:22:37 Upload 1 File to: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10005/HashTable-Stage-6/MapJoin-mapfile90--.hashtable
 File size: 788
2013-09-09 05:22:37 End of local task; Time Taken: 0.444 sec.
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
Mapred Local Task Succeeded . Convert the Join into MapJoin
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
13/09/09 17:22:38 WARN conf.Configuration: 
file:/tmp/hadoop/hive_2013-09-09_17-22-34_848_1629553341892012305/-local-10009/jobconf.xml:a
 attempt to override final parameter: mapred.system.dir;  Ignoring.
13/09/09 17:22:38 WARN conf.Configuration: 

[jira] [Commented] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761786#comment-13761786
 ] 

Hive QA commented on HIVE-4331:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12602102/HIVE-4331.patch

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 3098 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.fileformats.TestOrcDynamicPartitioned.testHCatDynamicPartitionedTable
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testSequenceTableWriteRead
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testTextTableWriteRead
org.apache.hive.hcatalog.mapreduce.TestHCatExternalHCatNonPartitioned.testHCatNonPartitionedTable
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testSequenceTableWriteReadMR
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1
org.apache.hive.hcatalog.pig.TestHCatStorerMulti.testStoreBasicTable
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testTextTableWriteReadMR
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_smb_mapjoin_8
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/669/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/669/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will now continue to function but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat. We will use this class in Hive's 
 HBaseStorageHandler instead of the HiveHBaseTableOutputFormat.
 3) Write new unit tests in the HCat's storagehandler so that systems such 
 as Pig and Map Reduce can use the Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without breaking backward 
 compatibility (except for known issues as described in the Design Document).
 5) Replace all instances in the HCat source code that point to 
 HCatStorageHandler so that they use the HiveStorageHandler, including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4642) Implement vectorized RLIKE and REGEXP filter expressions

2013-09-09 Thread Teddy Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-4642:
-

Status: Patch Available  (was: Open)

 Implement vectorized RLIKE and REGEXP filter expressions
 

 Key: HIVE-4642
 URL: https://issues.apache.org/jira/browse/HIVE-4642
 Project: Hive
  Issue Type: Sub-task
Reporter: Eric Hanson
Assignee: Teddy Choi
 Attachments: HIVE-4642-1.patch, HIVE-4642.2.patch, 
 HIVE-4642.3.patch.txt, HIVE-4642.4.patch.txt, HIVE-4642.5.patch.txt, 
 HIVE-4642.6.patch.txt, Hive-Vectorized-Query-Execution-Design-rev10.docx


 See title. I will add more details next week. The goal is (a) make this work 
 correctly and (b) optimize it as well as possible, at least for the common 
 cases.
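
As a rough illustration of what a vectorized filter expression does (this is not Hive's vectorization code; the class and method names below are made up), the sketch evaluates a compiled regular expression over a whole batch of string values at once and returns the indices of the selected rows:
{noformat}
// Illustrative sketch only -- not Hive's VectorExpression classes.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class RegexpFilterSketch {
  // Evaluates the pattern over the whole batch and returns the selected row indices.
  static List<Integer> filterBatch(String[] batch, Pattern pattern) {
    List<Integer> selected = new ArrayList<Integer>();
    for (int i = 0; i < batch.length; i++) {
      // RLIKE-style semantics: match anywhere in the string; nulls never match.
      if (batch[i] != null && pattern.matcher(batch[i]).find()) {
        selected.add(i);
      }
    }
    return selected;
  }

  public static void main(String[] args) {
    Pattern p = Pattern.compile("ab.*c");      // compiled once per batch, not per row
    String[] batch = {"abc", "xyz", null, "zabzc"};
    System.out.println(filterBatch(batch, p)); // prints [0, 3]
  }
}
{noformat}
Compiling the pattern once and looping tightly over a batch is where the common-case optimization mentioned in (b) would come from.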

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Export version of Wiki broken

2013-09-09 Thread Lars Francke
Hi,

I did: https://issues.apache.org/jira/browse/INFRA-6736

As I'm not a committer it'd be great if one of you could comment on
that issue to verify that I'm not making stuff up :)

Thanks,
Lars

On Tue, Sep 3, 2013 at 3:19 AM, Thejas Nair the...@hortonworks.com wrote:
 Lars,
 Thanks for bringing this up!
 Can you please create an INFRA ticket for this ?
 The google search results often leads to the broken page versions of the
 doc.

 Thanks,
 Thejas




 On Mon, Sep 2, 2013 at 12:27 AM, Lars Francke lars.fran...@gmail.com
 wrote:

 Hi,

 does anyone know why the Auto export version[1] of the Confluence
 wiki exists? Most of the links as well as the styles seem broken to
 me. Not a big deal in itself; it's just that Google seems to give
 preference to that version, so it appears in all search results.

 Is there any way for us to modify that page, disable the export or at
 least prevent Google from indexing it?

 I'm happy to take it up with @infra too if those are the guys that can
 help.

 Cheers,
 Lars

 [1] https://cwiki.apache.org/Hive/languagemanual.html





[jira] [Commented] (HIVE-5089) Non query PreparedStatements are always failing on remote HiveServer2

2013-09-09 Thread Julien Letrouit (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761818#comment-13761818
 ] 

Julien Letrouit commented on HIVE-5089:
---

It is actually a side effect of HIVE-5006

 Non query PreparedStatements are always failing on remote HiveServer2
 -

 Key: HIVE-5089
 URL: https://issues.apache.org/jira/browse/HIVE-5089
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.11.0
Reporter: Julien Letrouit
 Fix For: 0.12.0


 This is reproducing the issue systematically:
 {noformat}
 import org.apache.hive.jdbc.HiveDriver;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 public class Main {
   public static void main(String[] args) throws Exception {
 DriverManager.registerDriver(new HiveDriver());
  Connection conn = DriverManager.getConnection("jdbc:hive2://someserver");
  PreparedStatement smt = conn.prepareStatement("SET hivevar:test=1");
 smt.execute(); // Exception here
 conn.close();
   }
 }
 {noformat}
 It is producing the following stacktrace:
 {noformat}
 Exception in thread main java.sql.SQLException: Could not create ResultSet: 
 null
   at 
 org.apache.hive.jdbc.HiveQueryResultSet.retrieveSchema(HiveQueryResultSet.java:183)
   at 
 org.apache.hive.jdbc.HiveQueryResultSet.init(HiveQueryResultSet.java:134)
   at 
 org.apache.hive.jdbc.HiveQueryResultSet$Builder.build(HiveQueryResultSet.java:122)
   at 
 org.apache.hive.jdbc.HivePreparedStatement.executeImmediate(HivePreparedStatement.java:194)
   at 
 org.apache.hive.jdbc.HivePreparedStatement.execute(HivePreparedStatement.java:137)
   at Main.main(Main.java:12)
 Caused by: org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:346)
   at 
 org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:423)
   at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:405)
   at 
 org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Client.recv_GetResultSetMetadata(TCLIService.java:466)
   at 
 org.apache.hive.service.cli.thrift.TCLIService$Client.GetResultSetMetadata(TCLIService.java:453)
   at 
 org.apache.hive.jdbc.HiveQueryResultSet.retrieveSchema(HiveQueryResultSet.java:154)
   ... 5 more
 {noformat}
 I tried to fix it; unfortunately, the standalone server used in unit tests does 
 not reproduce the issue. The following test added to TestJdbcDriver2 is 
 passing:
 {noformat}
   public void testNonQueryPrepareStatement() throws Exception {
 try {
    PreparedStatement ps = con.prepareStatement("SET hivevar:test=1");
   boolean hasResultSet = ps.execute();
   assertTrue(hasResultSet);
   ps.close();
 } catch (Exception e) {
   e.printStackTrace();
   fail(e.toString());
 }
   }
 {noformat}
 Any guidance on how to reproduce it in tests would be appreciated.
 Impact: the data analysis tools we are using issue 
 PreparedStatements. The use of custom UDFs forces us to add 'ADD JAR ...' 
 and 'CREATE TEMPORARY FUNCTION ...' statements to our queries. Those statements 
 fail when executed as PreparedStatements.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4003) NullPointerException in ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java

2013-09-09 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761825#comment-13761825
 ] 

Lars Francke commented on HIVE-4003:


Could we get this in for the 0.12 release?

In addition to the positive result of the Hadoop QA bot, I've been using this in 
production for weeks without problems.

 NullPointerException in 
 ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
 -

 Key: HIVE-4003
 URL: https://issues.apache.org/jira/browse/HIVE-4003
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Thomas Adam
Assignee: Mark Grover
 Attachments: HIVE-4003.patch, HIVE-4003.patch


 Utilities.java seems to be throwing an NPE.
 Change contributed by Thomas Adam.
 Reference: 
 https://github.com/tecbot/hive/commit/1e29d88837e4101a76e870a716aadb729437355b#commitcomment-2588350

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4619) Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23

2013-09-09 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761826#comment-13761826
 ] 

Lars Francke commented on HIVE-4619:


Could we get this in for the 0.12 release?

I've been using this patch in production for weeks and it works at least for me.

 Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23
 --

 Key: HIVE-4619
 URL: https://issues.apache.org/jira/browse/HIVE-4619
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-4619.D10971.1.patch


 Path URIs in the input split are missing the scheme (it is fixed in cdh3u6 and hadoop 
 1.0)
 {noformat}
 2013-05-28 14:34:28,857 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Adding alias data_type to work list for file 
 hdfs://qa14:9000/user/hive/warehouse/data_type
 2013-05-28 14:34:28,858 ERROR org.apache.hadoop.hive.ql.exec.MapOperator: 
 Configuration does not have any alias for path: 
 /user/hive/warehouse/data_type/00_0
 2013-05-28 14:34:28,875 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
 Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
 2013-05-28 14:34:28,877 WARN org.apache.hadoop.mapred.Child: Error running 
 child
 java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
 at org.apache.hadoop.mapred.Child.main(Child.java:260)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 9 more
 Caused by: java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
 ... 14 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 17 more
 Caused by: java.lang.RuntimeException: Map operator initialization failed
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Configuration and input 
 path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:522)
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:90)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Configuration 
 and input path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:516)
 ... 23 more
 2013-05-28 14:34:28,881 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
 for the task
 {noformat}
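
For context, the mismatch visible in the log above is between a scheme-qualified warehouse path (hdfs://qa14:9000/...) and a scheme-less split path (/user/hive/warehouse/...). Purely as an illustration of that difference (this is not the attached patch), a scheme-less path can be qualified against the default file system like this:
{noformat}
// Illustration only -- not the HIVE-4619 patch.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QualifyPathSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // assumes the default FS is hdfs://qa14:9000 as in the log
    FileSystem fs = FileSystem.get(conf);
    Path fromSplit = new Path("/user/hive/warehouse/data_type/00_0"); // no scheme, as in the error
    Path qualified = fs.makeQualified(fromSplit);  // hdfs://qa14:9000/user/hive/warehouse/data_type/00_0
    System.out.println(fromSplit + " -> " + qualified);
  }
}
{noformat}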

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2110) Hive Client is indefenitely waiting for reading from Socket

2013-09-09 Thread Azrael (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761843#comment-13761843
 ] 

Azrael commented on HIVE-2110:
--

[~prasadm], [~cwsteinbach] I think socketTimeout is a basic and necessary function for 
a connection. If you don't mind, I'd like to update the rebased patch 
used for JDBC2. 
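
For readers following along, a minimal sketch of what a client-side read timeout on the Thrift transport could look like (the host, port, and 10-second value are placeholders, and this is not the attached patch):
{noformat}
// Illustration only -- not the HIVE-2110 patch.
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;

public class SocketTimeoutSketch {
  public static void main(String[] args) throws Exception {
    String host = "localhost";                 // placeholder
    int port = 10000;                          // placeholder
    int socketTimeoutMs = 10000;               // a read timeout of 0 blocks forever
    TSocket transport = new TSocket(host, port, socketTimeoutMs);
    // transport.setTimeout(socketTimeoutMs);  // equivalent setter on an existing TSocket
    transport.open();
    TProtocol protocol = new TBinaryProtocol(transport);
    // ... build the Thrift client on "protocol" as in the issue description below ...
    transport.close();
  }
}
{noformat}
With such a timeout, a read that hangs because the server host disappeared would eventually fail with a timeout exception instead of waiting indefinitely.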

 Hive Client is indefenitely waiting for reading from Socket
 ---

 Key: HIVE-2110
 URL: https://issues.apache.org/jira/browse/HIVE-2110
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.5.0, 0.11.0
 Environment: Hadoop 0.20.1, Hive0.5.0 and SUSE Linux Enterprise 
 Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5).
Reporter: Chinna Rao Lalam
Assignee: Prasad Mujumdar
 Attachments: HIVE-2110.1.patch, HIVE-2110.patch


 The Hive Client waits indefinitely when reading from the socket. A thread dump is 
 added below.
 Cause:
  
   In the HiveClient, when the client socket is created, the read timeout is 
 set to 0. So the socket will wait indefinitely when the machine where the 
 Hive Server is running is shut down or the network is unplugged. The same may 
 not happen if the HiveServer alone is killed or gracefully shut down; at that 
 time, the client will get a connection reset exception. 
 Code in HiveConnection
 ---
 {noformat}
 transport = new TSocket(host, port);
 TProtocol protocol = new TBinaryProtocol(transport); 
 client = new HiveClient(protocol);
 {noformat}
 On the client side, the query is sent and the response is awaited: 
 send_execute(query,id); recv_execute(); // place where the client starts 
 waiting
 Thread dump:
 {noformat}
 main prio=10 tid=0x40111000 nid=0x3641 runnable [0x7f0d73f29000]
   java.lang.Thread.State: RUNNABLE
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:129)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:317) 
   locked 0x7f0d5d3f0828 (a java.io.BufferedInputStream)
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:125)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:314)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:262)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:192)
   at 
 org.apache.hadoop.hive.service.ThriftHive$Client.recv_execute(ThriftHive.java:130)
   at 
 org.apache.hadoop.hive.service.ThriftHive$Client.execute(ThriftHive.java:109) 
   locked 0x7f0d5d3f0878 (a org.apache.thrift.transport.TSocket)
   at 
 org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:218)
   at 
 org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:154)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5071) Address thread safety issues with HiveHistoryUtil

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761855#comment-13761855
 ] 

Hudson commented on HIVE-5071:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #88 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/88/])
HIVE-5071 : Address thread safety issues with HiveHistoryUtil (Teddy Choi 
reviewed by Edward Capriolo committed by Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1520979)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/history/HiveHistoryUtil.java


 Address thread safety issues with HiveHistoryUtil
 -

 Key: HIVE-5071
 URL: https://issues.apache.org/jira/browse/HIVE-5071
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Thiruvel Thirumoolan
Assignee: Teddy Choi
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5071.1.patch.txt


 HiveHistoryUtil.parseLine() is not thread safe, yet it can be used concurrently by multiple 
 clients of HWA.
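
As a general illustration of the kind of change such an issue usually calls for (this is not the committed fix; the line format and method below are simplified stand-ins), one common approach is to keep all mutable parsing state local or caller-supplied, so concurrent callers cannot interfere:
{noformat}
// Illustration only -- a simplified stand-in, not HiveHistoryUtil's actual code.
import java.util.HashMap;
import java.util.Map;

public final class ParseLineSketch {
  private ParseLineSketch() {}

  // Parses space-separated key=value tokens into the caller's map.
  // No static or shared buffers, so concurrent callers do not interfere.
  public static void parseLine(String line, Map<String, String> out) {
    for (String token : line.trim().split("\\s+")) {
      int eq = token.indexOf('=');
      if (eq > 0) {
        out.put(token.substring(0, eq), token.substring(eq + 1));
      }
    }
  }

  public static void main(String[] args) {
    Map<String, String> parsed = new HashMap<String, String>();
    parseLine("QueryStart QUERY_ID=q1 TIME=123", parsed);
    System.out.println(parsed); // two entries: QUERY_ID=q1 and TIME=123
  }
}
{noformat}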

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5071) Address thread safety issues with HiveHistoryUtil

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761856#comment-13761856
 ] 

Hudson commented on HIVE-5071:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #156 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/156/])
HIVE-5071 : Address thread safety issues with HiveHistoryUtil (Teddy Choi 
reviewed by Edward Capriolo committed by Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1520979)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/history/HiveHistoryUtil.java


 Address thread safety issues with HiveHistoryUtil
 -

 Key: HIVE-5071
 URL: https://issues.apache.org/jira/browse/HIVE-5071
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Thiruvel Thirumoolan
Assignee: Teddy Choi
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5071.1.patch.txt


 HiveHistoryUtil.parseLine() is not thread safe, yet it can be used concurrently by multiple 
 clients of HWA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4619) Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23

2013-09-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761858#comment-13761858
 ] 

Hive QA commented on HIVE-4619:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12585003/HIVE-4619.D10971.1.patch

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/670/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/670/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-670/source-prep.txt
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf build hcatalog/build hcatalog/core/build 
hcatalog/storage-handlers/hbase/build hcatalog/server-extensions/build 
hcatalog/webhcat/svr/build hcatalog/webhcat/java-client/build 
hcatalog/hcatalog-pig-adapter/build common/src/gen
+ svn update
Uhcatalog/pom.xml
Ucommon/src/java/org/apache/hadoop/hive/conf/HiveConf.java
Uivy/ivysettings.xml
Uivy/libraries.properties
Uql/ivy.xml
Uql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
Uql/src/test/org/apache/hadoop/hive/ql/exec/TestPlan.java
Uql/src/test/results/compiler/plan/input1.q.xml
Uql/src/test/results/compiler/plan/input2.q.xml
Uql/src/test/results/compiler/plan/input3.q.xml
Uql/src/test/results/compiler/plan/input4.q.xml
Uql/src/test/results/compiler/plan/input5.q.xml
Uql/src/test/results/compiler/plan/input_testxpath2.q.xml
Uql/src/test/results/compiler/plan/input6.q.xml
Uql/src/test/results/compiler/plan/input7.q.xml
Uql/src/test/results/compiler/plan/input_testsequencefile.q.xml
Uql/src/test/results/compiler/plan/input8.q.xml
Uql/src/test/results/compiler/plan/input9.q.xml
Uql/src/test/results/compiler/plan/udf1.q.xml
Uql/src/test/results/compiler/plan/input20.q.xml
Uql/src/test/results/compiler/plan/udf4.q.xml
Uql/src/test/results/compiler/plan/sample1.q.xml
Uql/src/test/results/compiler/plan/sample2.q.xml
Uql/src/test/results/compiler/plan/udf6.q.xml
Uql/src/test/results/compiler/plan/sample3.q.xml
Uql/src/test/results/compiler/plan/sample4.q.xml
Uql/src/test/results/compiler/plan/sample5.q.xml
Uql/src/test/results/compiler/plan/sample6.q.xml
Uql/src/test/results/compiler/plan/sample7.q.xml
Uql/src/test/results/compiler/plan/groupby1.q.xml
Uql/src/test/results/compiler/plan/udf_case.q.xml
Uql/src/test/results/compiler/plan/groupby2.q.xml
Uql/src/test/results/compiler/plan/subq.q.xml
Uql/src/test/results/compiler/plan/groupby3.q.xml
Uql/src/test/results/compiler/plan/groupby4.q.xml
Uql/src/test/results/compiler/plan/cast1.q.xml
Uql/src/test/results/compiler/plan/groupby5.q.xml
Uql/src/test/results/compiler/plan/groupby6.q.xml
Uql/src/test/results/compiler/plan/join1.q.xml
Uql/src/test/results/compiler/plan/join2.q.xml
Uql/src/test/results/compiler/plan/join3.q.xml
Uql/src/test/results/compiler/plan/join4.q.xml
Uql/src/test/results/compiler/plan/join5.q.xml
Uql/src/test/results/compiler/plan/join6.q.xml
Uql/src/test/results/compiler/plan/case_sensitivity.q.xml
Uql/src/test/results/compiler/plan/join7.q.xml
Uql/src/test/results/compiler/plan/join8.q.xml
Uql/src/test/results/compiler/plan/union.q.xml
Uql/src/test/results/compiler/plan/udf_when.q.xml
Uql/src/test/results/compiler/plan/input_testxpath.q.xml
Uql/src/test/results/compiler/plan/input_part1.q.xml
Uql/src/java/org/apache/hadoop/hive/ql/exec/ColumnInfo.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapRedTask.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeColumnEvaluator.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
Uql/src/java/org/apache/hadoop/hive/ql/exec/HashTableDummyOperator.java
U

[jira] [Commented] (HIVE-4642) Implement vectorized RLIKE and REGEXP filter expressions

2013-09-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761859#comment-13761859
 ] 

Hive QA commented on HIVE-4642:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12602047/HIVE-4642.6.patch.txt

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/671/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/671/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests failed with: NonZeroExitCodeException: Command 'bash 
/data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and 
output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-671/source-prep.txt
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 152.

At revision 152.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0 to p2
+ exit 1
'
{noformat}

This message is automatically generated.

 Implement vectorized RLIKE and REGEXP filter expressions
 

 Key: HIVE-4642
 URL: https://issues.apache.org/jira/browse/HIVE-4642
 Project: Hive
  Issue Type: Sub-task
Reporter: Eric Hanson
Assignee: Teddy Choi
 Attachments: HIVE-4642-1.patch, HIVE-4642.2.patch, 
 HIVE-4642.3.patch.txt, HIVE-4642.4.patch.txt, HIVE-4642.5.patch.txt, 
 HIVE-4642.6.patch.txt, Hive-Vectorized-Query-Execution-Design-rev10.docx


 See title. I will add more details next week. The goal is (a) make this work 
 correctly and (b) optimize it as well as possible, at least for the common 
 cases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-1511) Hive plan serialization is slow

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-1511:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Brock for review and helping fix test cases. 
Thanks, Mohammad for tracking down some nasty issues!

 Hive plan serialization is slow
 ---

 Key: HIVE-1511
 URL: https://issues.apache.org/jira/browse/HIVE-1511
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.7.0, 0.11.0
Reporter: Ning Zhang
Assignee: Mohammad Kamrul Islam
 Fix For: 0.13.0

 Attachments: failedPlan.xml, generated_plan.xml, HIVE-1511.10.patch, 
 HIVE-1511.11.patch, HIVE-1511.12.patch, HIVE-1511.13.patch, 
 HIVE-1511.14.patch, HIVE-1511.16.patch, HIVE-1511.17.patch, 
 HIVE-1511.4.patch, HIVE-1511.5.patch, HIVE-1511.6.patch, HIVE-1511.7.patch, 
 HIVE-1511.8.patch, HIVE-1511.9.patch, HIVE-1511.patch, HIVE-1511-wip2.patch, 
 HIVE-1511-wip3.patch, HIVE-1511-wip4.patch, HIVE-1511.wip.9.patch, 
 HIVE-1511-wip.patch, KryoHiveTest.java, run.sh


 As reported by Edward Capriolo:
 For reference I did this as a test case
 SELECT * FROM src where
 key=0 OR key=0 OR key=0 OR  key=0 OR key=0 OR key=0 OR key=0 OR key=0
 OR key=0 OR key=0 OR key=0 OR
 key=0 OR key=0 OR key=0 OR  key=0 OR key=0 OR key=0 OR key=0 OR key=0
 OR key=0 OR key=0 OR key=0 OR
 ...(100 more of these)
 No OOM but I gave up after the test case did not go anywhere for about
 2 minutes.
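
Purely as a convenience for reproducing the report (a hypothetical helper, not part of the issue), the repeated-OR test query can be generated like this:
{noformat}
// Hypothetical helper for regenerating the test case above.
public class RepeatedOrQuerySketch {
  public static void main(String[] args) {
    int repetitions = 120;                     // roughly "100 more of these"
    StringBuilder sql = new StringBuilder("SELECT * FROM src WHERE key=0");
    for (int i = 1; i < repetitions; i++) {
      sql.append(" OR key=0");
    }
    System.out.println(sql);
  }
}
{noformat}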

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-2110) Hive Client is indefenitely waiting for reading from Socket

2013-09-09 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2110:
--

Attachment: HIVE-2110.D12813.1.patch

azrael requested code review of HIVE-2110 [jira] Hive Client is indefenitely 
waiting for reading from Socket.

Reviewers: JIRA

HIVE-2110 : Hive Client is indefenitely waiting for reading from Socket

TEST PLAN
  Unit test

REVISION DETAIL
  https://reviews.facebook.net/D12813

AFFECTED FILES
  jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java
  jdbc/src/java/org/apache/hive/jdbc/Utils.java
  jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/30735/

To: JIRA, azrael


 Hive Client is indefenitely waiting for reading from Socket
 ---

 Key: HIVE-2110
 URL: https://issues.apache.org/jira/browse/HIVE-2110
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.5.0, 0.11.0
 Environment: Hadoop 0.20.1, Hive0.5.0 and SUSE Linux Enterprise 
 Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5).
Reporter: Chinna Rao Lalam
Assignee: Prasad Mujumdar
 Attachments: HIVE-2110.1.patch, HIVE-2110.D12813.1.patch, 
 HIVE-2110.patch


 The Hive Client waits indefinitely when reading from the socket. A thread dump is 
 added below.
 Cause:
  
   In the HiveClient, when the client socket is created, the read timeout is 
 set to 0. So the socket will wait indefinitely when the machine where the 
 Hive Server is running is shut down or the network is unplugged. The same may 
 not happen if the HiveServer alone is killed or gracefully shut down; at that 
 time, the client will get a connection reset exception. 
 Code in HiveConnection
 ---
 {noformat}
 transport = new TSocket(host, port);
 TProtocol protocol = new TBinaryProtocol(transport); 
 client = new HiveClient(protocol);
 {noformat}
 On the client side, the query is sent and the response is awaited: 
 send_execute(query,id); recv_execute(); // place where the client starts 
 waiting
 Thread dump:
 {noformat}
 main prio=10 tid=0x40111000 nid=0x3641 runnable [0x7f0d73f29000]
   java.lang.Thread.State: RUNNABLE
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:129)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:317) 
   locked 0x7f0d5d3f0828 (a java.io.BufferedInputStream)
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:125)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:314)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:262)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:192)
   at 
 org.apache.hadoop.hive.service.ThriftHive$Client.recv_execute(ThriftHive.java:130)
   at 
 org.apache.hadoop.hive.service.ThriftHive$Client.execute(ThriftHive.java:109) 
   locked 0x7f0d5d3f0878 (a org.apache.thrift.transport.TSocket)
   at 
 org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:218)
   at 
 org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:154)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4492) Revert HIVE-4322

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4492:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Samuel for initial patch. Thanks, Carl for review!

 Revert HIVE-4322
 

 Key: HIVE-4492
 URL: https://issues.apache.org/jira/browse/HIVE-4492
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Reporter: Samuel Yuan
Assignee: Samuel Yuan
 Fix For: 0.13.0

 Attachments: HIVE-4492.1.patch.txt, HIVE-4492.patch


 See HIVE-4432 and HIVE-4433. It's possible to work around these issues but a 
 better solution is probably to roll back the fix and change the API to use 
 a primitive type as the map key (in a backwards-compatible manner).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-4432) Follow-up to HIVE-4322 - make metastore API changes backwards compatible

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4432.


Resolution: Not A Problem

After HIVE-4492 rollback, this is no longer an issue.

 Follow-up to HIVE-4322 - make metastore API changes backwards compatible
 

 Key: HIVE-4432
 URL: https://issues.apache.org/jira/browse/HIVE-4432
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Reporter: Samuel Yuan
Assignee: Samuel Yuan

 Right now the fix for HIVE-4322 makes different versions of the metastore 
 server and client incompatible with each other. This can make deployment very 
 painful.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HIVE-4433) Fix C++ Thrift bindings broken in HIVE-4322

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4433.


Resolution: Not A Problem

After HIVE-4492 rollback, this is no longer an issue.

 Fix C++ Thrift bindings broken in HIVE-4322
 ---

 Key: HIVE-4433
 URL: https://issues.apache.org/jira/browse/HIVE-4433
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.12.0
Reporter: Carl Steinbach
Assignee: Samuel Yuan
Priority: Blocker
 Fix For: 0.12.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5208) Provide an easier way to capture DEBUG logging

2013-09-09 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761881#comment-13761881
 ] 

Brock Noland commented on HIVE-5208:


Yeah I see this as a usability/supportability issue.

 Provide an easier way to capture DEBUG logging
 --

 Key: HIVE-5208
 URL: https://issues.apache.org/jira/browse/HIVE-5208
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 0.11.0
Reporter: Harsh J
Priority: Minor

 Capturing debug logging for troubleshooting is painful in Hive today:
 1. It doesn't log anywhere by default.
 2. We need to add a long -hiveconf hive.root.logger=DEBUG,console to the 
 Hive CLI just to enable the debug flag, or set an equivalent env-var 
 appropriately.
 I suggest we make this simpler via either one of the below:
 1. Provide a wrapped binary, hive-debug, so folks can simply run
 the hive-debug command, re-run their query, and capture the output. This 
 could also write to a pre-designated file under $PWD.
 2. Provide a simpler switch, such as -verbose, that automatically
 toggles the flag instead, much like what Beeline already does today.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4619) Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23

2013-09-09 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761883#comment-13761883
 ] 

Brock Noland commented on HIVE-4619:


I'd be fine committing this if you submit a patch that applies.

 Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23
 --

 Key: HIVE-4619
 URL: https://issues.apache.org/jira/browse/HIVE-4619
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-4619.D10971.1.patch


 Path URIs in the input split are missing the scheme (it is fixed in cdh3u6 and hadoop 
 1.0)
 {noformat}
 2013-05-28 14:34:28,857 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Adding alias data_type to work list for file 
 hdfs://qa14:9000/user/hive/warehouse/data_type
 2013-05-28 14:34:28,858 ERROR org.apache.hadoop.hive.ql.exec.MapOperator: 
 Configuration does not have any alias for path: 
 /user/hive/warehouse/data_type/00_0
 2013-05-28 14:34:28,875 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
 Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
 2013-05-28 14:34:28,877 WARN org.apache.hadoop.mapred.Child: Error running 
 child
 java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
 at org.apache.hadoop.mapred.Child.main(Child.java:260)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 9 more
 Caused by: java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
 ... 14 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 17 more
 Caused by: java.lang.RuntimeException: Map operator initialization failed
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Configuration and input 
 path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:522)
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:90)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Configuration 
 and input path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:516)
 ... 23 more
 2013-05-28 14:34:28,881 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
 for the task
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5239) LazyDate goes into irretrievable NULL mode once inited with NULL once

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5239:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Jason!

 LazyDate goes into irretrievable NULL mode once inited with NULL once
 -

 Key: HIVE-5239
 URL: https://issues.apache.org/jira/browse/HIVE-5239
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5239.1.patch


 Stumbled across HIVE-4757 with Timestamp.  It looks like Date has the same 
 issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4710) ant maven-build -Dmvn.publish.repo=local fails

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4710:
---

   Resolution: Fixed
Fix Version/s: 0.12.0
   Status: Resolved  (was: Patch Available)

Fixed via HIVE-4871

 ant maven-build -Dmvn.publish.repo=local fails
 --

 Key: HIVE-4710
 URL: https://issues.apache.org/jira/browse/HIVE-4710
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Amareshwari Sriramadasu
 Fix For: 0.12.0

 Attachments: hive-4710.patch


 ant maven-build fails with the following error:
 /home/amareshwaris/hive/build.xml:121: The following error occurred while 
 executing this line:
 /home/amareshwaris/hive/build.xml:123: The following error occurred while 
 executing this line:
 Target make-pom does not exist in the project hcatalog. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4871) Apache builds fail with Target make-pom does not exist in the project hcatalog.

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4871:
---

   Resolution: Fixed
Fix Version/s: (was: 0.12.0)
   0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Eugene!

 Apache builds fail with Target make-pom does not exist in the project 
 hcatalog.
 ---

 Key: HIVE-4871
 URL: https://issues.apache.org/jira/browse/HIVE-4871
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-4871.patch

   Original Estimate: 24h
  Time Spent: 24.4h
  Remaining Estimate: 0h

 For example,
 https://builds.apache.org/job/Hive-trunk-h0.21/2192/console.
 All unit tests pass, but deployment of build artifacts fails.
 HIVE-4387 provided a bandaid for 0.11.  Need to figure out long term fix for 
 this for 0.12.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4925) Modify Hive build to enable compiling and running Hive with JDK7

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4925:
---

   Resolution: Fixed
Fix Version/s: (was: 0.12.0)
   0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Xuefu!

 Modify Hive build to enable compiling and running Hive with JDK7
 

 Key: HIVE-4925
 URL: https://issues.apache.org/jira/browse/HIVE-4925
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure
Affects Versions: 0.11.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-4925.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5199) Custom SerDe containing a nonSettable complex data type row object inspector throws cast exception with HIVE 0.11

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5199:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Hari!

 Custom SerDe containing a nonSettable complex data type row object inspector 
 throws cast exception with HIVE 0.11
 -

 Key: HIVE-5199
 URL: https://issues.apache.org/jira/browse/HIVE-5199
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-5199.2.patch.txt, HIVE-5199.3.patch.txt, 
 HIVE-5199.patch.4.txt, HIVE-5199.patch.txt


 The issue happens because of the changes in HIVE-3833.
 Consider a partitioned table with different custom serdes for the partition 
 and the table. The serde at table level, say customSerDe1, has an object 
 inspector of a settable data type, whereas the serde at partition level, say 
 customSerDe2, has an object inspector of a non-settable data type. The current 
 implementation introduced by HIVE-3833 does not convert nested complex data 
 types which extend nonSettableObjectInspector to a settableObjectInspector 
 type inside ObjectInspectorConverters.getConvertedOI(). However, it tries to 
 typecast the nonSettableObjectInspector to a settableObjectInspector inside 
 ObjectInspectorConverters.getConverter(ObjectInspector inputOI, 
 ObjectInspector outputOI).
 The attached patch HIVE-5199.2.patch.txt contains a stand-alone test case.
 The exception below can happen via FetchOperator as well as MapOperator. 
 For example, consider the FetchOperator and the following call inside it:
 getRecordReader() -> ObjectInspectorConverters.getConverter()
 The stack trace is as follows:
 2013-08-28 17:57:25,307 ERROR CliDriver (SessionState.java:printError(432)) - 
 Failed with exception java.io.IOException:java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 java.io.IOException: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:544)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:488)
 at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
 at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1412)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters.getConverter(ObjectInspectorConverters.java:144)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters$StructConverter.init(ObjectInspectorConverters.java:307)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters.getConverter(ObjectInspectorConverters.java:138)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:406)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
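 An illustration of the mismatch using hypothetical stand-in types (not the 
 real Hive inspectors): the cast in getConverter() is only safe when the output 
 inspector is actually settable; a non-settable nested inspector needs a 
 settable replacement derived first.
 {code:title=Sketch of the settable-check mismatch (hypothetical types, not Hive code)}
 interface ObjectInspector {}
 interface SettableMapObjectInspector extends ObjectInspector {}

 public class ConverterSketch {
   static class NonSettableMapOI implements ObjectInspector {}
   static class StandardSettableMapOI implements SettableMapObjectInspector {}

   // Only cast when the output inspector really is settable; a nested
   // non-settable inspector needs a settable replacement derived first,
   // which is what the unconverted types from getConvertedOI() lack.
   static SettableMapObjectInspector toSettable(ObjectInspector outputOI) {
     if (outputOI instanceof SettableMapObjectInspector) {
       return (SettableMapObjectInspector) outputOI;
     }
     return new StandardSettableMapOI();
   }

   public static void main(String[] args) {
     System.out.println(toSettable(new NonSettableMapOI()).getClass().getSimpleName());
   }
 }
 {code}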

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2110) Hive Client is indefenitely waiting for reading from Socket

2013-09-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761974#comment-13761974
 ] 

Hive QA commented on HIVE-2110:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12602138/HIVE-2110.D12813.1.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 3087 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testSequenceTableWriteRead
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testTextTableWriteRead
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testSequenceTableWriteReadMR
org.apache.hive.hcatalog.mapreduce.TestSequenceFileReadWrite.testTextTableWriteReadMR
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/672/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/672/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

 Hive Client is indefenitely waiting for reading from Socket
 ---

 Key: HIVE-2110
 URL: https://issues.apache.org/jira/browse/HIVE-2110
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.5.0, 0.11.0
 Environment: Hadoop 0.20.1, Hive0.5.0 and SUSE Linux Enterprise 
 Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5).
Reporter: Chinna Rao Lalam
Assignee: Prasad Mujumdar
 Attachments: HIVE-2110.1.patch, HIVE-2110.D12813.1.patch, 
 HIVE-2110.patch


 The Hive Client waits indefinitely when reading from the socket. A thread dump 
 is added below.
 Cause:
  
   In the HiveClient, when the client socket is created, the read timeout is 
 set to 0, so the socket will wait indefinitely when the machine where the 
 Hive Server is running is shut down or the network is unplugged. The same may 
 not happen if the HiveServer alone is killed or gracefully shut down; at that 
 point the client will get a connection reset exception. 
 Code in HiveConnection
 ---
 {noformat}
 transport = new TSocket(host, port);
 TProtocol protocol = new TBinaryProtocol(transport); 
 client = new HiveClient(protocol);
 {noformat}
 On the client side, the query is sent and the response is awaited: 
 send_execute(query,id); recv_execute(); // place where the client starts 
 waiting
 Thread dump:
 {noformat}
 main prio=10 tid=0x40111000 nid=0x3641 runnable [0x7f0d73f29000]
   java.lang.Thread.State: RUNNABLE
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:129)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:317) 
   locked 0x7f0d5d3f0828 (a java.io.BufferedInputStream)
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:125)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:314)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:262)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:192)
   at 
 org.apache.hadoop.hive.service.ThriftHive$Client.recv_execute(ThriftHive.java:130)
   at 
 org.apache.hadoop.hive.service.ThriftHive$Client.execute(ThriftHive.java:109) 
   locked 0x7f0d5d3f0878 (a org.apache.thrift.transport.TSocket)
   at 
 org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:218)
   at 
 org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:154)
 {noformat}
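 A minimal sketch of the fix direction, assuming the client socket gets a 
 finite, configurable read timeout so blocked reads eventually fail instead of 
 hanging forever (readTimeoutMs below is a hypothetical caller-supplied value; 
 TSocket and TBinaryProtocol are real Thrift API):
 {code:title=Sketch: finite read timeout on the client socket (illustrative)}
 import org.apache.thrift.protocol.TBinaryProtocol;
 import org.apache.thrift.protocol.TProtocol;
 import org.apache.thrift.transport.TSocket;
 import org.apache.thrift.transport.TTransportException;

 public class TimeoutSocketSketch {
   // readTimeoutMs is a hypothetical caller-supplied value; 0 keeps today's
   // behaviour of waiting forever on a dead peer.
   public static TProtocol open(String host, int port, int readTimeoutMs)
       throws TTransportException {
     TSocket transport = new TSocket(host, port, readTimeoutMs);
     transport.open();
     return new TBinaryProtocol(transport);
   }
 }
 {code}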

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4914) filtering via partition name should be done inside metastore server (implementation)

2013-09-09 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761942#comment-13761942
 ] 

Phabricator commented on HIVE-4914:
---

ashutoshc has requested changes to the revision HIVE-4914 [jira] filtering via 
partition name should be done inside metastore server (implementation).

  Also, since we now have client-side code for filter expression evaluation 
anyway, I think it's easier to support backward compatibility. The client can 
catch the thrift equivalent of a method-not-found exception while trying this 
function and then degrade itself to old-style client-side evaluation.
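  A sketch of that fallback, with a hypothetical client facade; only 
TApplicationException and its UNKNOWN_METHOD constant are real Thrift API:

{code:title=Sketch: degrade to client-side filtering on old servers (illustrative)}
import java.util.ArrayList;
import java.util.List;
import org.apache.thrift.TApplicationException;
import org.apache.thrift.TException;

public class PartitionFilterFallback {

  /** Hypothetical facade over the metastore thrift client. */
  interface MetastoreClient {
    List<String> getPartitionNamesByExpr(byte[] expr) throws TException;
    List<String> getAllPartitionNames() throws TException;
  }

  /** Hypothetical client-side filter over partition names. */
  interface NameFilter {
    boolean accept(String partitionName);
  }

  // Try the new server-side call first; an old server that does not implement
  // it answers with UNKNOWN_METHOD, and the client degrades to the old
  // client-side evaluation over all partition names.
  static List<String> listPartitions(MetastoreClient client, byte[] expr,
      NameFilter filter) throws TException {
    try {
      return client.getPartitionNamesByExpr(expr);
    } catch (TApplicationException e) {
      if (e.getType() != TApplicationException.UNKNOWN_METHOD) {
        throw e;
      }
      List<String> result = new ArrayList<String>();
      for (String name : client.getAllPartitionNames()) {
        if (filter.accept(name)) {
          result.add(name);
        }
      }
      return result;
    }
  }
}
{code}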

INLINE COMMENTS
  metastore/if/hive_metastore.thrift:289 Now that kryo support is checked into 
trunk, I think it's better to send a kryo-serialized expression in binary 
format instead of an XML-serialized string.

REVISION DETAIL
  https://reviews.facebook.net/D12561

To: JIRA, ashutoshc, sershe


 filtering via partition name should be done inside metastore server 
 (implementation)
 

 Key: HIVE-4914
 URL: https://issues.apache.org/jira/browse/HIVE-4914
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-4914.01.patch, HIVE-4914.02.patch, 
 HIVE-4914.03.patch, HIVE-4914.D12561.1.patch, HIVE-4914.D12561.2.patch, 
 HIVE-4914.D12561.3.patch, HIVE-4914.D12645.1.patch, 
 HIVE-4914-only-no-gen.patch, HIVE-4914-only.patch, HIVE-4914.patch, 
 HIVE-4914.patch, HIVE-4914.patch


 Currently, if the filter pushdown is impossible (which it is in most cases), 
 the client gets all partition names from the metastore, filters them, and asks 
 for partitions by name for the filtered set.
 The metastore server code should do that instead; it should check whether 
 pushdown is possible and do it if so; otherwise it should do name-based 
 filtering.
 This saves the round trip that ships all partition names from the server to 
 the client, and also removes the need to have pushdown viability checking on 
 both sides.
 NO PRECOMMIT TESTS

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4871) Apache builds fail with Target make-pom does not exist in the project hcatalog.

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761980#comment-13761980
 ] 

Hudson commented on HIVE-4871:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #89 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/89/])
HIVE-4871 : Apache builds fail with Target make-pom does not exist in the 
project hcatalog (Eugene Koifman via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1521143)
* /hive/trunk/build.properties
* /hive/trunk/build.xml
* /hive/trunk/hcatalog/build.xml


 Apache builds fail with Target make-pom does not exist in the project 
 hcatalog.
 ---

 Key: HIVE-4871
 URL: https://issues.apache.org/jira/browse/HIVE-4871
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-4871.patch

   Original Estimate: 24h
  Time Spent: 24.4h
  Remaining Estimate: 0h

 For example,
 https://builds.apache.org/job/Hive-trunk-h0.21/2192/console.
 All unit tests pass, but deployment of build artifacts fails.
 HIVE-4387 provided a bandaid for 0.11.  Need to figure out long term fix for 
 this for 0.12.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5199) Custom SerDe containing a nonSettable complex data type row object inspector throws cast exception with HIVE 0.11

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761978#comment-13761978
 ] 

Hudson commented on HIVE-5199:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #89 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/89/])
HIVE-5199 : Custom SerDe containing a nonSettable complex data type row object 
inspector throws cast exception with HIVE 0.11 (Hari Sankar via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1521155)
* /hive/trunk/build-common.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomNonSettableListObjectInspector1.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomNonSettableStructObjectInspector1.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe1.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe2.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_wise_fileformat17.q
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat17.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorConverters.java


 Custom SerDe containing a nonSettable complex data type row object inspector 
 throws cast exception with HIVE 0.11
 -

 Key: HIVE-5199
 URL: https://issues.apache.org/jira/browse/HIVE-5199
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-5199.2.patch.txt, HIVE-5199.3.patch.txt, 
 HIVE-5199.patch.4.txt, HIVE-5199.patch.txt


 The issue happens because of the changes in HIVE-3833.
 Consider a partitioned table with different custom serdes for the partition 
 and the table. The serde at table level, say customSerDe1, has an object 
 inspector of a settable data type, whereas the serde at partition level, say 
 customSerDe2, has an object inspector of a non-settable data type. The current 
 implementation introduced by HIVE-3833 does not convert nested complex data 
 types which extend nonSettableObjectInspector to a settableObjectInspector 
 type inside ObjectInspectorConverters.getConvertedOI(). However, it tries to 
 typecast the nonSettableObjectInspector to a settableObjectInspector inside 
 ObjectInspectorConverters.getConverter(ObjectInspector inputOI, 
 ObjectInspector outputOI).
 The attached patch HIVE-5199.2.patch.txt contains a stand-alone test case.
 The exception below can happen via FetchOperator as well as MapOperator. 
 For example, consider the FetchOperator and the following call inside it:
 getRecordReader() -> ObjectInspectorConverters.getConverter()
 The stack trace is as follows:
 2013-08-28 17:57:25,307 ERROR CliDriver (SessionState.java:printError(432)) - 
 Failed with exception java.io.IOException:java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 java.io.IOException: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:544)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:488)
 at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
 at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1412)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 

[jira] [Commented] (HIVE-4492) Revert HIVE-4322

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761977#comment-13761977
 ] 

Hudson commented on HIVE-4492:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #89 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/89/])
HIVE-4492 : Revert HIVE-4322 (Samuel Yuan and Ashutosh Chauhan via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1521120)
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedValueList.java
* /hive/trunk/metastore/src/gen/thrift/gen-php/metastore/Types.php
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/MetaDataFormatUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPruner.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java


 Revert HIVE-4322
 

 Key: HIVE-4492
 URL: https://issues.apache.org/jira/browse/HIVE-4492
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Reporter: Samuel Yuan
Assignee: Samuel Yuan
 Fix For: 0.13.0

 Attachments: HIVE-4492.1.patch.txt, HIVE-4492.patch


 See HIVE-4432 and HIVE-4433. It's possible to work around these issues but a 
 better solution is probably to roll back the fix and change the API to use 
 a primitive type as the map key (in a backwards-compatible manner).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4925) Modify Hive build to enable compiling and running Hive with JDK7

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761979#comment-13761979
 ] 

Hudson commented on HIVE-4925:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #89 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/89/])
HIVE-4925 : Modify Hive build to enable compiling and running Hive with JDK7 
(Xuefu Zhang via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1521157)
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/hcatalog/build.properties
* /hive/trunk/hcatalog/storage-handlers/hbase/build.xml
* /hive/trunk/shims/build.xml


 Modify Hive build to enable compiling and running Hive with JDK7
 

 Key: HIVE-4925
 URL: https://issues.apache.org/jira/browse/HIVE-4925
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure
Affects Versions: 0.11.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-4925.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-1511) Hive plan serialization is slow

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761981#comment-13761981
 ] 

Hudson commented on HIVE-1511:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #89 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/89/])
HIVE-1511

Summary: Hive Plan Serialization

Test Plan: Regression test suite

Reviewers: brock

Reviewed By: brock

Differential Revision: https://reviews.facebook.net/D12789 (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1521110)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/udtf/example/GenericUDTFCount2.java
* 
/hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/udtf/example/GenericUDTFExplode2.java
* /hive/trunk/hcatalog/pom.xml
* /hive/trunk/ivy/ivysettings.xml
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/ivy.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnInfo.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeColumnEvaluator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableDummyOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/RowSchema.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapRedTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/BucketingSortingCtx.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/LocalMapJoinProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SortMergeJoinTaskDispatcher.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PTFDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFnGrams.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFArray.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFormatNumber.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIndex.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFNamedStruct.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFStruct.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToUnixTimeStamp.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFExplode.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFJSONTuple.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFParseUrlTuple.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFStack.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestPlan.java
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/cast1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input20.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml
* 

[jira] [Commented] (HIVE-5234) partition name filtering uses suboptimal datastructures

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761983#comment-13761983
 ] 

Hudson commented on HIVE-5234:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #89 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/89/])
HIVE-5234 : partition name filtering uses suboptimal datastructures (Sergey 
Shelukhin via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1521158)
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java


 partition name filtering uses suboptimal datastructures
 ---

 Key: HIVE-5234
 URL: https://issues.apache.org/jira/browse/HIVE-5234
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-5234.D12777.1.patch


 Some data structures used in name-based partition filtering, as well as related 
 methods, are suboptimal, which can cost hundreds of milliseconds on a large 
 number of partitions. I noticed this while perf testing HIVE-4914, but this 
 change can also be applied separately, given that the patch over there will 
 take some time to get in.
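 An illustration only (not the attached patch) of the kind of cost implied: 
 repeated linear membership checks over a large list of partition names versus 
 a single hashed set.
 {code:title=Illustration: list vs. set lookups over partition names (hypothetical)}
 import java.util.ArrayList;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;

 public class NameLookupSketch {
   public static void main(String[] args) {
     List<String> names = new ArrayList<String>();
     for (int i = 0; i < 100000; i++) {
       names.add("ds=2013-09-09/hr=" + i);
     }
     // Repeated List.contains() is O(n) per lookup and adds up on large tables.
     long t0 = System.nanoTime();
     int hits = 0;
     for (int i = 0; i < 1000; i++) {
       if (names.contains("ds=2013-09-09/hr=" + (i * 97))) hits++;
     }
     long listNs = System.nanoTime() - t0;
     // A HashSet built once makes each lookup O(1).
     Set<String> nameSet = new HashSet<String>(names);
     long t1 = System.nanoTime();
     for (int i = 0; i < 1000; i++) {
       if (nameSet.contains("ds=2013-09-09/hr=" + (i * 97))) hits++;
     }
     long setNs = System.nanoTime() - t1;
     System.out.println("list ns: " + listNs + ", set ns: " + setNs + ", hits: " + hits);
   }
 }
 {code}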

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5239) LazyDate goes into irretrievable NULL mode once inited with NULL once

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761984#comment-13761984
 ] 

Hudson commented on HIVE-5239:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #89 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/89/])
HIVE-5239 : LazyDate goes into irretrievable NULL mode once inited with NULL 
once (Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1521136)
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDate.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/lazy/TestLazyPrimitive.java


 LazyDate goes into irretrievable NULL mode once inited with NULL once
 -

 Key: HIVE-5239
 URL: https://issues.apache.org/jira/browse/HIVE-5239
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5239.1.patch


 Stumbled across HIVE-4757 with Timestamp.  It looks like Date has the same 
 issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-5246) Local task for map join submitted via oozie job fails on a secure HDFS

2013-09-09 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-5246:
-

 Summary:  Local task for map join submitted via oozie job fails on 
a secure HDFS
 Key: HIVE-5246
 URL: https://issues.apache.org/jira/browse/HIVE-5246
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0, 0.10.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar


For a Hive query started by an Oozie Hive action, the local task submitted for 
a map join fails. The HDFS delegation token is not shared properly with the 
child JVM created for the local task.

Oozie creates a delegation token for the Hive action and sets the env variable 
HADOOP_TOKEN_FILE_LOCATION as well as the mapreduce.job.credentials.binary 
config property. However, this doesn't get passed down to the child JVM, which 
causes the problem.
This is similar to the issue addressed by HIVE-4343, which addresses the same 
problem for HiveServer2.
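A minimal sketch of the propagation that seems to be missing, assuming the 
child JVM is spawned through a ProcessBuilder-like path; the wiring into 
MapredLocalTask is hypothetical, only the HADOOP_TOKEN_FILE_LOCATION name is 
the real Hadoop convention:

{code:title=Sketch: passing the delegation token location to the child JVM (illustrative)}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class TokenPropagationSketch {

  // tokenFile is the credentials file the parent received from Oozie via the
  // HADOOP_TOKEN_FILE_LOCATION env variable; the child JVM needs to see the
  // same value (and, analogously, the mapreduce.job.credentials.binary config).
  static Process launchChild(List<String> command, String tokenFile) throws IOException {
    ProcessBuilder pb = new ProcessBuilder(command);
    if (tokenFile != null) {
      pb.environment().put("HADOOP_TOKEN_FILE_LOCATION", tokenFile);
    }
    return pb.start();
  }

  public static void main(String[] args) throws IOException {
    launchChild(Arrays.asList("java", "-version"),
        System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
  }
}
{code}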

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HIVE-4881) hive local mode: java.io.FileNotFoundException: emptyFile

2013-09-09 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar reassigned HIVE-4881:
-

Assignee: Prasad Mujumdar

 hive local mode: java.io.FileNotFoundException: emptyFile
 -

 Key: HIVE-4881
 URL: https://issues.apache.org/jira/browse/HIVE-4881
 Project: Hive
  Issue Type: Bug
 Environment: hive 0.9.0+158-1.cdh4.1.3.p0.23~squeeze-cdh4.1.3
Reporter: Bartosz Cisek
Assignee: Prasad Mujumdar
Priority: Critical

 Our hive jobs fail due to the strange error pasted below. Strace showed that 
 the process created this file, accessed it a few times, and then it threw an 
 exception that it couldn't find the file it had just accessed. In the next 
 step it unlinked it. Yay.
 A very similar problem was reported [in an already closed 
 task|https://issues.apache.org/jira/browse/HIVE-1633?focusedCommentId=13598983page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13598983]
  or left unresolved on [mailing 
 lists|http://mail-archives.apache.org/mod_mbox/hive-user/201307.mbox/%3c94f02eb368b740ebbcd94df4d5d1d...@amxpr03mb054.eurprd03.prod.outlook.com%3E].
 I'll be happy to provide any additional details required. 
 {code:title=Stack trace}
 2013-07-18 12:49:46,109 ERROR security.UserGroupInformation 
 (UserGroupInformation.java:doAs(1335)) - PriviledgedActionException 
 as:username (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not 
 exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile
 2013-07-18 12:49:46,113 ERROR exec.ExecDriver 
 (SessionState.java:printError(403)) - Job Submission failed with exception 
 'java.io.FileNotFoundException(File does not exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile)'
 java.io.FileNotFoundException: File does not exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:787)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.init(CombineFileInputFormat.java:462)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:392)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:358)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:387)
 at 
 org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1040)
 at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1032)
 at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:895)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
 at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:895)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:869)
 at 
 org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
 at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:677)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 {code}
 {code:title=strace with grep emptyFile}
 7385  14:48:02.808096 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808201 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808277 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808348 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808506 
 

[jira] [Commented] (HIVE-5078) [WebHCat] Fix e2e tests on Windows plus test cases for new features

2013-09-09 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762040#comment-13762040
 ] 

Eugene Koifman commented on HIVE-5078:
--

For some reason this patch removes jobsubmission2.conf. Why is this no longer 
relevant?

 [WebHCat] Fix e2e tests on Windows plus test cases for new features
 ---

 Key: HIVE-5078
 URL: https://issues.apache.org/jira/browse/HIVE-5078
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5078-1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4003) NullPointerException in exec.Utilities

2013-09-09 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-4003:
---

Summary: NullPointerException in exec.Utilities  (was: NullPointerException 
in ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java)

 NullPointerException in exec.Utilities
 --

 Key: HIVE-4003
 URL: https://issues.apache.org/jira/browse/HIVE-4003
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Thomas Adam
Assignee: Mark Grover
Priority: Blocker
 Attachments: HIVE-4003.patch, HIVE-4003.patch


 Utilities.java seems to be throwing a NPE.
 Change contributed by Thomas Adam.
 Reference: 
 https://github.com/tecbot/hive/commit/1e29d88837e4101a76e870a716aadb729437355b#commitcomment-2588350

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4003) NullPointerException in ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java

2013-09-09 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762017#comment-13762017
 ] 

Mark Grover commented on HIVE-4003:
---

[~appodictic] or [~brocknoland] would one of you mind committing this?

 NullPointerException in 
 ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
 -

 Key: HIVE-4003
 URL: https://issues.apache.org/jira/browse/HIVE-4003
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Thomas Adam
Assignee: Mark Grover
 Attachments: HIVE-4003.patch, HIVE-4003.patch


 Utilities.java seems to be throwing a NPE.
 Change contributed by Thomas Adam.
 Reference: 
 https://github.com/tecbot/hive/commit/1e29d88837e4101a76e870a716aadb729437355b#commitcomment-2588350

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5001) [WebHCat] JobState is read/written with different user credentials

2013-09-09 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-5001:
-

Priority: Minor  (was: Major)

Lowering priority as this isn't commonly used in production and HIVE-4601 has 
improved HDFSStorage error logging so that this condition is at least visible 
in the logs.

 [WebHCat] JobState is read/written with different user credentials
 --

 Key: HIVE-5001
 URL: https://issues.apache.org/jira/browse/HIVE-5001
 Project: Hive
  Issue Type: Bug
  Components: Authorization, HCatalog
Affects Versions: 0.11.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Minor

 JobState can be persisted to HDFS or Zookeeper. At various points in the 
 lifecycle it's accessed with different user credentials, which may cause errors 
 depending on how permissions are set.
 Example:
 When submitting an MR job, templeton.JarDelegator is used.
 It calls LauncherDelegator#queueAsUser(), which runs TempletonControllerJob 
 with UserGroupInformation.doAs().
 TempletonControllerJob will in turn create JobState and persist it.
 LauncherDelegator.registerJob() also modifies JobState, but w/o doing a doAs().
 So in the latter case it's possible that the state of JobState is persisted by 
 a different user than the one that created/owns the file.
 templeton.tool.HDFSCleanup tries to delete these files w/o doAs.
 The 'childid' file, for example, is created with rw-r--r--,
 and its parent directory (job_201308051224_0001) has rwxr-xr-x.
 HDFSStorage doesn't set file permissions explicitly, so it must be using 
 default permissions.
 So there is a potential issue here (depending on UMASK), especially once 
 HIVE-4601 is addressed.
 Actually, even w/o HIVE-4601, the user that owns the WebHCat process is likely 
 different from the one submitting a request.
 The default for templeton.storage.class is 
 org.apache.hcatalog.templeton.tool.HDFSStorage, but it's likely that most 
 production environments change it to Zookeeper, which may explain why this 
 issue is not commonly seen.
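 A sketch of one way to keep the credentials consistent; the helper and its 
 wiring into LauncherDelegator are hypothetical, only the UserGroupInformation 
 calls are real Hadoop API:
 {code:title=Sketch: touching JobState under the submitting user's credentials (illustrative)}
 import java.io.IOException;
 import java.security.PrivilegedExceptionAction;
 import org.apache.hadoop.security.UserGroupInformation;

 public class JobStateDoAsSketch {
   /** Hypothetical callback that reads or writes the persisted JobState. */
   interface JobStateAction {
     void run() throws IOException;
   }

   // Run the JobState update as the user who submitted the job, so the file
   // owner and the later readers/deleters agree (registerJob-style paths too).
   static void asSubmittingUser(String user, final JobStateAction action)
       throws IOException, InterruptedException {
     UserGroupInformation ugi =
         UserGroupInformation.createProxyUser(user, UserGroupInformation.getLoginUser());
     ugi.doAs(new PrivilegedExceptionAction<Void>() {
       @Override
       public Void run() throws IOException {
         action.run();
         return null;
       }
     });
   }
 }
 {code}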

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4003) NullPointerException in exec.Utilities

2013-09-09 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-4003:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   0.12.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and 0.12! Thank you for your contribution!

 NullPointerException in exec.Utilities
 --

 Key: HIVE-4003
 URL: https://issues.apache.org/jira/browse/HIVE-4003
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Thomas Adam
Assignee: Mark Grover
Priority: Blocker
 Fix For: 0.12.0, 0.13.0

 Attachments: HIVE-4003.patch, HIVE-4003.patch


 Utilities.java seems to be throwing a NPE.
 Change contributed by Thomas Adam.
 Reference: 
 https://github.com/tecbot/hive/commit/1e29d88837e4101a76e870a716aadb729437355b#commitcomment-2588350

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5234) partition name filtering uses suboptimal datastructures

2013-09-09 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5234:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Sergey!

 partition name filtering uses suboptimal datastructures
 ---

 Key: HIVE-5234
 URL: https://issues.apache.org/jira/browse/HIVE-5234
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-5234.D12777.1.patch


 Some data structures used in name-based partition filtering, as well as related 
 methods, are suboptimal, which can cost hundreds of milliseconds on a large 
 number of partitions. I noticed this while perf testing HIVE-4914, but this 
 change can also be applied separately, given that the patch over there will 
 take some time to get in.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4003) NullPointerException in ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java

2013-09-09 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-4003:
--

Priority: Blocker  (was: Major)

 NullPointerException in 
 ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
 -

 Key: HIVE-4003
 URL: https://issues.apache.org/jira/browse/HIVE-4003
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Thomas Adam
Assignee: Mark Grover
Priority: Blocker
 Attachments: HIVE-4003.patch, HIVE-4003.patch


 Utilities.java seems to be throwing a NPE.
 Change contributed by Thomas Adam.
 Reference: 
 https://github.com/tecbot/hive/commit/1e29d88837e4101a76e870a716aadb729437355b#commitcomment-2588350

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4003) NullPointerException in ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java

2013-09-09 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762023#comment-13762023
 ] 

Edward Capriolo commented on HIVE-4003:
---

I marked it as a blocker; if I do not get to it, someone else should.

 NullPointerException in 
 ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
 -

 Key: HIVE-4003
 URL: https://issues.apache.org/jira/browse/HIVE-4003
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Thomas Adam
Assignee: Mark Grover
Priority: Blocker
 Attachments: HIVE-4003.patch, HIVE-4003.patch


 Utilities.java seems to be throwing a NPE.
 Change contributed by Thomas Adam.
 Reference: 
 https://github.com/tecbot/hive/commit/1e29d88837e4101a76e870a716aadb729437355b#commitcomment-2588350

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4619) Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23

2013-09-09 Thread Lars Francke (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Francke updated HIVE-4619:
---

Attachment: HIVE-4619.2.patch

This patch applies cleanly for me on current trunk.

 Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23
 --

 Key: HIVE-4619
 URL: https://issues.apache.org/jira/browse/HIVE-4619
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-4619.2.patch, HIVE-4619.D10971.1.patch


 Path URIs in the input split are missing the scheme (this is fixed in cdh3u6 
 and hadoop 1.0).
 {noformat}
 2013-05-28 14:34:28,857 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Adding alias data_type to work list for file 
 hdfs://qa14:9000/user/hive/warehouse/data_type
 2013-05-28 14:34:28,858 ERROR org.apache.hadoop.hive.ql.exec.MapOperator: 
 Configuration does not have any alias for path: 
 /user/hive/warehouse/data_type/00_0
 2013-05-28 14:34:28,875 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
 Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
 2013-05-28 14:34:28,877 WARN org.apache.hadoop.mapred.Child: Error running 
 child
 java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
 at org.apache.hadoop.mapred.Child.main(Child.java:260)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 9 more
 Caused by: java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
 ... 14 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 17 more
 Caused by: java.lang.RuntimeException: Map operator initialization failed
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Configuration and input 
 path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:522)
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:90)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Configuration 
 and input path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:516)
 ... 23 more
 2013-05-28 14:34:28,881 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
 for the task
 {noformat}
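 An illustration of why the alias lookup misses (not necessarily how the 
 attached patches fix it): the alias map is keyed by the fully qualified 
 directory, so a scheme-less split path has to be qualified against the default 
 filesystem before the lookup.
 {code:title=Sketch: qualifying a scheme-less split path before the alias lookup (illustrative)}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class QualifyPathSketch {
   // A split path such as /user/hive/warehouse/data_type/00_0 cannot match an
   // alias registered under hdfs://qa14:9000/user/hive/warehouse/data_type until
   // it is qualified with the default filesystem's scheme and authority.
   static String qualify(String splitPath, Configuration conf) throws IOException {
     FileSystem fs = FileSystem.get(conf);
     return fs.makeQualified(new Path(splitPath)).toString();
   }
 }
 {code}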

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4003) NullPointerException in exec.Utilities

2013-09-09 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762057#comment-13762057
 ] 

Mark Grover commented on HIVE-4003:
---

Thank you!

 NullPointerException in exec.Utilities
 --

 Key: HIVE-4003
 URL: https://issues.apache.org/jira/browse/HIVE-4003
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Thomas Adam
Assignee: Mark Grover
Priority: Blocker
 Fix For: 0.12.0, 0.13.0

 Attachments: HIVE-4003.patch, HIVE-4003.patch


 Utilities.java seems to be throwing a NPE.
 Change contributed by Thomas Adam.
 Reference: 
 https://github.com/tecbot/hive/commit/1e29d88837e4101a76e870a716aadb729437355b#commitcomment-2588350

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4881) hive local mode: java.io.FileNotFoundException: emptyFile

2013-09-09 Thread Bartosz Cisek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762110#comment-13762110
 ] 

Bartosz Cisek commented on HIVE-4881:
-

Thanks for the response. We switched to server mode and the problem was gone. I 
think this task can be closed.

 hive local mode: java.io.FileNotFoundException: emptyFile
 -

 Key: HIVE-4881
 URL: https://issues.apache.org/jira/browse/HIVE-4881
 Project: Hive
  Issue Type: Bug
 Environment: hive 0.9.0+158-1.cdh4.1.3.p0.23~squeeze-cdh4.1.3
Reporter: Bartosz Cisek
Assignee: Prasad Mujumdar
Priority: Critical

 Our hive jobs fail due to the strange error pasted below. Strace showed that 
 the process created this file, accessed it a few times, and then it threw an 
 exception that it couldn't find the file it had just accessed. In the next 
 step it unlinked it. Yay.
 A very similar problem was reported [in an already closed 
 task|https://issues.apache.org/jira/browse/HIVE-1633?focusedCommentId=13598983page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13598983]
  or left unresolved on [mailing 
 lists|http://mail-archives.apache.org/mod_mbox/hive-user/201307.mbox/%3c94f02eb368b740ebbcd94df4d5d1d...@amxpr03mb054.eurprd03.prod.outlook.com%3E].
 I'll be happy to provide any additional details required. 
 {code:title=Stack trace}
 2013-07-18 12:49:46,109 ERROR security.UserGroupInformation 
 (UserGroupInformation.java:doAs(1335)) - PriviledgedActionException 
 as:username (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not 
 exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile
 2013-07-18 12:49:46,113 ERROR exec.ExecDriver 
 (SessionState.java:printError(403)) - Job Submission failed with exception 
 'java.io.FileNotFoundException(File does not exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile)'
 java.io.FileNotFoundException: File does not exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:787)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.init(CombineFileInputFormat.java:462)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:392)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:358)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:387)
 at 
 org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1040)
 at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1032)
 at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:895)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
 at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:895)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:869)
 at 
 org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
 at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:677)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 {code}
 {code:title=strace with grep emptyFile}
 7385  14:48:02.808096 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808201 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808277 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808348 
 

[jira] [Updated] (HIVE-4568) Beeline needs to support resolving variables

2013-09-09 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-4568:
--

Attachment: HIVE-4568.6.patch

Rename the patch to kick off test (again).

 Beeline needs to support resolving variables
 

 Key: HIVE-4568
 URL: https://issues.apache.org/jira/browse/HIVE-4568
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.10.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.12.0

 Attachments: HIVE-4568-1.patch, HIVE-4568-2.patch, HIVE-4568.3.patch, 
 HIVE-4568.4.patch, HIVE-4568.5.patch, HIVE-4568.6.patch, HIVE-4568.patch


 Previous Hive CLI allows user to specify hive variables at the command line 
 using option --hivevar. In user's script, reference to a hive variable will 
 be substituted with the value of the variable. In such way, user can 
 parameterize his/her script and invoke the script with different hive 
 variable values. The following script is one usage:
 {code}
 hive --hivevar
  INPUT=/user/jenkins/oozie.1371538916178/examples/input-data/table
  --hivevar
  OUTPUT=/user/jenkins/oozie.1371538916178/examples/output-data/hive
  -f script.q
 {code}
 script.q makes use of hive variables:
 {code}
 CREATE EXTERNAL TABLE test (a INT) STORED AS TEXTFILE LOCATION '${INPUT}';
 INSERT OVERWRITE DIRECTORY '${OUTPUT}' SELECT * FROM test;
 {code}
 However, after upgrade to hiveserver2 and beeline, this functionality is 
 missing. Beeline doesn't take --hivevar option, and any hive variable isn't 
 passed to server so it cannot be used for substitution.
 This JIRA is to address this issue, providing a backward compatible behavior 
 at Beeline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Merge vectorization branch to trunk

2013-09-09 Thread Jitendra Pandey
Hi Brock,
   I will merge the latest trunk into the branch and run all the tests again, and
provide you the run-time difference. Trunk has moved ahead since the last
merge, so the test results are a bit stale right now.

 What new modules or dependencies are added?
  No new dependencies have been added.
 Can the feature be disabled if required?
  Yes, there is a boolean hive.vectorized.execution.enabled flag. If it is
false, the vectorization optimization is completely ignored. By default, we
intend to keep it false for now, so that users who don't care about
vectorization don't have to change anything.
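
For illustration, a minimal sketch (using a plain Hadoop Configuration object as a
stand-in for HiveConf, which extends it; the property name is the one above) of
flipping the flag programmatically:
{code}
import org.apache.hadoop.conf.Configuration;

public class VectorizationToggle {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Off by default for now, as noted above; turn it on explicitly per job/session.
    conf.setBoolean("hive.vectorized.execution.enabled", true);
    System.out.println("vectorized execution enabled = "
        + conf.getBoolean("hive.vectorized.execution.enabled", false));
  }
}
{code}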



On Fri, Sep 6, 2013 at 2:24 PM, Brock Noland br...@cloudera.com wrote:

 Hi,

 First of all I'd like to say thanks for your hard work!  I also agree
 that this should go in post 0.12 branch.  I have a couple questions:

 Do all the tests pass?
 How long does ant test take in comparison with trunk?
 What new modules or dependencies are added?
 Can the feature be disabled if required?

 Brock

 On Fri, Sep 6, 2013 at 3:44 PM, Jitendra Pandey
 jiten...@hortonworks.com wrote:
  Hi Folks,
 Vectorized query execution work has been under development for around six
  months now. We have made good progress in making basic datatypes and
  queries work in vectorized mode, and we have laid sufficient
  groundwork so that any future development can happen directly on trunk.
 This is a community effort involving several developers including Eric
  Hanson, Remus Rusanu, Sarvesh Sakalanaga
  (https://issues.apache.org/jira/secure/ViewProfile.jspa?name=sarvesh.sn),
  Gopal V, Teddy Choi, Tony Murphy, Timothy Chen, Prashanth J and myself.
  I believe the vectorization branch is now ready to be merged to trunk.
  I propose to merge it immediately after the hive-0.12 branch is cut,
  so that we get sufficient time before the next release.
 
  thanks,
  jitendra
  --
  http://hortonworks.com/download/
 



 --
 Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org




-- 
http://hortonworks.com/download/



[jira] [Commented] (HIVE-5199) Custom SerDe containing a nonSettable complex data type row object inspector throws cast exception with HIVE 0.11

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761997#comment-13761997
 ] 

Hudson commented on HIVE-5199:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #157 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/157/])
HIVE-5199 : Custom SerDe containing a nonSettable complex data type row object 
inspector throws cast exception with HIVE 0.11 (Hari Sankar via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1521155)
* /hive/trunk/build-common.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomNonSettableListObjectInspector1.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomNonSettableStructObjectInspector1.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe1.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/CustomSerDe2.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_wise_fileformat17.q
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat17.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorConverters.java


 Custom SerDe containing a nonSettable complex data type row object inspector 
 throws cast exception with HIVE 0.11
 -

 Key: HIVE-5199
 URL: https://issues.apache.org/jira/browse/HIVE-5199
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-5199.2.patch.txt, HIVE-5199.3.patch.txt, 
 HIVE-5199.patch.4.txt, HIVE-5199.patch.txt


 The issue happens because of the changes in HIVE-3833.
 Consider a partitioned table with different custom serdes for the partition 
 and tables. The serde at table level, say, customSerDe1's object inspector is 
  of settableDataType, whereas the serde at partition level, say, 
 customSerDe2's object inspector is of nonSettableDataType. The current 
 implementation introduced by HIVE-3833 does not convert nested Complex Data 
 Types which extend nonSettableObjectInspector to a settableObjectInspector 
 type inside ObjectInspectorConverters.getConvertedOI(). However, it tries to 
 typecast the nonSettableObjectInspector to a settableObjectInspector inside  
 ObjectInspectorConverters.getConverter(ObjectInspector inputOI, 
 ObjectInspector outputOI).
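 As an illustrative sketch only (hypothetical interfaces, not the actual Hive object
 inspector classes), the failing pattern is a blind downcast where an instanceof guard
 plus a fallback conversion is needed:
 {code}
 // Hypothetical stand-ins for the real (non)settable object-inspector interfaces.
 interface MapOI { }
 interface SettableMapOI extends MapOI { void put(Object key, Object value); }

 class ConverterSketch {
   // Failing pattern: assumes every output OI is settable, so a custom
   // non-settable OI triggers a ClassCastException at runtime.
   static SettableMapOI blindCast(MapOI outputOI) {
     return (SettableMapOI) outputOI;
   }

   // Safer pattern: cast only after checking, otherwise substitute a settable stand-in.
   static SettableMapOI guarded(MapOI outputOI) {
     if (outputOI instanceof SettableMapOI) {
       return (SettableMapOI) outputOI;
     }
     return new SettableMapOI() {
       public void put(Object key, Object value) { /* collect into a plain map */ }
     };
   }
 }
 {code}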
 The attached patch HIVE-5199.2.patch.txt contains a stand-alone test case.
 The below exception can happen via FetchOperator as well as MapOperator. 
 For example, consider the FetchOperator.
 Inside FetchOperator consider the following call:
  getRecordReader() -> ObjectInspectorConverters.getConverter()
 The stack trace as follows:
 2013-08-28 17:57:25,307 ERROR CliDriver (SessionState.java:printError(432)) - 
 Failed with exception java.io.IOException:java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 java.io.IOException: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:544)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:488)
 at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
 at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1412)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 

[jira] [Commented] (HIVE-4492) Revert HIVE-4322

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761996#comment-13761996
 ] 

Hudson commented on HIVE-4492:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #157 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/157/])
HIVE-4492 : Revert HIVE4322 (Samuel Yuan and Ashutosh Chauhan via Ashutosh 
Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1521120)
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/hive_metastore_types.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedInfo.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/SkewedValueList.java
* /hive/trunk/metastore/src/gen/thrift/gen-php/metastore/Types.php
* /hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ttypes.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/hive_metastore_types.rb
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Partition.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/MetaDataFormatUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/listbucketingpruner/ListBucketingPruner.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java


 Revert HIVE-4322
 

 Key: HIVE-4492
 URL: https://issues.apache.org/jira/browse/HIVE-4492
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Reporter: Samuel Yuan
Assignee: Samuel Yuan
 Fix For: 0.13.0

 Attachments: HIVE-4492.1.patch.txt, HIVE-4492.patch


 See HIVE-4432 and HIVE-4433. It's possible to work around these issues but a 
 better solution is probably to roll back the fix and change the API to use 
 a primitive type as the map key (in a backwards-compatible manner).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4925) Modify Hive build to enable compiling and running Hive with JDK7

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761998#comment-13761998
 ] 

Hudson commented on HIVE-4925:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #157 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/157/])
HIVE-4925 : Modify Hive build to enable compiling and running Hive with JDK7 
(Xuefu Zhang via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1521157)
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/hcatalog/build.properties
* /hive/trunk/hcatalog/storage-handlers/hbase/build.xml
* /hive/trunk/shims/build.xml


 Modify Hive build to enable compiling and running Hive with JDK7
 

 Key: HIVE-4925
 URL: https://issues.apache.org/jira/browse/HIVE-4925
 Project: Hive
  Issue Type: Sub-task
  Components: Build Infrastructure
Affects Versions: 0.11.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.13.0

 Attachments: HIVE-4925.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5218) datanucleus does not work with SQLServer in Hive metastore

2013-09-09 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762094#comment-13762094
 ] 

Sergey Soldatov commented on HIVE-5218:
---

 Could you please try the attached fix? If it still fails, could you please 
attach the logs?

 datanucleus does not work with SQLServer in Hive metastore
 --

 Key: HIVE-5218
 URL: https://issues.apache.org/jira/browse/HIVE-5218
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
Reporter: shanyu zhao
 Attachments: 
 0001-HIVE-5218-datanucleus-does-not-work-with-SQLServer-i.patch


  HIVE-3632 upgraded the datanucleus version to 3.2.x; however, this version of 
  datanucleus doesn't work with SQLServer as the metastore. The problem is that 
  datanucleus tries to use the fully qualified object name to find a table in the 
  database but cannot find it.
  If I downgrade to the datanucleus version from HIVE-2084, SQLServer works fine.
  It could be a bug in datanucleus.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5218) datanucleus does not work with SQLServer in Hive metastore

2013-09-09 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HIVE-5218:
--

Attachment: 0001-HIVE-5218-datanucleus-does-not-work-with-SQLServer-i.patch

 datanucleus does not work with SQLServer in Hive metastore
 --

 Key: HIVE-5218
 URL: https://issues.apache.org/jira/browse/HIVE-5218
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0
Reporter: shanyu zhao
 Attachments: 
 0001-HIVE-5218-datanucleus-does-not-work-with-SQLServer-i.patch


  HIVE-3632 upgraded the datanucleus version to 3.2.x; however, this version of 
  datanucleus doesn't work with SQLServer as the metastore. The problem is that 
  datanucleus tries to use the fully qualified object name to find a table in the 
  database but cannot find it.
  If I downgrade to the datanucleus version from HIVE-2084, SQLServer works fine.
  It could be a bug in datanucleus.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4844) Add varchar data type

2013-09-09 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762007#comment-13762007
 ] 

Xuefu Zhang commented on HIVE-4844:
---

I think people use varchar in Hive probably because of legacy data from other DBs. 
Thus, a maximum length on the scale of what major DBs support would be good. Yes, it's 
arbitrary: 64K would be good, but 64M or 64G might not be as good.

 Add varchar data type
 -

 Key: HIVE-4844
 URL: https://issues.apache.org/jira/browse/HIVE-4844
 Project: Hive
  Issue Type: New Feature
  Components: Types
Reporter: Jason Dere
Assignee: Jason Dere
 Attachments: HIVE-4844.10.patch, HIVE-4844.11.patch, 
 HIVE-4844.12.patch, HIVE-4844.1.patch.hack, HIVE-4844.2.patch, 
 HIVE-4844.3.patch, HIVE-4844.4.patch, HIVE-4844.5.patch, HIVE-4844.6.patch, 
 HIVE-4844.7.patch, HIVE-4844.8.patch, HIVE-4844.9.patch, 
 HIVE-4844.D12699.1.patch, screenshot.png


 Add new varchar data types which have support for more SQL-compliant 
 behavior, such as SQL string comparison semantics, max length, etc.
 Char type will be added as another task.
 NO PRECOMMIT TESTS - now dependent on HIVE-5203/5204/5206
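 As an illustrative sketch only (the JDBC URL, credentials, and table name are
 hypothetical), the kind of DDL this enables once the type is in:
 {code}
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.Statement;

 public class VarcharSketch {
   public static void main(String[] args) throws Exception {
     Class.forName("org.apache.hive.jdbc.HiveDriver");  // HiveServer2 JDBC driver
     Connection con =
         DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
     Statement stmt = con.createStatement();
     stmt.execute("CREATE TABLE varchar_demo (name VARCHAR(50))");  // length-bounded column
     ResultSet rs = stmt.executeQuery("DESCRIBE varchar_demo");
     while (rs.next()) {
       System.out.println(rs.getString(1) + "\t" + rs.getString(2));
     }
     con.close();
   }
 }
 {code}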

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5233) move hbase storage handler to org.apache.hcatalog package

2013-09-09 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5233:
---

Release Note: The Hive HBase Storage Handler is deprecated as of 0.12.

 move hbase storage handler to org.apache.hcatalog package
 -

 Key: HIVE-5233
 URL: https://issues.apache.org/jira/browse/HIVE-5233
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0

 Attachments: 5233.move, 5233.update, HIVE-5233.patch


 org.apache.hcatalog in hcatalog/storage-handlers/ was erroneously renamed to 
 org.apache.hive.hcatalog in HIVE-4895.  This should be reverted as this 
 module is deprecated and should continue to exist in org.apache.hcatalog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-5233) move hbase storage handler to org.apache.hcatalog package

2013-09-09 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5233:
---

Release Note: The Hive HCatalog HBase Storage Handler is deprecated as of 
0.12.  (was: The Hive HBase Storage Handler is deprecated as of 0.12.)

 move hbase storage handler to org.apache.hcatalog package
 -

 Key: HIVE-5233
 URL: https://issues.apache.org/jira/browse/HIVE-5233
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0

 Attachments: 5233.move, 5233.update, HIVE-5233.patch


 org.apache.hcatalog in hcatalog/storage-handlers/ was erroneously renamed to 
 org.apache.hive.hcatalog in HIVE-4895.  This should be reverted as this 
 module is deprecated and should continue to exist in org.apache.hcatalog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5233) move hbase storage handler to org.apache.hcatalog package

2013-09-09 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762156#comment-13762156
 ] 

Brock Noland commented on HIVE-5233:


[~sushanth] [~ekoifman] since this is deprecated in 0.12 we should be able to 
delete it from trunk, right?

 move hbase storage handler to org.apache.hcatalog package
 -

 Key: HIVE-5233
 URL: https://issues.apache.org/jira/browse/HIVE-5233
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0

 Attachments: 5233.move, 5233.update, HIVE-5233.patch


 org.apache.hcatalog in hcatalog/storage-handlers/ was erroneously renamed to 
 org.apache.hive.hcatalog in HIVE-4895.  This should be reverted as this 
 module is deprecated and should continue to exist in org.apache.hcatalog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4619) Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23

2013-09-09 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762167#comment-13762167
 ] 

Hive QA commented on HIVE-4619:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12602163/HIVE-4619.2.patch

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 3088 tests 
executed
*Failed tests:*
{noformat}
org.apache.hcatalog.hbase.snapshot.lock.TestWriteLock.testRun
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSchema
org.apache.hive.hcatalog.fileformats.TestOrcDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_partition_wise_fileformat17
org.apache.hive.hcatalog.api.TestHCatClient.testObjectNotFoundException
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure
org.apache.hive.hcatalog.api.TestHCatClient.testDropTableException
org.apache.hive.hcatalog.api.TestHCatClient.testDropPartitionsWithPartialSpec
org.apache.hive.hcatalog.api.TestHCatClient.testCreateTableLike
org.apache.hive.hcatalog.api.TestHCatClient.testRenameTable
org.apache.hive.hcatalog.api.TestHCatClient.testOtherFailure
org.apache.hive.hcatalog.api.TestHCatClient.testBasicDDLCommands
org.apache.hive.hcatalog.pig.TestOrcHCatStorer.testStoreBasicTable
org.apache.hcatalog.api.TestHCatClient.testBasicDDLCommands
org.apache.hive.hcatalog.api.TestHCatClient.testGetPartitionsWithPartialSpec
org.apache.hive.hcatalog.api.TestHCatClient.testUpdateTableSchema
org.apache.hive.hcatalog.api.TestHCatClient.testDatabaseLocation
org.apache.hive.hcatalog.pig.TestHCatStorerMulti.testStorePartitionedTable
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionsHCatClientImpl
org.apache.hcatalog.pig.TestHCatLoaderComplexSchema.testSyntheticComplexSchema
org.apache.hive.hcatalog.api.TestHCatClient.testGetMessageBusTopicName
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/673/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/673/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

 Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23
 --

 Key: HIVE-4619
 URL: https://issues.apache.org/jira/browse/HIVE-4619
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-4619.2.patch, HIVE-4619.D10971.1.patch


  path URIs in the input split are missing the scheme (it's fixed in cdh3u6 and hadoop 
  1.0)
 {noformat}
 2013-05-28 14:34:28,857 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Adding alias data_type to work list for file 
 hdfs://qa14:9000/user/hive/warehouse/data_type
 2013-05-28 14:34:28,858 ERROR org.apache.hadoop.hive.ql.exec.MapOperator: 
 Configuration does not have any alias for path: 
 /user/hive/warehouse/data_type/00_0
 2013-05-28 14:34:28,875 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
 Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
 2013-05-28 14:34:28,877 WARN org.apache.hadoop.mapred.Child: Error running 
 child
 java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
 at org.apache.hadoop.mapred.Child.main(Child.java:260)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 9 more
 Caused 

[jira] [Commented] (HIVE-5071) Address thread safety issues with HiveHistoryUtil

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762142#comment-13762142
 ] 

Hudson commented on HIVE-5071:
--

SUCCESS: Integrated in Hive-trunk-h0.21 #2320 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2320/])
HIVE-5071 : Address thread safety issues with HiveHistoryUtil (Teddy Choi 
reviewed by Edward Capriolo committed by Navis) (navis: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1520979)
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/history/HiveHistoryUtil.java


 Address thread safety issues with HiveHistoryUtil
 -

 Key: HIVE-5071
 URL: https://issues.apache.org/jira/browse/HIVE-5071
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Thiruvel Thirumoolan
Assignee: Teddy Choi
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-5071.1.patch.txt


  HiveHistoryUtil.parseLine() is not thread-safe, yet it could be used concurrently by 
  multiple clients of HWA.
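 A minimal sketch (hypothetical code, not the actual HiveHistoryUtil) of one common
 fix: build a fresh map per call instead of mutating a shared static buffer, so
 concurrent callers cannot corrupt each other's results:
 {code}
 import java.util.HashMap;
 import java.util.Map;

 class ParseLineSketch {
   // Thread-safe variant: all state is local to the invocation.
   static Map<String, String> parseLine(String line) {
     Map<String, String> parsed = new HashMap<String, String>();
     for (String kv : line.split(" ")) {
       int eq = kv.indexOf('=');
       if (eq > 0) {
         parsed.put(kv.substring(0, eq), kv.substring(eq + 1));
       }
     }
     return parsed;
   }
 }
 {code}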

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: creating branch 0.12

2013-09-09 Thread Thejas Nair
Carl,
Thanks for fixing this.

Sorry about that. Didn't expect the branching change to break the
build! I should have still done a build after the version number
change.
I have updated the instructions to include these steps:
https://cwiki.apache.org/confluence/display/Hive/HowToRelease

On Sat, Sep 7, 2013 at 11:22 PM, Carl Steinbach cwsteinb...@gmail.com wrote:
 I committed a patch on trunk and branch-0.12 that fixes this issue. All I
 had to do was
 update the Hive version number in eight separate pom files located in
 various hcatalog
 subdirectories. Switching to Maven is going to make this easier, right?


 On Sat, Sep 7, 2013 at 2:55 PM, Edward Capriolo edlinuxg...@gmail.comwrote:

 I am trying to accomplish this ticket...

 https://issues.apache.org/jira/browse/HIVE-5087

 [edward@jackintosh branch-0.12]$ ant clean package
 /home/edward/Documents/java/branch-0.12/build.xml:168: The following error
 occurred while executing this line:
 /home/edward/Documents/java/branch-0.12/hcatalog/build.xml:68: The
 following error occurred while executing this line:

 /home/edward/Documents/java/branch-0.12/hcatalog/build-support/ant/deploy.xml:81:
 The following error occurred while executing this line:

 /home/edward/Documents/java/branch-0.12/hcatalog/build-support/ant/deploy.xml:57:
 The following error occurred while executing this line:

 /home/edward/Documents/java/branch-0.12/hcatalog/build-support/ant/deploy.xml:48:
 Error installing artifact 'org.apache.hive:hive-shims:jar': Error
 installing artifact: File

 /home/edward/Documents/java/branch-0.12/build/shims/hive-shims-0.12.0-SNAPSHOT.jar
 does not exist

 Total time: 2 minutes 9 seconds


  Neither trunk nor the 0.12 branch can complete ant clean package... test needs
  package


 On Fri, Sep 6, 2013 at 9:55 PM, Hari Subramaniyan 
 hsubramani...@hortonworks.com wrote:

  HIVE-5199(Custom SerDe containing a nonSettable complex data type row
  object inspector throws cast exception with HIVE 0.11) is not committed
  yet, but needs to go in for hive 0.12.
 
  Thanks
  Hari
 
 
  On Fri, Sep 6, 2013 at 5:29 PM, Thejas Nair the...@hortonworks.com
  wrote:
 
   As discussed in earlier thread, I will go ahead and create a branch
   for hive 0.12 release. I will do that in another hour.
  
   Please reply to this thread if there are any jiras that are not
   committed yet, that you feel should be included in hive 0.12.  Please
   include only jiras that are being actively worked on (or that you are
   willing to actively work on), and those that are likely to able to get
   to a committable state in another week or so. I would like to get 0.12
   into a stabilizing phase in two weeks, which would mean freezing the
   branch at that point to checkin only any blocker bug fixes.
  
   Thanks,
   Thejas
  
  
 
 




[jira] [Commented] (HIVE-4619) Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23

2013-09-09 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762170#comment-13762170
 ] 

Brock Noland commented on HIVE-4619:


Man those hcatalog tests are flaky. Let's run this one more time.

 Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23
 --

 Key: HIVE-4619
 URL: https://issues.apache.org/jira/browse/HIVE-4619
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-4619.2.patch, HIVE-4619.D10971.1.patch


  path URIs in the input split are missing the scheme (it's fixed in cdh3u6 and hadoop 
  1.0)
 {noformat}
 2013-05-28 14:34:28,857 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Adding alias data_type to work list for file 
 hdfs://qa14:9000/user/hive/warehouse/data_type
 2013-05-28 14:34:28,858 ERROR org.apache.hadoop.hive.ql.exec.MapOperator: 
 Configuration does not have any alias for path: 
 /user/hive/warehouse/data_type/00_0
 2013-05-28 14:34:28,875 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
 Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
 2013-05-28 14:34:28,877 WARN org.apache.hadoop.mapred.Child: Error running 
 child
 java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
 at org.apache.hadoop.mapred.Child.main(Child.java:260)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 9 more
 Caused by: java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
 ... 14 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 17 more
 Caused by: java.lang.RuntimeException: Map operator initialization failed
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Configuration and input 
 path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:522)
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:90)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Configuration 
 and input path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:516)
 ... 23 more
 2013-05-28 14:34:28,881 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
 for the task
 {noformat}
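 A small sketch (hypothetical, only org.apache.hadoop.fs.Path is real) of why the alias
 lookup misses: the alias map is keyed by the fully qualified URI from the log above,
 while the split on the affected versions reports a scheme-less path, so the string
 lookup returns nothing:
 {code}
 import java.util.HashMap;
 import java.util.Map;
 import org.apache.hadoop.fs.Path;

 public class AliasLookupSketch {
   public static void main(String[] args) {
     Map<String, String> pathToAlias = new HashMap<String, String>();
     // Registered with the fully qualified URI, as in "Adding alias ... for file hdfs://...".
     pathToAlias.put("hdfs://qa14:9000/user/hive/warehouse/data_type", "data_type");

     // The input split reports the path without a scheme.
     Path fromSplit = new Path("/user/hive/warehouse/data_type");

     // Misses, which surfaces as "Configuration does not have any alias for path".
     System.out.println(pathToAlias.get(fromSplit.toString()));  // prints: null
   }
 }
 {code}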

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5138) Streaming - Web HCat API

2013-09-09 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762146#comment-13762146
 ] 

Eugene Koifman commented on HIVE-5138:
--

[~roshan_naik] A couple of comments on this patch:
1. All delegators in WebHCat take the 'user' as determined by Server.java and 
use that to make secure calls to JobTracker, HDFS, etc.  HCatStreamingDelegator 
ignores it.  Why is that?
2. Most operations in HCatStreamingDelegator do multiple things (like modify 
metadata, create some HDFS file, etc.).  It sounds like every one of these 
operations should be atomic.  For example, say for some reason 2 identical 
calls to partitionRoll() happen at the same time.  How is this atomicity 
achieved?

 Streaming - Web HCat  API
 -

 Key: HIVE-5138
 URL: https://issues.apache.org/jira/browse/HIVE-5138
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog, Metastore
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: HIVE-4196.v2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5199) Custom SerDe containing a nonSettable complex data type row object inspector throws cast exception with HIVE 0.11

2013-09-09 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762159#comment-13762159
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-5199:
-

Ashutosh Chauhan, thanks for committing the changes. I see that 
data/files/pw17.txt is missing from the commit. Can you please make sure it 
is added to trunk? Otherwise the testcase partition_wise_fileformat17 will 
fail.

 Custom SerDe containing a nonSettable complex data type row object inspector 
 throws cast exception with HIVE 0.11
 -

 Key: HIVE-5199
 URL: https://issues.apache.org/jira/browse/HIVE-5199
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-5199.2.patch.txt, HIVE-5199.3.patch.txt, 
 HIVE-5199.patch.4.txt, HIVE-5199.patch.txt


 The issue happens because of the changes in HIVE-3833.
 Consider a partitioned table with different custom serdes for the partition 
 and tables. The serde at table level, say, customSerDe1's object inspector is 
  of settableDataType, whereas the serde at partition level, say, 
 customSerDe2's object inspector is of nonSettableDataType. The current 
 implementation introduced by HIVE-3833 does not convert nested Complex Data 
 Types which extend nonSettableObjectInspector to a settableObjectInspector 
 type inside ObjectInspectorConverters.getConvertedOI(). However, it tries to 
 typecast the nonSettableObjectInspector to a settableObjectInspector inside  
 ObjectInspectorConverters.getConverter(ObjectInspector inputOI, 
 ObjectInspector outputOI).
 The attached patch HIVE-5199.2.patch.txt contains a stand-alone test case.
 The below exception can happen via FetchOperator as well as MapOperator. 
 For example, consider the FetchOperator.
 Inside FetchOperator consider the following call:
  getRecordReader() -> ObjectInspectorConverters.getConverter()
 The stack trace as follows:
 2013-08-28 17:57:25,307 ERROR CliDriver (SessionState.java:printError(432)) - 
 Failed with exception java.io.IOException:java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 java.io.IOException: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:544)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:488)
 at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
 at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1412)
 at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
 at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
 at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
 at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
 Caused by: java.lang.ClassCastException: 
 com.skype.data.whaleshark.hadoop.hive.proto.ProtoMapObjectInspector cannot be 
 cast to 
 org.apache.hadoop.hive.serde2.objectinspector.SettableMapObjectInspector
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters.getConverter(ObjectInspectorConverters.java:144)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters$StructConverter.init(ObjectInspectorConverters.java:307)
 at 
 org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters.getConverter(ObjectInspectorConverters.java:138)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:406)
 at 
 org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4871) Apache builds fail with Target make-pom does not exist in the project hcatalog.

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13761999#comment-13761999
 ] 

Hudson commented on HIVE-4871:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #157 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/157/])
HIVE-4871 : Apache builds fail with Target make-pom does not exist in the 
project hcatalog (Eugene Koifman via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1521143)
* /hive/trunk/build.properties
* /hive/trunk/build.xml
* /hive/trunk/hcatalog/build.xml


 Apache builds fail with Target make-pom does not exist in the project 
 hcatalog.
 ---

 Key: HIVE-4871
 URL: https://issues.apache.org/jira/browse/HIVE-4871
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-4871.patch

   Original Estimate: 24h
  Time Spent: 24.4h
  Remaining Estimate: 0h

 For example,
 https://builds.apache.org/job/Hive-trunk-h0.21/2192/console.
 All unit tests pass, but deployment of build artifacts fails.
 HIVE-4387 provided a bandaid for 0.11.  Need to figure out long term fix for 
 this for 0.12.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-1511) Hive plan serialization is slow

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762000#comment-13762000
 ] 

Hudson commented on HIVE-1511:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #157 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/157/])
HIVE-1511

Summary: Hive Plan Serialization

Test Plan: Regression test suite

Reviewers: brock

Reviewed By: brock

Differential Revision: https://reviews.facebook.net/D12789 (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1521110)
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/udtf/example/GenericUDTFCount2.java
* 
/hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/udtf/example/GenericUDTFExplode2.java
* /hive/trunk/hcatalog/pom.xml
* /hive/trunk/ivy/ivysettings.xml
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/ivy.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnInfo.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeColumnEvaluator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableDummyOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/RowSchema.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/HadoopJobExecHelper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapRedTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/BucketingSortingCtx.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinTaskDispatcher.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/LocalMapJoinProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SortMergeJoinTaskDispatcher.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBJoinTree.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PTFDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFnGrams.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFArray.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFFormatNumber.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIndex.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFNamedStruct.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFStruct.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToUnixTimeStamp.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFExplode.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFJSONTuple.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFParseUrlTuple.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFStack.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestPlan.java
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/cast1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/groupby6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input20.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input3.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input4.q.xml

[jira] [Commented] (HIVE-5234) partition name filtering uses suboptimal datastructures

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762001#comment-13762001
 ] 

Hudson commented on HIVE-5234:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #157 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/157/])
HIVE-5234 : partition name filtering uses suboptimal datastructures (Sergey 
Shelukhin via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1521158)
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java


 partition name filtering uses suboptimal datastructures
 ---

 Key: HIVE-5234
 URL: https://issues.apache.org/jira/browse/HIVE-5234
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-5234.D12777.1.patch


  Some data structures used in name-based partition filtering, as well as related methods, 
  are suboptimal, which can cost hundreds of milliseconds on a large number of partitions. I 
  noticed this while perf-testing HIVE-4914, but the fix can also be applied separately 
  given that the patch over there will take some time to get in.
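 For illustration only (hypothetical code, not the actual metastore or pruner classes),
 the kind of change involved is replacing repeated linear scans with hashed lookups when
 filtering many partition names:
 {code}
 import java.util.ArrayList;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;

 class PartitionFilterSketch {
   // Suboptimal: List.contains() is a linear scan, so the filter is O(n * m).
   static List<String> filterWithList(List<String> names, List<String> wanted) {
     List<String> out = new ArrayList<String>();
     for (String name : names) {
       if (wanted.contains(name)) {
         out.add(name);
       }
     }
     return out;
   }

   // Better: hash the lookup side once, making the filter O(n + m) on average.
   static List<String> filterWithSet(List<String> names, List<String> wanted) {
     Set<String> wantedSet = new HashSet<String>(wanted);
     List<String> out = new ArrayList<String>();
     for (String name : names) {
       if (wantedSet.contains(name)) {
         out.add(name);
       }
     }
     return out;
   }
 }
 {code}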

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5239) LazyDate goes into irretrievable NULL mode once inited with NULL once

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762002#comment-13762002
 ] 

Hudson commented on HIVE-5239:
--

FAILURE: Integrated in Hive-trunk-hadoop1-ptest #157 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/157/])
HIVE-5239 : LazyDate goes into irretrievable NULL mode once inited with NULL 
once (Jason Dere via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1521136)
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyDate.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/lazy/TestLazyPrimitive.java


 LazyDate goes into irretrievable NULL mode once inited with NULL once
 -

 Key: HIVE-5239
 URL: https://issues.apache.org/jira/browse/HIVE-5239
 Project: Hive
  Issue Type: Bug
Reporter: Jason Dere
Assignee: Jason Dere
 Fix For: 0.13.0

 Attachments: HIVE-5239.1.patch


 Stumbled across HIVE-4757 with Timestamp.  It looks like Date has the same 
 issue.
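 A sketch of the bug pattern only (hypothetical field and method names, not the actual
 LazyDate code): once a bad value flips the null flag, a later init never clears it, so
 every following row also reads as NULL; the fix is to reset the flag at the top of init:
 {code}
 import java.sql.Date;

 class StickyNullSketch {
   private boolean isNull = false;
   private Date data;

   void init(String text) {
     isNull = false;              // the fix: clear NULL state left over from the previous row
     try {
       data = Date.valueOf(text);
     } catch (IllegalArgumentException e) {
       isNull = true;             // without the reset above, this would stick forever
       data = null;
     }
   }

   Date getObject() {
     return isNull ? null : data;
   }
 }
 {code}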

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4619) Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23

2013-09-09 Thread Mike Lewis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762129#comment-13762129
 ] 

Mike Lewis commented on HIVE-4619:
--

I just verified this patch fixes the issue with cdh4.3

 Hive 0.11.0 is not working with pre-cdh3u6 and hadoop-0.23
 --

 Key: HIVE-4619
 URL: https://issues.apache.org/jira/browse/HIVE-4619
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-4619.2.patch, HIVE-4619.D10971.1.patch


  path URIs in the input split are missing the scheme (it's fixed in cdh3u6 and hadoop 
  1.0)
 {noformat}
 2013-05-28 14:34:28,857 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 
 Adding alias data_type to work list for file 
 hdfs://qa14:9000/user/hive/warehouse/data_type
 2013-05-28 14:34:28,858 ERROR org.apache.hadoop.hive.ql.exec.MapOperator: 
 Configuration does not have any alias for path: 
 /user/hive/warehouse/data_type/00_0
 2013-05-28 14:34:28,875 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
 Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
 2013-05-28 14:34:28,877 WARN org.apache.hadoop.mapred.Child: Error running 
 child
 java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:387)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:266)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
 at org.apache.hadoop.mapred.Child.main(Child.java:260)
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 9 more
 Caused by: java.lang.RuntimeException: Error in configuring object
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
 at 
 org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
 at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
 ... 14 more
 Caused by: java.lang.reflect.InvocationTargetException
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
 ... 17 more
 Caused by: java.lang.RuntimeException: Map operator initialization failed
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
 org.apache.hadoop.hive.ql.metadata.HiveException: Configuration and input 
 path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:522)
 at 
 org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:90)
 ... 22 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Configuration 
 and input path are inconsistent
 at 
 org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:516)
 ... 23 more
 2013-05-28 14:34:28,881 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
 for the task
 {noformat}
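
The mismatch is visible in the log above: the work list is keyed by the fully 
qualified URI (hdfs://qa14:9000/user/hive/warehouse/data_type) while the split 
hands back a scheme-less path. A minimal sketch of the usual normalization step, 
qualifying a path against its file system before using it as a lookup key 
(illustrative only, not the actual MapOperator change):

{code:title=Qualifying a scheme-less path before the alias lookup (illustrative only)}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PathAliasLookupSketch {

  // Qualify a possibly scheme-less path against its file system so that
  // "/user/hive/warehouse/data_type" and
  // "hdfs://qa14:9000/user/hive/warehouse/data_type" compare equal as map keys.
  public static Path qualify(String rawPath, Configuration conf) throws IOException {
    Path path = new Path(rawPath);
    FileSystem fs = path.getFileSystem(conf);
    return fs.makeQualified(path);
  }
}
{code}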

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-5233) move hbase storage handler to org.apache.hcatalog package

2013-09-09 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762234#comment-13762234
 ] 

Sushanth Sowmyan commented on HIVE-5233:


I think our original intent was to keep deprecated classes for 2 releases - e.g., 
if this gets deprecated in 0.12, we keep it for 0.13, but it's gone in 0.14. So 
we'd still need it in trunk.

[~alangates] might want to comment on this too, so tagging him.

 move hbase storage handler to org.apache.hcatalog package
 -

 Key: HIVE-5233
 URL: https://issues.apache.org/jira/browse/HIVE-5233
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0

 Attachments: 5233.move, 5233.update, HIVE-5233.patch


 org.apache.hcatalog in hcatalog/storage-handlers/ was erroneously renamed to 
 org.apache.hive.hcatalog in HIVE-4895.  This should be reverted as this 
 module is deprecated and should continue to exist in org.apache.hcatalog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4881) hive local mode: java.io.FileNotFoundException: emptyFile

2013-09-09 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762029#comment-13762029
 ] 

Prasad Mujumdar commented on HIVE-4881:
---

When local mode is enabled, the query context always returns the scratch dir 
path on the local file system, which gets pushed down into the job. That's what 
is causing the problem. The 'empty' file created for empty tables/partitions 
should be created on the same file system as the table.
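
A minimal sketch of the direction this points in, with a hypothetical helper 
rather than the actual Hive code: derive the FileSystem from the scratch path 
that the job will actually read, instead of assuming the local file system.

{code:title=Creating the empty placeholder on the right file system (illustrative only)}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EmptyFileSketch {

  // Create the zero-length placeholder on the file system the scratch dir
  // belongs to, rather than implicitly on the local file system.
  public static Path createEmptyFile(Path scratchDir, Configuration conf) throws IOException {
    FileSystem fs = scratchDir.getFileSystem(conf); // not FileSystem.getLocal(conf)
    Path emptyFile = new Path(scratchDir, "emptyFile");
    fs.create(emptyFile, true).close();             // zero-length file
    return fs.makeQualified(emptyFile);
  }
}
{code}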


 hive local mode: java.io.FileNotFoundException: emptyFile
 -

 Key: HIVE-4881
 URL: https://issues.apache.org/jira/browse/HIVE-4881
 Project: Hive
  Issue Type: Bug
 Environment: hive 0.9.0+158-1.cdh4.1.3.p0.23~squeeze-cdh4.1.3
Reporter: Bartosz Cisek
Assignee: Prasad Mujumdar
Priority: Critical

 Our Hive jobs fail due to the strange error pasted below. Strace showed that 
 the process created this file, accessed it a few times, and then threw an 
 exception that it couldn't find the file it had just accessed. In the next 
 step it unlinked it. Yay.
 A very similar problem was reported [in an already closed 
 task|https://issues.apache.org/jira/browse/HIVE-1633?focusedCommentId=13598983&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13598983]
 or left unresolved on the [mailing 
 lists|http://mail-archives.apache.org/mod_mbox/hive-user/201307.mbox/%3c94f02eb368b740ebbcd94df4d5d1d...@amxpr03mb054.eurprd03.prod.outlook.com%3E].
 I'll be happy to provide any additional details required. 
 {code:title=Stack trace}
 2013-07-18 12:49:46,109 ERROR security.UserGroupInformation 
 (UserGroupInformation.java:doAs(1335)) - PriviledgedActionException 
 as:username (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not 
 exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile
 2013-07-18 12:49:46,113 ERROR exec.ExecDriver 
 (SessionState.java:printError(403)) - Job Submission failed with exception 
 'java.io.FileNotFoundException(File does not exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile)'
 java.io.FileNotFoundException: File does not exist: 
 /tmp/username/hive_2013-07-18_12-49-45_218_605775464480014480/-mr-1/1/emptyFile
 at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:787)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.init(CombineFileInputFormat.java:462)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)
 at 
 org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:392)
 at 
 org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:358)
 at 
 org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:387)
 at 
 org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1040)
 at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1032)
 at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:172)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:895)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
 at 
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:895)
 at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:869)
 at 
 org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
 at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:677)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 {code}
 {code:title=strace with grep emptyFile}
 7385  14:48:02.808096 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808201 
 stat(/tmp/username/hive_2013-07-18_14-48-00_700_8005967322498387476/-mr-1/1/emptyFile,
  {st_mode=S_IFREG|0755, st_size=0, ...}) = 0
 7385  14:48:02.808277 
 {code}


[jira] [Commented] (HIVE-5233) move hbase storage handler to org.apache.hcatalog package

2013-09-09 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762254#comment-13762254
 ] 

Eugene Koifman commented on HIVE-5233:
--

For API-level backward compatibility we agreed on 2 releases. (I don't know what 
the plan for the storage handler was.)

 move hbase storage handler to org.apache.hcatalog package
 -

 Key: HIVE-5233
 URL: https://issues.apache.org/jira/browse/HIVE-5233
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0

 Attachments: 5233.move, 5233.update, HIVE-5233.patch


 org.apache.hcatalog in hcatalog/storage-handlers/ was erroneously renamed to 
 org.apache.hive.hcatalog in HIVE-4895.  This should be reverted as this 
 module is deprecated and should continue to exist in org.apache.hcatalog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4331) Integrated StorageHandler for Hive and HCat using the HiveStorageHandler

2013-09-09 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762281#comment-13762281
 ] 

Brock Noland commented on HIVE-4331:


It's because the HCatalog tests are flaky.

 Integrated StorageHandler for Hive and HCat using the HiveStorageHandler
 

 Key: HIVE-4331
 URL: https://issues.apache.org/jira/browse/HIVE-4331
 Project: Hive
  Issue Type: Task
  Components: HBase Handler, HCatalog
Affects Versions: 0.12.0
Reporter: Ashutosh Chauhan
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE4331_07-17.patch, hive4331hcatrebase.patch, 
 HIVE-4331.patch, StorageHandlerDesign_HIVE4331.pdf


 1) Deprecate the HCatHBaseStorageHandler and RevisionManager from HCatalog. 
 These will continue to function, but internally they will use the 
 DefaultStorageHandler from Hive. They will be removed in a future release of 
 Hive.
 2) Design a HivePassThroughFormat so that any new StorageHandler in Hive will 
 bypass the HiveOutputFormat (see the illustrative sketch after this list). We 
 will use this class in Hive's HBaseStorageHandler instead of the 
 HiveHBaseTableOutputFormat.
 3) Write new unit tests in HCatalog's storage handler so that systems such 
 as Pig and MapReduce can use Hive's HBaseStorageHandler instead of the 
 HCatHBaseStorageHandler.
 4) Make sure all the old and new unit tests pass without breaking backward 
 compatibility (except known issues as described in the Design Document).
 5) Replace all instances in the HCat source code that point to 
 HCatStorageHandler with the HiveStorageHandler, including the 
 FosterStorageHandler.
 I have attached the design document for the same and will attach a patch to 
 this Jira.
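
As referenced in item 2 above, here is a minimal sketch of the pass-through idea 
(hypothetical property and class names; not the actual HivePassThroughFormat): an 
OutputFormat that simply delegates every call to the underlying format named in 
the job configuration, so a storage handler's own format is used as-is.

{code:title=Pass-through OutputFormat (illustrative only)}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.OutputFormat;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.util.Progressable;
import org.apache.hadoop.util.ReflectionUtils;

// Hypothetical property name and class; not the actual Hive implementation.
public class PassThroughOutputFormatSketch<K, V> implements OutputFormat<K, V> {

  public static final String REAL_OUTPUT_FORMAT = "sketch.real.output.format";

  @SuppressWarnings("unchecked")
  private OutputFormat<K, V> getRealOutputFormat(JobConf job) {
    Class<?> clazz = job.getClass(REAL_OUTPUT_FORMAT, null, OutputFormat.class);
    if (clazz == null) {
      throw new IllegalStateException(REAL_OUTPUT_FORMAT + " is not set");
    }
    return (OutputFormat<K, V>) ReflectionUtils.newInstance(clazz, job);
  }

  // Everything is delegated to the underlying OutputFormat, so it is used
  // as-is instead of being forced through a Hive-specific output format.
  public RecordWriter<K, V> getRecordWriter(FileSystem ignored, JobConf job,
      String name, Progressable progress) throws IOException {
    return getRealOutputFormat(job).getRecordWriter(ignored, job, name, progress);
  }

  public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
    getRealOutputFormat(job).checkOutputSpecs(ignored, job);
  }
}
{code}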

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HIVE-5230) Better error reporting by async threads in HiveServer2

2013-09-09 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-5230 started by Vaibhav Gumashta.

 Better error reporting by async threads in HiveServer2
 --

 Key: HIVE-5230
 URL: https://issues.apache.org/jira/browse/HIVE-5230
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta

 [HIVE-4617|https://issues.apache.org/jira/browse/HIVE-4617] provides support 
 for async execution in HS2. Currently, when a background thread hits an error, 
 the client can only poll for the operation state, and the error with its 
 stack trace is merely logged on the server. It would be useful to provide a 
 richer error response, as the Thrift API does with TStatus (which is 
 constructed while building a Thrift response object).
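
A hedged sketch of the idea, with hypothetical names rather than the HS2 
implementation: the background thread records the failure on the operation 
object, and the status-polling path copies it into a TStatus-like response 
instead of leaving it in the server log.

{code:title=Surfacing a background error to a polling client (illustrative only)}
// Hypothetical names; a minimal stand-in for the HS2 operation/TStatus plumbing.
public class AsyncOperationSketch {

  enum State { RUNNING, FINISHED, ERROR }

  // Minimal stand-in for a TStatus-like response returned to the client.
  static final class StatusResponse {
    final State state;
    final String errorMessage;  // null unless state == ERROR
    final String sqlState;
    final int errorCode;

    StatusResponse(State state, String errorMessage, String sqlState, int errorCode) {
      this.state = state;
      this.errorMessage = errorMessage;
      this.sqlState = sqlState;
      this.errorCode = errorCode;
    }
  }

  private volatile State state = State.RUNNING;
  private volatile Exception failure;

  // Called by the background thread: remember the cause instead of only logging it.
  void fail(Exception e) {
    failure = e;
    state = State.ERROR;
  }

  void finish() {
    state = State.FINISHED;
  }

  // Called when the client polls: copy the stored error into the response.
  StatusResponse getStatus() {
    if (state == State.ERROR && failure != null) {
      return new StatusResponse(State.ERROR, failure.getMessage(), "08S01", 1);
    }
    return new StatusResponse(state, null, null, 0);
  }
}
{code}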

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

