HIVE_OPTS in hiveserver (HIVE-2355)

2013-01-09 Thread Gabriel Reid
Hi,

I was wondering if anyone could give me some info on the status of HIVE-2355. 
This issue (although it's possibly somewhat poorly labelled) covers the fact 
that the contents of the auxlib directory aren't properly passed through as 
hive.aux.jars.path when running hiveserver.

I've just tested the same patch posted on HIVE-2355 (as it's necessary if you 
want to use a custom SerDe or StorageHandler with hiveserver), and it appears 
to work as intended, thereby allowing the use of extensions with hiveserver.
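
(For anyone hitting this in the meantime, a possible per-session workaround is to register the 
jar explicitly before using it. The sketch below is only illustrative: the jar path and SerDe 
class name are made up, and it assumes the jar is readable on the machine running hiveserver.)

{code}
-- Illustrative jar path and SerDe class; register the jar for this session,
-- then reference the custom SerDe as usual.
ADD JAR /opt/hive/auxlib/custom-serde.jar;

CREATE TABLE events (line STRING)
ROW FORMAT SERDE 'com.example.hive.serde.CustomSerDe';
{code}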

Is there anything holding this patch back from being committed, or has it just 
been forgotten (or something else)? If there are any additional issues 
surrounding getting this functionality into hiveserver, I'd be happy to assist.

- Gabriel




[jira] [Assigned] (HIVE-3463) Add CASCADING to MySQL's InnoDB schema

2013-01-09 Thread Alexander Alten-Lorenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Alten-Lorenz reassigned HIVE-3463:


Assignee: (was: Alexander Alten-Lorenz)

 Add CASCADING to MySQL's InnoDB schema
 --

 Key: HIVE-3463
 URL: https://issues.apache.org/jira/browse/HIVE-3463
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Alexander Alten-Lorenz

 Cascading could help to clean up the tables when a FK is deleted.
 http://dev.mysql.com/doc/refman/5.5/en/innodb-foreign-key-constraints.html
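 (For illustration only, the general shape of such a constraint is sketched below. The table,
 column, and constraint names are made up and are not the actual metastore DDL.)
 {code}
 -- Made-up names; shows the ON DELETE CASCADE form referred to above
 ALTER TABLE child_table
   ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
   REFERENCES parent_table (parent_id)
   ON DELETE CASCADE;
 {code}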

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3463) Add CASCADING to MySQL's InnoDB schema

2013-01-09 Thread Alexander Alten-Lorenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547764#comment-13547764
 ] 

Alexander Alten-Lorenz commented on HIVE-3463:
--

Since the switch to a new DN version needs a large amount of work, I am going ahead and marking 
this as Unassigned.

 Add CASCADING to MySQL's InnoDB schema
 --

 Key: HIVE-3463
 URL: https://issues.apache.org/jira/browse/HIVE-3463
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Alexander Alten-Lorenz

 Cascading could help to clean up the tables when a FK is deleted.
 http://dev.mysql.com/doc/refman/5.5/en/innodb-foreign-key-constraints.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3784) de-emphasize mapjoin hint

2013-01-09 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547768#comment-13547768
 ] 

Ashutosh Chauhan commented on HIVE-3784:


Namit,
I very much like this work: the diffs are almost all red, which is good since it 
removes a lot of unnecessary complexity from the codebase, but Vinod's point is 
worth considering. I also see that you have moved mapjoin_subquery.q, 
mapjoin_mapjoin.q, and similar test cases from positive to negative. Do you have a 
proposal for what could be done to preserve the optimization of pipelining 
multiple map-join operators (even on different keys) in a single mapper? 

 de-emphasize mapjoin hint
 -

 Key: HIVE-3784
 URL: https://issues.apache.org/jira/browse/HIVE-3784
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3784.1.patch, hive.3784.2.patch, hive.3784.3.patch, 
 hive.3784.4.patch, hive.3784.5.patch


 hive.auto.convert.join has been around for a long time, and is pretty stable.
 When mapjoin hint was created, the above parameter did not exist.
 The only reason for the user to specify a mapjoin currently is if they want
 it to be converted to a bucketed-mapjoin or a sort-merge bucketed mapjoin.
 Eventually, that should also go away, but that may take some time to 
 stabilize.
 There are many rules in SemanticAnalyzer to handle the following trees:
 ReduceSink -> MapJoin
 Union -> MapJoin
 MapJoin -> MapJoin
 This should not be supported anymore. In any of the above scenarios, the
 user can get the mapjoin behavior by setting hive.auto.convert.join to true
 and not specifying the hint. This will simplify the code a lot.
 What does everyone think ?
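 (For illustration, with made-up table names, the two forms being compared are sketched below.)
 {code}
 -- Made-up table names; the hint form this issue proposes to de-emphasize
 SELECT /*+ MAPJOIN(d) */ f.key, d.name
 FROM fact f JOIN dim d ON (f.key = d.key);

 -- Equivalent behavior without the hint, relying on automatic conversion
 SET hive.auto.convert.join=true;
 SELECT f.key, d.name
 FROM fact f JOIN dim d ON (f.key = d.key);
 {code}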

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3773) Share input scan by unions across multiple queries

2013-01-09 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547772#comment-13547772
 ] 

Ashutosh Chauhan commented on HIVE-3773:


Actually, HIVE-2206 won't optimize this query as it is, if I am reading that 
patch correctly. But I think you will also have to implement (or some variant of) 
the concept introduced in HIVE-2206 of having multiple pipelines of operators 
in a single Map (or Reduce) task and tracking them via a tag byte. I think it's 
worth looking at that patch to see if you can reuse the code from it. The worst 
thing to have is the same concept implemented via two different mechanisms for 
two different optimization scenarios.

 Share input scan by unions across multiple queries
 --

 Key: HIVE-3773
 URL: https://issues.apache.org/jira/browse/HIVE-3773
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Gang Tim Liu

 Consider a query like:
 select * from
 (
   select key, 1 as value, count(1) from src group by key
 union all
   select 1 as key, value, count(1) from src group by value
 union all
   select key, value, count(1) from src group by key, value
 ) s;
 src is currently scanned multiple times (once per sub-query).
 This should be treated like a multi-table insert by the optimizer.
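 (For comparison, the multi-table-insert shape that shares a single scan of src would look
 roughly like the sketch below; the output tables are hypothetical.)
 {code}
 -- Hypothetical output tables; a single scan of src feeds all three inserts
 FROM src
 INSERT OVERWRITE TABLE out1 SELECT key, 1 AS value, count(1) GROUP BY key
 INSERT OVERWRITE TABLE out2 SELECT 1 AS key, value, count(1) GROUP BY value
 INSERT OVERWRITE TABLE out3 SELECT key, value, count(1) GROUP BY key, value;
 {code}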

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3803) explain dependency should show the dependencies hierarchically in presence of views

2013-01-09 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3803:
-

Status: Patch Available  (was: Open)

The entry https://reviews.facebook.net/D7377 only contains the code changes, since 
the entire patch cannot be loaded via Phabricator.

 explain dependency should show the dependencies hierarchically in presence of 
 views
 ---

 Key: HIVE-3803
 URL: https://issues.apache.org/jira/browse/HIVE-3803
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3803.10.patch, hive.3803.11.patch, 
 hive.3803.1.patch, hive.3803.2.patch, hive.3803.3.patch, hive.3803.4.patch, 
 hive.3803.5.patch, hive.3803.6.patch, hive.3803.7.patch, hive.3803.8.patch, 
 hive.3803.9.patch


 It should also include tables whose partitions are being accessed
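 (For reference, a minimal sketch of the command in question, using made-up view and table names;
 the expectation per this issue is that the output lists the view and its underlying tables
 hierarchically.)
 {code}
 -- Made-up names
 CREATE VIEW v1 AS SELECT key, value FROM src;
 EXPLAIN DEPENDENCY SELECT * FROM v1;
 {code}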

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3803) explain dependency should show the dependencies hierarchically in presence of views

2013-01-09 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547776#comment-13547776
 ] 

Namit Jain commented on HIVE-3803:
--

The latest patch 
https://issues.apache.org/jira/secure/attachment/12563885/hive.3803.11.patch 
contains all the changes

 explain dependency should show the dependencies hierarchically in presence of 
 views
 ---

 Key: HIVE-3803
 URL: https://issues.apache.org/jira/browse/HIVE-3803
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3803.10.patch, hive.3803.11.patch, 
 hive.3803.1.patch, hive.3803.2.patch, hive.3803.3.patch, hive.3803.4.patch, 
 hive.3803.5.patch, hive.3803.6.patch, hive.3803.7.patch, hive.3803.8.patch, 
 hive.3803.9.patch


 It should also include tables whose partitions are being accessed

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3784) de-emphasize mapjoin hint

2013-01-09 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3784:
-

Status: Open  (was: Patch Available)

Let me think about it - this is like a star-schema join.

 de-emphasize mapjoin hint
 -

 Key: HIVE-3784
 URL: https://issues.apache.org/jira/browse/HIVE-3784
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3784.1.patch, hive.3784.2.patch, hive.3784.3.patch, 
 hive.3784.4.patch, hive.3784.5.patch


 hive.auto.convert.join has been around for a long time, and is pretty stable.
 When mapjoin hint was created, the above parameter did not exist.
 The only reason for the user to specify a mapjoin currently is if they want
 it to be converted to a bucketed-mapjoin or a sort-merge bucketed mapjoin.
 Eventually, that should also go away, but that may take some time to 
 stabilize.
 There are many rules in SemanticAnalyzer to handle the following trees:
 ReduceSink -> MapJoin
 Union -> MapJoin
 MapJoin -> MapJoin
 This should not be supported anymore. In any of the above scenarios, the
 user can get the mapjoin behavior by setting hive.auto.convert.join to true
 and not specifying the hint. This will simplify the code a lot.
 What does everyone think ?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3853) UDF unix_timestamp is deterministic if an argument is given, but it treated as non-deterministic preventing PPD

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547785#comment-13547785
 ] 

Hudson commented on HIVE-3853:
--

Integrated in Hive-trunk-h0.21 #1902 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1902/])
HIVE-3853 UDF unix_timestamp is deterministic if an argument is given, but 
it treated as non-deterministic preventing PPD (Navis via namit) (Revision 
1430429)

 Result = SUCCESS
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1430429
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToUnixTimeStamp.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUnixTimeStamp.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_to_unix_timestamp.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_unix_timestamp.q
* /hive/trunk/ql/src/test/results/clientpositive/show_functions.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_to_unix_timestamp.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_unix_timestamp.q.out


 UDF unix_timestamp is deterministic if an argument is given, but it treated 
 as non-deterministic preventing PPD
 ---

 Key: HIVE-3853
 URL: https://issues.apache.org/jira/browse/HIVE-3853
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: Navis
Assignee: Navis
Priority: Trivial
  Labels: udf
 Fix For: 0.11.0

 Attachments: HIVE-3853.D7767.1.patch, HIVE-3853.D7767.2.patch


 unix_timestamp is declared as a non-deterministic function. But if the user 
 provides an argument, it produces a deterministic result and becomes eligible for PPD.
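 (For example, with an illustrative table and column, a filter like the one below uses
 unix_timestamp with an argument, so it is deterministic and can be pushed down.)
 {code}
 -- Illustrative table/column names
 SELECT *
 FROM logs
 WHERE unix_timestamp(event_time) > 1357689600;
 {code}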

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3789) Patch HIVE-3648 causing the majority of unit tests to fail on branch 0.9

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547786#comment-13547786
 ] 

Hudson commented on HIVE-3789:
--

Integrated in Hive-trunk-h0.21 #1902 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1902/])
HIVE-3789 : Patch 3648 causing the majority of unit tests to fail on branch 
0.9 (Arup Malakar via Ashutosh Chauhan) (Revision 1430420)

 Result = SUCCESS
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1430420
Files : 
* /hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyFileSystem.java


 Patch HIVE-3648 causing the majority of unit tests to fail on branch 0.9
 

 Key: HIVE-3789
 URL: https://issues.apache.org/jira/browse/HIVE-3789
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Tests
Affects Versions: 0.9.0, 0.10.0
 Environment: Hadoop 0.23.5, JDK 1.6.0_31
Reporter: Chris Drome
Assignee: Arup Malakar
 Fix For: 0.11.0

 Attachments: HIVE-3789.branch-0.9_1.patch, 
 HIVE-3789.branch-0.9_2.patch, HIVE-3789.trunk.1.patch, HIVE-3789.trunk.2.patch


 Rolling back to before this patch shows that the unit tests pass; 
 after the patch, the majority of the unit tests fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-3872) MAP JOIN for VIEW throws NULL pointer exception error

2013-01-09 Thread Santosh Achhra (JIRA)
Santosh Achhra created HIVE-3872:


 Summary: MAP JOIN for VIEW throws NULL pointer exception error
 Key: HIVE-3872
 URL: https://issues.apache.org/jira/browse/HIVE-3872
 Project: Hive
  Issue Type: Bug
  Components: Views
Affects Versions: 0.9.0
Reporter: Santosh Achhra
Priority: Critical


I have created a view  as shown below. 

CREATE VIEW V1 AS
select /*+ MAPJOIN(t1) ,MAPJOIN(t2)  */ t1.f1, t1.f2, t1.f3, t1.f4, t2.f1, 
t2.f2, t2.f3 from TABLE1 t1 join TABLE t2 on ( t1.f2= t2.f2 and t1.f3 = t2.f3 
and t1.f4 = t2.f4 ) group by t1.f1, t1.f2, t1.f3, t1.f4, t2.f1, t2.f2, t2.f3

The view gets created successfully; however, when I execute the below-mentioned SQL (or any 
SQL on the view) I get a NullPointerException error:

hive> select count (*) from V1;
FAILED: NullPointerException null
hive>

Is there anything wrong with the view creation ?

Next I created the view without MAPJOIN hints:

CREATE VIEW V1 AS
select  t1.f1, t1.f2, t1.f3, t1.f4, t2.f1, t2.f2, t2.f3 from TABLE1 t1 join 
TABLE t2 on ( t1.f2= t2.f2 and t1.f3 = t2.f3 and t1.f4 = t2.f4 ) group by 
t1.f1, t1.f2, t1.f3, t1.f4, t2.f1, t2.f2, t2.f3

Before executing the select SQL I execute: set hive.auto.convert.join=true;

I am getting the below-mentioned warnings:
java.lang.InstantiationException: org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
Continuing ...
java.lang.RuntimeException: failed to evaluate: unbound=Class.new();
Continuing ...


And I see from the log that a total of 5 MapReduce jobs are started; however, when I don't 
set hive.auto.convert.join to true, I see only 3 MapReduce jobs getting invoked.
Total MapReduce jobs = 5
Ended Job = 1116112419, job is filtered out (removed at runtime).
Ended Job = -33256989, job is filtered out (removed at runtime).
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use 
org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2820) Invalid tag is used for MapJoinProcessor

2013-01-09 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547817#comment-13547817
 ] 

Ashutosh Chauhan commented on HIVE-2820:


Verified that the bug still exists on trunk. Navis, sorry for letting this patch 
go stale. Please update the JIRA and I will take a look at it.

 Invalid tag is used for MapJoinProcessor
 

 Key: HIVE-2820
 URL: https://issues.apache.org/jira/browse/HIVE-2820
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
 Environment: ubuntu
Reporter: Navis
Assignee: Navis
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2820.D1935.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2820.D1935.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2820.D1935.3.patch


 Testing HIVE-2810, I've found that tag and alias are used in a very confusing 
 manner. For example, the query below fails:
 {code}
 hive> set hive.auto.convert.join=true;
  
 hive> select /*+ STREAMTABLE(a) */ * from myinput1 a join myinput1 b on 
 a.key=b.key join myinput1 c on a.key=c.key;
 Total MapReduce jobs = 4
 Ended Job = 1667415037, job is filtered out (removed at runtime).
 Ended Job = 1739566906, job is filtered out (removed at runtime).
 Ended Job = 1113337780, job is filtered out (removed at runtime).
 12/02/24 10:27:14 WARN conf.HiveConf: DEPRECATED: Ignoring hive-default.xml 
 found on the CLASSPATH at /home/navis/hive/conf/hive-default.xml
 Execution log at: 
 /tmp/navis/navis_20120224102727_cafe0d8d-9b21-441d-bd4e-b83303b31cdc.log
 2012-02-24 10:27:14   Starting to launch local task to process map join;  
 maximum memory = 932118528
 java.lang.NullPointerException
   at 
 org.apache.hadoop.hive.ql.exec.HashTableSinkOperator.processOp(HashTableSinkOperator.java:312)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
   at 
 org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:83)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at 
 org.apache.hadoop.hive.ql.exec.MapredLocalTask.startForward(MapredLocalTask.java:325)
   at 
 org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:272)
   at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:685)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
 Execution failed with exit status: 2
 Obtaining error information
 {code}
 The failed task has a plan which doesn't make sense.
 {noformat}
   Stage: Stage-8
 Map Reduce Local Work
   Alias -> Map Local Tables:
 b 
   Fetch Operator
 limit: -1
 c 
   Fetch Operator
 limit: -1
   Alias -> Map Local Operator Tree:
 b 
   TableScan
 alias: b
 HashTable Sink Operator
   condition expressions:
 0 {key} {value}
 1 {key} {value}
 2 {key} {value}
   handleSkewJoin: false
   keys:
 0 [Column[key]]
 1 [Column[key]]
 2 [Column[key]]
   Position of Big Table: 0
 c 
   TableScan
 alias: c
 Map Join Operator
   condition map:
Inner Join 0 to 1
Inner Join 0 to 2
   condition expressions:
 0 {key} {value}
 1 {key} {value}
 2 {key} {value}
   handleSkewJoin: false
   keys:
 0 [Column[key]]
 1 [Column[key]]
 2 [Column[key]]
   outputColumnNames: _col0, _col1, _col4, _col5, _col8, _col9
   Position of Big Table: 0
   Select Operator
 expressions:
   expr: _col0
   type: int
   expr: _col1
   type: int
   expr: _col4
   type: int
   expr: _col5
   type: int
   expr: _col8
   type: int
   expr: _col9
   type: int
 outputColumnNames: _col0, _col1, _col2, _col3, 

[jira] [Updated] (HIVE-3807) Hive authorization should use short username when Kerberos authentication

2013-01-09 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HIVE-3807:


Attachment: HIVE-3807.patch

 Hive authorization should use short username when Kerberos authentication
 -

 Key: HIVE-3807
 URL: https://issues.apache.org/jira/browse/HIVE-3807
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Affects Versions: 0.9.0
Reporter: Kai Zheng
 Attachments: HIVE-3807.patch


 Currently, when the authentication method is Kerberos, Hive authorization uses the user's 
 full principal name as the privilege principal; for example, it uses j...@example.com 
 instead of john.
 It should use the short name instead. The benefits:
 1. Consistency. Hadoop, HBase, etc. all use the short name in related 
 ACLs or authorizations; for Hive authorization to work well with them, it 
 should do the same.
 2. Convenience. It's very inconvenient to use the lengthy Kerberos 
 principal name when granting or revoking privileges via the Hive CLI (see the sketch below).
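 A sketch of the kind of statement affected (user and table names are illustrative):
 {code}
 -- Illustrative names; with the short name the principal is simply 'john'
 -- rather than the full Kerberos principal
 GRANT SELECT ON TABLE sales TO USER john;
 REVOKE SELECT ON TABLE sales FROM USER john;
 {code}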

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3807) Hive authorization should use short username when Kerberos authentication

2013-01-09 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HIVE-3807:


Fix Version/s: 0.9.0
   Status: Patch Available  (was: Open)

Simple fix. Thanks for review.

 Hive authorization should use short username when Kerberos authentication
 -

 Key: HIVE-3807
 URL: https://issues.apache.org/jira/browse/HIVE-3807
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Affects Versions: 0.9.0
Reporter: Kai Zheng
 Fix For: 0.9.0

 Attachments: HIVE-3807.patch


 Currently, when the authentication method is Kerberos, Hive authorization uses the user's 
 full principal name as the privilege principal; for example, it uses j...@example.com 
 instead of john.
 It should use the short name instead. The benefits:
 1. Consistency. Hadoop, HBase, etc. all use the short name in related 
 ACLs or authorizations; for Hive authorization to work well with them, it 
 should do the same.
 2. Convenience. It's very inconvenient to use the lengthy Kerberos 
 principal name when granting or revoking privileges via the Hive CLI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2839) Filters on outer join with mapjoin hint is not applied correctly

2013-01-09 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547819#comment-13547819
 ] 

Ashutosh Chauhan commented on HIVE-2839:


This patch got stale as well. Navis, please refresh this patch and I will take 
a look.

 Filters on outer join with mapjoin hint is not applied correctly
 

 Key: HIVE-2839
 URL: https://issues.apache.org/jira/browse/HIVE-2839
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2839.D2079.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2839.D2079.2.patch


 Testing HIVE-2820, I've found that some queries with the mapjoin hint cause exceptions.
 {code}
 SELECT /*+ MAPJOIN(a) */ * FROM src a RIGHT OUTER JOIN src b on a.key=b.key 
 AND true limit 10;
 FAILED: Hive Internal Error: 
 java.lang.ClassCastException(org.apache.hadoop.hive.ql.plan.ExprNodeConstantDesc
  cannot be cast to org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc)
 java.lang.ClassCastException: 
 org.apache.hadoop.hive.ql.plan.ExprNodeConstantDesc cannot be cast to 
 org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.convertMapJoin(MapJoinProcessor.java:363)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.generateMapJoinOperator(MapJoinProcessor.java:483)
   at 
 org.apache.hadoop.hive.ql.optimizer.MapJoinProcessor.transform(MapJoinProcessor.java:689)
   at 
 org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:87)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7519)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:891)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
 {code}
 and 
 {code}
 SELECT /*+ MAPJOIN(a) */ * FROM src a RIGHT OUTER JOIN src b on a.key=b.key 
 AND b.key * 10 < '1000' limit 10;
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:161)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:391)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:416)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
   at org.apache.hadoop.mapred.Child.main(Child.java:264)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:198)
   at 
 org.apache.hadoop.hive.ql.exec.MapJoinOperator.cleanUpInputFileChangedOp(MapJoinOperator.java:212)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1321)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1325)
   at 
 org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1325)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:495)
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:143)
   ... 8 more
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2279) Implement sort_array UDF

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547836#comment-13547836
 ] 

Hudson commented on HIVE-2279:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2279. Implement sort(array) UDF (Zhenxiao Luo via cws) (Revision 
1234146)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1234146
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFSortArray.java
* /hive/trunk/ql/src/test/queries/clientnegative/udf_sort_array_wrong1.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_sort_array_wrong2.q
* /hive/trunk/ql/src/test/queries/clientnegative/udf_sort_array_wrong3.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_sort_array.q
* /hive/trunk/ql/src/test/results/clientnegative/udf_sort_array_wrong1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_sort_array_wrong2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/udf_sort_array_wrong3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/show_functions.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_sort_array.q.out


 Implement sort_array UDF
 

 Key: HIVE-2279
 URL: https://issues.apache.org/jira/browse/HIVE-2279
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2279.D1059.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2279.D1101.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2279.D1107.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2279.D1125.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2279.D1143.1.patch, HIVE-2279.D1143.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3106) Add option to make multi inserts more atomic

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547835#comment-13547835
 ] 

Hudson commented on HIVE-3106:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3106 Add option to make multi inserts more atomic
(Kevin Wilfong via namit) (Revision 1350792)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1350792
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/ql/if/queryplan.thrift
* /hive/trunk/ql/src/gen/thrift/gen-cpp/queryplan_types.cpp
* /hive/trunk/ql/src/gen/thrift/gen-cpp/queryplan_types.h
* 
/hive/trunk/ql/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/ql/plan/api/StageType.java
* /hive/trunk/ql/src/gen/thrift/gen-php/queryplan/queryplan_types.php
* /hive/trunk/ql/src/gen/thrift/gen-py/queryplan/ttypes.py
* /hive/trunk/ql/src/gen/thrift/gen-rb/queryplan_types.rb
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DependencyCollectionTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TaskFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRProcContext.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/DependencyCollectionWork.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/multi_insert_move_tasks_share_dependencies.q
* 
/hive/trunk/ql/src/test/results/clientpositive/multi_insert_move_tasks_share_dependencies.q.out


 Add option to make multi inserts more atomic
 

 Key: HIVE-3106
 URL: https://issues.apache.org/jira/browse/HIVE-3106
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.10.0

 Attachments: HIVE-3106.1.patch.txt, HIVE-3106.2.patch.txt


 Currently, with multi-insert queries, as soon as the output of one of the inserts 
 is ready, the move task associated with that insert is run, creating the 
 table/partition.  However, if concurrency is enabled, the lock on this 
 table/partition is not released until the entire query finishes, which can be 
 much later.
 This causes issues if, for example, a user is waiting for an output of the 
 multi-insert query which is created long before the other outputs, and 
 checking for its existence using the metastore's Thrift methods 
 (get_table/get_partition).  In that case, the user will run their query 
 which uses the output, and it will experience a timeout trying to acquire the 
 lock on the table/partition.
 If all the move tasks depend on the parents of all other move tasks, the 
 output creation will be much closer to atomic, relieving this problem.
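 (A sketch of the scenario with made-up table names: the first output needs no aggregation and
 is typically ready long before the second, yet the lock on it is held until the whole query
 finishes.)
 {code}
 -- Made-up table names; illustrates one output finishing much earlier than the other
 FROM src
 INSERT OVERWRITE TABLE out_fast SELECT key, value
 INSERT OVERWRITE TABLE out_slow SELECT key, count(1) GROUP BY key;
 {code}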

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3109) metastore state not cleared

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547837#comment-13547837
 ] 

Hudson commented on HIVE-3109:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3112 clear hive.metastore.partition.inherit.table.properties till 
HIVE-3109 is fixed (njain via kevinwilfong) (Revision 1349096)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1349096
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props.q
* 
/hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_with_star.q
* /hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_with_star.q.out


 metastore state not cleared
 ---

 Key: HIVE-3109
 URL: https://issues.apache.org/jira/browse/HIVE-3109
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Ashutosh Chauhan

 When some of the tests are run in a particular order, random bugs are encountered.
 ant test -Dtestcase=TestCliDriver -Dqfile=part_inherit_tbl_props.q,stats1.q
 leads to an error in stats1.q
 We ran into this error as part of parallel testing (HIVE-3085).
 As part of HIVE-3085, this will be fixed temporarily by clearing
 hive.metastore.partition.inherit.table.properties at the end of the test.
 But, in general, any property set in one .q file should not affect anything
 in other tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2344) filter is removed due to regression of HIVE-1538

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547838#comment-13547838
 ] 

Hudson commented on HIVE-2344:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2791: filter is still removed due to regression of HIVE-1538 although 
HIVE-2344 (binlijin via hashutosh) (Revision 1291916)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1291916
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* /hive/trunk/ql/src/test/queries/clientpositive/ppd2.q
* /hive/trunk/ql/src/test/results/clientpositive/ppd2.q.out


 filter is removed due to regression of HIVE-1538
 

 Key: HIVE-2344
 URL: https://issues.apache.org/jira/browse/HIVE-2344
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: He Yongqiang
Assignee: Amareshwari Sriramadasu
 Fix For: 0.8.0

 Attachments: hive-patch-2344-2.txt, hive-patch-2344.txt, 
 ppd_udf_col.q.out.txt


  select * from 
  (
  select type_bucket, randum123
  from (SELECT *, cast(rand() as double) AS randum123 FROM tbl where ds = ...) a
  where randum123 <= 0.1) s where s.randum123 > 0.1 limit 20;
 This is returning results...
 and 
  explain
  select type_bucket, randum123
  from (SELECT *, cast(rand() as double) AS randum123 FROM tbl where ds = ...) a
  where randum123 <= 0.1
 shows that there is no filter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2789) query_properties.q contains non-deterministic queries

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547839#comment-13547839
 ] 

Hudson commented on HIVE-2789:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2789. query_properties.q contains non-deterministic queries (Zhenxiao 
Luo via cws) (Revision 1370982)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1370982
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/query_properties.q
* /hive/trunk/ql/src/test/results/clientpositive/query_properties.q.out


 query_properties.q contains non-deterministic queries
 -

 Key: HIVE-2789
 URL: https://issues.apache.org/jira/browse/HIVE-2789
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2789.D1647.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2789.D1647.2.patch, HIVE-2789.3.patch.txt, 
 HIVE-2789.D1647.1.patch, HIVE-2789.D1647.2.patch


 query_properties.q test failure:
 [junit] Begin query: query_properties.q
 [junit] 12/01/23 16:59:13 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:13 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:18 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:18 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:22 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:22 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:27 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:27 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:32 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:32 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:36 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:36 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:41 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:41 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:46 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:46 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:50 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:50 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:55 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:55 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 16:59:59 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 16:59:59 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 17:00:04 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 17:00:04 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 17:00:08 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 17:00:08 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] 12/01/23 17:00:13 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 17:00:13 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use 

[jira] [Commented] (HIVE-2676) The row count that loaded to a table may not right

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547840#comment-13547840
 ] 

Hudson commented on HIVE-2676:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2676 The row count that loaded to a table may not right
(binlijin via namit) (Revision 1307691)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1307691
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java


 The row count that loaded to a table may not right   
 -

 Key: HIVE-2676
 URL: https://issues.apache.org/jira/browse/HIVE-2676
 Project: Hive
  Issue Type: Improvement
Reporter: binlijin
Assignee: binlijin
Priority: Minor
  Labels: patch
 Fix For: 0.9.0

 Attachments: HIVE-2676.patch


 create table tablename as SELECT ***
 At the end Hive will print a number that shows how many rows were loaded into 
 tablename, but sometimes the number is not right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3009) do authorization for all metadata operations

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547841#comment-13547841
 ] 

Hudson commented on HIVE-3009:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3009 Memory leak in TUGIContainingTransport (Ashutosh Chauhan via egc) 
(Revision 1352260)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1352260
Files : 
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/thrift/TUGIContainingTransport.java


 do authorization for all metadata operations
 

 Key: HIVE-3009
 URL: https://issues.apache.org/jira/browse/HIVE-3009
 Project: Hive
  Issue Type: Bug
  Components: Authorization, Metastore
Reporter: Thejas M Nair
Assignee: Vandana Ayyalasomayajula

 Most of the metadata read operations and some write operations are not 
 checking for authorization. 
 See org.apache.hadoop.hive.ql.plan.HiveOperation . Operations such as 
 DESCTABLE and DROPDATABASE have null for required privileges. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3014) Fix metastore test failures caused by HIVE-2757

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547842#comment-13547842
 ] 

Hudson commented on HIVE-3014:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3014 [jira] Fix metastore test failures caused by HIVE-2757
(Zhenxiao Luo via Carl Steinbach)

Summary: HIVE-3014: Fix metastore test failures caused by HIVE-2757

Test Plan: EMPTY

Reviewers: JIRA, cwsteinbach

Reviewed By: cwsteinbach

Differential Revision: https://reviews.facebook.net/D3213 (Revision 1339004)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1339004
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java


 Fix metastore test failures caused by HIVE-2757
 ---

 Key: HIVE-3014
 URL: https://issues.apache.org/jira/browse/HIVE-3014
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-3014.D3213.1.patch, 
 HIVE-3014.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3839) adding .gitattributes file for normalizing line endings during cross platform development

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547843#comment-13547843
 ] 

Hudson commented on HIVE-3839:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3839 : adding .gitattributes file for normalizing line endings during 
cross platform development (Thejas Nair via Ashutosh Chauhan) (Revision 1426691)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1426691
Files : 
* /hive/trunk/.gitattributes


 adding .gitattributes file for normalizing line endings during cross platform 
 development
 -

 Key: HIVE-3839
 URL: https://issues.apache.org/jira/browse/HIVE-3839
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.11.0

 Attachments: HIVE-3839.1.patch


 Along the lines of HADOOP-8912.
 Many developers clone the apache/hive git repository to make changes.
 Adding a .gitattributes file will help in doing the right thing while 
 checking out files on Windows (e.g., adding \r\n on checkout of most text 
 files, preserving \n in the case of *.sh files), and replacing \r\n with \n 
 while checking code back into a git repository.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3013) TestCliDriver cannot be debugged with eclipse since hadoop_home is set incorrectly

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547844#comment-13547844
 ] 

Hudson commented on HIVE-3013:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3013 [jira] TestCliDriver cannot be debugged with eclipse since 
hadoop_home is set wrongly

Summary:
HIVE-3013

fix typo

cp Fix

cp

Test Plan: EMPTY

Reviewers: JIRA, njain, kevinwilfong

Differential Revision: https://reviews.facebook.net/D3555 (Revision 1348995)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1348995
Files : 
* /hive/trunk/build.properties
* /hive/trunk/build.xml
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/eclipse-templates/HiveCLI.launchtemplate
* /hive/trunk/eclipse-templates/HiveServer.launchtemplate
* /hive/trunk/eclipse-templates/TestCliDriver.launchtemplate
* /hive/trunk/eclipse-templates/TestEmbeddedHiveMetaStore.launchtemplate
* /hive/trunk/eclipse-templates/TestHBaseCliDriver.launchtemplate
* /hive/trunk/eclipse-templates/TestHive.launchtemplate
* /hive/trunk/eclipse-templates/TestHiveMetaStoreChecker.launchtemplate
* /hive/trunk/eclipse-templates/TestJdbc.launchtemplate
* /hive/trunk/eclipse-templates/TestMTQueries.launchtemplate
* /hive/trunk/eclipse-templates/TestRemoteHiveMetaStore.launchtemplate
* /hive/trunk/eclipse-templates/TestTruncate.launchtemplate


 TestCliDriver cannot be debugged with eclipse since hadoop_home is set 
 incorrectly
 --

 Key: HIVE-3013
 URL: https://issues.apache.org/jira/browse/HIVE-3013
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Namit Jain
Assignee: Carl Steinbach
 Fix For: 0.10.0

 Attachments: hive.3013.1.patch, HIVE-3013.2.patch.txt, 
 HIVE-3013.3.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2791) filter is still removed due to regression of HIVE-1538 although HIVE-2344

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547845#comment-13547845
 ] 

Hudson commented on HIVE-2791:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2791: filter is still removed due to regression of HIVE-1538 although 
HIVE-2344 (binlijin via hashutosh) (Revision 1291916)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1291916
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* /hive/trunk/ql/src/test/queries/clientpositive/ppd2.q
* /hive/trunk/ql/src/test/results/clientpositive/ppd2.q.out


 filter is still removed due to regression of HIVE-1538 although HIVE-2344
 --

 Key: HIVE-2791
 URL: https://issues.apache.org/jira/browse/HIVE-2791
 Project: Hive
  Issue Type: Bug
Reporter: binlijin
Assignee: binlijin
 Fix For: 0.9.0

 Attachments: HIVE-2791.2.patch, HIVE-2791.patch, ppd-dropped-filter.q




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2792) SUBSTR(CAST(string AS BINARY)) produces unexpected results

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547846#comment-13547846
 ] 

Hudson commented on HIVE-2792:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2792: SUBSTR(CAST(string AS BINARY)) produces unexpected results 
(navis via hashutosh) (Revision 1291633)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1291633
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFSubstr.java
* /hive/trunk/ql/src/test/queries/clientpositive/ba_table_udfs.q
* /hive/trunk/ql/src/test/queries/clientpositive/udf_substr.q
* /hive/trunk/ql/src/test/results/clientpositive/ba_table_udfs.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_substr.q.out


 SUBSTR(CAST(string AS BINARY)) produces unexpected results
 

 Key: HIVE-2792
 URL: https://issues.apache.org/jira/browse/HIVE-2792
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.8.0, 0.8.1
Reporter: Carl Steinbach
Assignee: Navis
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2792.D1797.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2792.D1797.2.patch, HIVE-2792.D1797.2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2793) Disable loadpart_err.q on 0.23

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547847#comment-13547847
 ] 

Hudson commented on HIVE-2793:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2793 [jira] Disable loadpart_err.q on 0.23

Summary: HIVE-2793. Add 0.23 to list of excluded Hadoop versions for
loadpart_err.q

Test Plan: EMPTY

Reviewers: JIRA, jsichi, ashutoshc

Reviewed By: ashutoshc

CC: ashutoshc

Differential Revision: https://reviews.facebook.net/D1665 (Revision 1244311)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1244311
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/loadpart_err.q


 Disable loadpart_err.q on 0.23
 --

 Key: HIVE-2793
 URL: https://issues.apache.org/jira/browse/HIVE-2793
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2793.D1665.1.patch, 
 HIVE-2793.D1665.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2794) Aggregations without grouping should return NULL when applied to partitioning column of a partitionless table

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547848#comment-13547848
 ] 

Hudson commented on HIVE-2794:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2794 : Aggregations without grouping should return NULL when applied 
to partitioning column of a partitionless table (Zhenxiao Luo via Ashutosh 
Chauhan) (Revision 1418848)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1418848
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/partInit.q
* /hive/trunk/ql/src/test/results/clientpositive/metadataonly1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/partInit.q.out


 Aggregations without grouping should return NULL when applied to partitioning 
 column of a partitionless table
 -

 Key: HIVE-2794
 URL: https://issues.apache.org/jira/browse/HIVE-2794
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
 Fix For: 0.11.0

 Attachments: HIVE-2794.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2795) View partitions do not have a storage descriptor

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547849#comment-13547849
 ] 

Hudson commented on HIVE-2795:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2966 :Revert HIVE-2795  (Thejas Nair via Ashutosh Chauhan) (Revision 
1328568)
HIVE-2795 View partitions do not have a storage descriptor
(Kevin Wilfong via namit) (Revision 1242682)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1328568
Files : 
* /hive/trunk/metastore/scripts/upgrade/001-HIVE-2795.update_view_partitions.py
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/MetaDataFormatUtils.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/describe_formatted_view_partitioned_json.q
* 
/hive/trunk/ql/src/test/results/clientpositive/describe_formatted_view_partitioned.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/describe_formatted_view_partitioned_json.q.out

namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1242682
Files : 
* /hive/trunk/metastore/scripts/upgrade/001-HIVE-2795.update_view_partitions.py
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/describe_formatted_view_partitioned.q
* 
/hive/trunk/ql/src/test/results/clientpositive/describe_formatted_view_partitioned.q.out


 View partitions do not have a storage descriptor
 

 Key: HIVE-2795
 URL: https://issues.apache.org/jira/browse/HIVE-2795
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Namit Jain
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2795.D1683.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2795.D1683.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2795.D1683.3.patch


 Besides being an inconsistency, it causes errors.
 Calling describe formatted on a view partition throws an exception
 java.lang.NullPointerException
   at org.apache.hadoop.hive.ql.metadata.Partition.getCols(Partition.java:505) 
  
   at org.apache.hadoop.hive.ql.exec.DDLTask.describeTable(DDLTask.java:2570)
 because it does not have a column descriptor, which is part of the storage 
 descriptor.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2682) Clean-up logs

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547851#comment-13547851
 ] 

Hudson commented on HIVE-2682:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2682: Clean-up logs (Rajat Goel via Ashutosh Chauhan) (Revision 
1230379)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1230379
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsAggregator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsPublisher.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java


 Clean-up logs
 -

 Key: HIVE-2682
 URL: https://issues.apache.org/jira/browse/HIVE-2682
 Project: Hive
  Issue Type: Wish
  Components: Logging
Affects Versions: 0.8.1, 0.9.0
Reporter: Rajat Goel
Assignee: Rajat Goel
Priority: Trivial
  Labels: logging
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2682.D1035.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2682.D1035.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2682.D1035.3.patch, hive-2682.patch

   Original Estimate: 24h
  Remaining Estimate: 24h

 Just wanted to clean up some logs being printed at the wrong log level -
 1. org.apache.hadoop.hive.ql.exec.CommonJoinOperator prints "table 0 has 1000 
 rows for join key [...]" as WARNING. Is it really that? 
 2. org.apache.hadoop.hive.ql.exec.GroupByOperator prints "Hash Table 
 completed flushed" and "Begin Hash Table flush at close: size = 21" as 
 WARNING. It shouldn't be.
 3. org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher prints "Warning. 
 Invalid statistic." which looks fishy.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3012) hive custom scripts do not work well if the data contains new lines

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547850#comment-13547850
 ] 

Hudson commented on HIVE-3012:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3012 hive custom scripts do not work well if the data contains new 
lines (njain via kevinwilfong) (Revision 1336986)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1336986
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/data/scripts/newline.py
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TextRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TextRecordWriter.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java
* /hive/trunk/ql/src/test/queries/clientpositive/newline.q
* /hive/trunk/ql/src/test/results/clientpositive/newline.q.out


 hive custom scripts do not work well if the data contains new lines
 ---

 Key: HIVE-3012
 URL: https://issues.apache.org/jira/browse/HIVE-3012
 Project: Hive
  Issue Type: Improvement
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-3012.D3099.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-3012.D3099.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-3012.D3099.3.patch


 If the data contains newlines, they will be passed as-is to the script.
 The script then has no way of splitting the data on newlines.
 An option should be added to Hive to escape/unescape the newlines.
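A minimal sketch of the kind of escaping/unescaping such an option could apply to records exchanged with the transform script; the class and method names are illustrative, not the actual TextRecordReader/TextRecordWriter changes:

{code}
// Illustrative only: escape embedded newlines before handing a record to the script,
// and reverse the transformation on the way back.
public class NewlineEscapeSketch {
  static String escape(String record) {
    // Escape the escape character first, then embedded newlines.
    return record.replace("\\", "\\\\").replace("\n", "\\n");
  }

  static String unescape(String record) {
    StringBuilder out = new StringBuilder();
    for (int i = 0; i < record.length(); i++) {
      char c = record.charAt(i);
      if (c == '\\' && i + 1 < record.length()) {
        char next = record.charAt(++i);
        out.append(next == 'n' ? '\n' : next);
      } else {
        out.append(c);
      }
    }
    return out.toString();
  }

  public static void main(String[] args) {
    String cell = "line1\nline2";
    String oneLine = escape(cell);
    System.out.println(oneLine);                         // one physical line on the wire
    System.out.println(unescape(oneLine).equals(cell));  // true
  }
}
{code}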

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3832) Insert overwrite doesn't create a dir if the skewed column position doesnt match

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547853#comment-13547853
 ] 

Hudson commented on HIVE-3832:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3832 Insert overwrite doesn't create a dir if the skewed column 
position doesnt match
(Gang Tim Liu via namit) (Revision 1425589)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1425589
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ListBucketingCtx.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/SkewedColumnPositionPair.java
* /hive/trunk/ql/src/test/queries/clientpositive/list_bucket_dml_11.q
* /hive/trunk/ql/src/test/queries/clientpositive/list_bucket_dml_12.q
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_12.q.out


 Insert overwrite doesn't create a dir if the skewed column position doesnt 
 match
 

 Key: HIVE-3832
 URL: https://issues.apache.org/jira/browse/HIVE-3832
 Project: Hive
  Issue Type: Bug
Reporter: Gang Tim Liu
Assignee: Gang Tim Liu
 Fix For: 0.11.0

 Attachments: HIVE-3832.patch.1, HIVE-3832.patch.2


 If a skewed column's position doesn't match its position in the table's columns, insert 
 overwrite doesn't create the sub-directory but puts everything into the default directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2894) RCFile Reader doesn't provide access to Metadata

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547854#comment-13547854
 ] 

Hudson commented on HIVE-2894:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2894 [jira] RCFile Reader doesn't provide access to Metadata
(Owen O'Malley via Ashutosh Chauhan)

Summary:
hive-2894

Add an accessor for RCFile's metadata.

Currently the RCFile writer can add metadata to an RCFile, but the reader
doesn't provide an accessor. I'd like to add one.

Test Plan:
I added a call to test that the metadata that was passed in was available from
the reader.

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2421 (Revision 1304693)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304693
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/TestRCFile.java


 RCFile Reader doesn't provide access to Metadata
 

 Key: HIVE-2894
 URL: https://issues.apache.org/jira/browse/HIVE-2894
 Project: Hive
  Issue Type: New Feature
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2894.D2421.1.patch


 Currently the RCFile writer can add metadata to an RCFile, but the reader 
 doesn't provide an accessor. I'd like to add one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3834) Support ALTER VIEW AS SELECT in Hive

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547855#comment-13547855
 ] 

Hudson commented on HIVE-3834:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3834 Support ALTER VIEW AS SELECT in Hive
(Zhenxiao Luo via namit) (Revision 1425755)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1425755
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CreateViewDesc.java
* 
/hive/trunk/ql/src/test/queries/clientnegative/alter_view_as_select_not_exist.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/alter_view_as_select_with_partition.q
* /hive/trunk/ql/src/test/queries/clientpositive/alter_view_as_select.q
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_view_as_select_not_exist.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_view_as_select_with_partition.q.out
* /hive/trunk/ql/src/test/results/clientnegative/create_or_replace_view1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/create_or_replace_view2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/create_or_replace_view3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_view_as_select.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_view.q.out


 Support ALTER VIEW AS SELECT in Hive
 

 Key: HIVE-3834
 URL: https://issues.apache.org/jira/browse/HIVE-3834
 Project: Hive
  Issue Type: New Feature
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.11.0

 Attachments: HIVE-3834.1.patch.txt, HIVE-3834.2.patch.txt, 
 HIVE-3834.3.patch.txt, HIVE-3834.4.patch.txt


 Hive supports ALTER VIEW for setting properties, adding/dropping partitions, etc., but 
 not for the AS SELECT part.
 If you want to change the AS SELECT part, you have to drop the view, recreate it and 
 backfill partitions, etc., which is pretty painful.
 It will be nice to support this. The reference is mysql syntax 
 http://dev.mysql.com/doc/refman/5.0/en/alter-view.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3112) clear hive.metastore.partition.inherit.table.properties till HIVE-3109 is fixed

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547857#comment-13547857
 ] 

Hudson commented on HIVE-3112:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3112 clear hive.metastore.partition.inherit.table.properties till 
HIVE-3109 is fixed (njain via kevinwilfong) (Revision 1349096)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1349096
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props.q
* 
/hive/trunk/ql/src/test/queries/clientpositive/part_inherit_tbl_props_with_star.q
* /hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/part_inherit_tbl_props_with_star.q.out


 clear hive.metastore.partition.inherit.table.properties till HIVE-3109 is 
 fixed
 ---

 Key: HIVE-3112
 URL: https://issues.apache.org/jira/browse/HIVE-3112
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2540) LATERAL VIEW with EXPLODE produces ConcurrentModificationException

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547856#comment-13547856
 ] 

Hudson commented on HIVE-2540:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2540 LATERAL VIEW with EXPLODE produces ConcurrentModificationException
(Navis via namit) (Revision 1343036)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1343036
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/udf_explode.q
* /hive/trunk/ql/src/test/queries/clientpositive/udtf_explode.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_explode.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udtf_explode.q.out
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyArray.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyMap.java


 LATERAL VIEW with EXPLODE produces ConcurrentModificationException
 --

 Key: HIVE-2540
 URL: https://issues.apache.org/jira/browse/HIVE-2540
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.7.1, 0.9.0
Reporter: David Phillips
Assignee: Navis
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2540.D2805.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2540.D2805.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2540.D2805.3.patch


 The following produces {{ConcurrentModificationException}} on the {{for}} 
 loop inside EXPLODE:
 {code}
 create table foo as select array(1, 2) a from src limit 1;
 select a, x.b from foo lateral view explode(a) x as b;
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3838) Add input table name to MetaStoreEndFunctionContext for logging purposes

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547858#comment-13547858
 ] 

Hudson commented on HIVE-3838:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3838 Add input table name to MetaStoreEndFunctionContext for logging 
purposes
(Pamela Vagata via namit) (Revision 1426431)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1426431
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreEndFunctionContext.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEndFunctionListener.java


 Add input table name to MetaStoreEndFunctionContext for logging purposes
 

 Key: HIVE-3838
 URL: https://issues.apache.org/jira/browse/HIVE-3838
 Project: Hive
  Issue Type: Task
Reporter: Pamela Vagata
Assignee: Pamela Vagata
Priority: Minor
 Fix For: 0.11.0

 Attachments: HIVE-3838.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2891) TextConverter for UDF's is inefficient if the input object is already Text or Lazy

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547860#comment-13547860
 ] 

Hudson commented on HIVE-2891:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2891: TextConverter for UDF's is inefficient if the input object is 
already Text or Lazy (Cliff Engle via Ashutosh Chauhan) (Revision 1306096)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306096
Files : 
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorConverter.java
* 
/hive/trunk/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/TestObjectInspectorConverters.java


 TextConverter for UDF's is inefficient if the input object is already Text or 
 Lazy
 --

 Key: HIVE-2891
 URL: https://issues.apache.org/jira/browse/HIVE-2891
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.7.0, 0.7.1, 0.8.1
Reporter: Cliff Engle
Assignee: Cliff Engle
Priority: Minor
 Fix For: 0.9.0

 Attachments: HIVE-2891.1.patch.txt, HIVE-2891.2.patch.txt


 The TextConverter in PrimitiveObjectInspectorConverter.java is very 
 inefficient if the input object is already Text or Lazy. Since it calls 
 getPrimitiveJavaObject, each Text is decoded into a String and then 
 re-encoded into Text. The solution is to check if preferWritable() is true, 
 then call getPrimitiveWritable(input).
 To test performance, I ran the Grep query from 
 https://issues.apache.org/jira/browse/HIVE-396 on a cluster of 3 ec2 large 
 nodes (2 slaves 1 master) on 6GB of data. It took 21 map tasks. With the 
 current 0.8.1 version, it took 81 seconds. After patching, it took 66 seconds.
 I will attach a patch and testcases.
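For readers following along, a minimal sketch of the fast path being described, written against plain Hadoop Text rather than the real PrimitiveObjectInspectorConverter classes (so the class here is an illustration, not the actual patch):

{code}
import org.apache.hadoop.io.Text;

// Illustrative only: avoid the Text -> String -> Text round trip when the
// input is already a Text, which is the gist of the change described above.
public class TextConverterSketch {
  private final Text out = new Text();

  // Slow path: decode to String, then re-encode into Text.
  Text convertViaString(Object input) {
    out.set(input.toString());
    return out;
  }

  // Fast path: if the source already hands us a Text, copy its bytes directly.
  Text convert(Object input) {
    if (input instanceof Text) {
      Text t = (Text) input;
      out.set(t.getBytes(), 0, t.getLength());
      return out;
    }
    return convertViaString(input);
  }

  public static void main(String[] args) {
    TextConverterSketch converter = new TextConverterSketch();
    System.out.println(converter.convert(new Text("already text")));  // no String detour
    System.out.println(converter.convert(42));                        // falls back to toString()
  }
}
{code}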

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2542) DROP DATABASE CASCADE does not drop non-native tables.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547859#comment-13547859
 ] 

Hudson commented on HIVE-2542:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2542 : Forgot to do svn add for new files in previous patch. 
(Revision 1340158)
HIVE-2542 : DROP DATABASE CASCADE does not drop non-native tables ( Vandana 
Ayyalasomayajula via Ashutosh Chauhan) (Revision 1340130)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1340158
Files : 
* /hive/trunk/hbase-handler/src/test/queries/negative
* /hive/trunk/hbase-handler/src/test/queries/negative/cascade_dbdrop.q
* /hive/trunk/hbase-handler/src/test/results/negative
* /hive/trunk/hbase-handler/src/test/results/negative/cascade_dbdrop.q.out
* /hive/trunk/hbase-handler/src/test/templates/TestHBaseNegativeCliDriver.vm

hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1340130
Files : 
* /hive/trunk/hbase-handler/build.xml
* /hive/trunk/hbase-handler/src/test/queries/external_table_ppd.q
* 
/hive/trunk/hbase-handler/src/test/queries/hbase_binary_external_table_queries.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_binary_map_queries.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_binary_storage_queries.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_bulk.m
* /hive/trunk/hbase-handler/src/test/queries/hbase_joins.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_ppd_key_range.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_pushdown.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_queries.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_stats.q
* /hive/trunk/hbase-handler/src/test/queries/hbase_stats2.q
* /hive/trunk/hbase-handler/src/test/queries/positive
* /hive/trunk/hbase-handler/src/test/queries/positive/external_table_ppd.q
* 
/hive/trunk/hbase-handler/src/test/queries/positive/hbase_binary_external_table_queries.q
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_binary_map_queries.q
* 
/hive/trunk/hbase-handler/src/test/queries/positive/hbase_binary_storage_queries.q
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_bulk.m
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_joins.q
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_ppd_key_range.q
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_pushdown.q
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_queries.q
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_stats.q
* /hive/trunk/hbase-handler/src/test/queries/positive/hbase_stats2.q
* /hive/trunk/hbase-handler/src/test/queries/positive/ppd_key_ranges.q
* /hive/trunk/hbase-handler/src/test/queries/ppd_key_ranges.q
* /hive/trunk/hbase-handler/src/test/results/external_table_ppd.q.out
* 
/hive/trunk/hbase-handler/src/test/results/hbase_binary_external_table_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_binary_map_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_binary_storage_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_bulk.m.out
* /hive/trunk/hbase-handler/src/test/results/hbase_joins.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_ppd_key_range.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_pushdown.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_stats.q.out
* /hive/trunk/hbase-handler/src/test/results/hbase_stats2.q.out
* /hive/trunk/hbase-handler/src/test/results/positive
* /hive/trunk/hbase-handler/src/test/results/positive/external_table_ppd.q.out
* 
/hive/trunk/hbase-handler/src/test/results/positive/hbase_binary_external_table_queries.q.out
* 
/hive/trunk/hbase-handler/src/test/results/positive/hbase_binary_map_queries.q.out
* 
/hive/trunk/hbase-handler/src/test/results/positive/hbase_binary_storage_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_bulk.m.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_joins.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_ppd_key_range.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_pushdown.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_queries.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_stats.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_stats2.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/ppd_key_ranges.q.out
* /hive/trunk/hbase-handler/src/test/results/ppd_key_ranges.q.out
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java


 DROP DATABASE CASCADE does not drop non-native tables. 
 ---

 Key: HIVE-2542
 URL: 

[jira] [Commented] (HIVE-2288) Adding the oracle nvl function to the UDF

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547861#comment-13547861
 ] 

Hudson commented on HIVE-2288:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2288 : Adding the oracle nvl function to the UDF (Ed Capriolo, Guy 
Doulberg via Ashutosh Chauhan) (Revision 1419204)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1419204
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFNvl.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_nvl.q
* /hive/trunk/ql/src/test/results/clientpositive/show_functions.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_nvl.q.out


 Adding the oracle nvl function to the UDF
 -

 Key: HIVE-2288
 URL: https://issues.apache.org/jira/browse/HIVE-2288
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Affects Versions: 0.9.0
Reporter: Guy Doulberg
Assignee: Edward Capriolo
Priority: Minor
  Labels: hive
 Fix For: 0.11.0

 Attachments: 
 0002-HIVE-2288-Adding-the-oracle-nvl-function-to-the-UDF.patch, 
 hive-2288.2.patch.txt


 It would be nice if we could use the nvl function, described at oracle:
 http://www.techonthenet.com/oracle/functions/nvl.php

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2529) metastore 0.8 upgrade script for PostgreSQL

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547862#comment-13547862
 ] 

Hudson commented on HIVE-2529:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2529 [jira] metastore 0.8 upgrade script for PostgreSQL
(Zhenxiao Luo via Carl Steinbach)

Summary:
HIVE-2529:  metastore 0.8 upgrade script for PostgreSQL

I think you mentioned that this was in the works.

Test Plan: EMPTY

Reviewers: JIRA, cwsteinbach

Reviewed By: cwsteinbach

Differential Revision: https://reviews.facebook.net/D3027 (Revision 1334537)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1334537
Files : 
* /hive/trunk/metastore/scripts/upgrade/postgres/008-HIVE-2246.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/008-REVERT-HIVE-2246.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/hive-schema-0.8.0.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/hive-schema-0.9.0.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/upgrade-0.7.0-to-0.8.0.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/upgrade-0.8.0-to-0.9.0.postgres.sql


 metastore 0.8 upgrade script for PostgreSQL 
 

 Key: HIVE-2529
 URL: https://issues.apache.org/jira/browse/HIVE-2529
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.8.0
Reporter: John Sichi
Assignee: Zhenxiao Luo
Priority: Blocker
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2529.D3027.1.patch, 
 HIVE-2529.1.patch.txt


 I think you mentioned that this was in the works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2898) Add nicer helper functions for adding and reading metadata from RCFiles

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547863#comment-13547863
 ] 

Hudson commented on HIVE-2898:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2898: Add nicer helper functions for adding and reading metadata from 
RCFiles (Owen Omalley via Ashutosh Chauhan) (Revision 1306464)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1306464
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/TestRCFile.java


 Add nicer helper functions for adding and reading metadata from RCFiles
 ---

 Key: HIVE-2898
 URL: https://issues.apache.org/jira/browse/HIVE-2898
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2898.D2433.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2898.D2433.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2898.D2433.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2898.D2433.4.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2898.D2433.5.patch


 Currently, to use the metadata in RCFile, you need to manipulate it using 
 SequenceFile.Metadata. I'd like to add two helper functions that make it more 
 convenient.
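A sketch of what String-based helpers on top of SequenceFile.Metadata could look like; the helper names here are invented for illustration and are not the ones added by the patch:

{code}
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Illustrative only: thin String wrappers around SequenceFile.Metadata so callers
// don't have to build Text objects themselves.
public class RCFileMetadataHelpers {
  static void setMetadata(SequenceFile.Metadata metadata, String key, String value) {
    metadata.set(new Text(key), new Text(value));
  }

  static String getMetadata(SequenceFile.Metadata metadata, String key) {
    Text value = metadata.get(new Text(key));
    return value == null ? null : value.toString();
  }

  public static void main(String[] args) {
    SequenceFile.Metadata metadata = new SequenceFile.Metadata();
    setMetadata(metadata, "creator", "etl-job-42");
    System.out.println(getMetadata(metadata, "creator"));  // etl-job-42
  }
}
{code}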

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2477) Use name of original expression for name of CAST output

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547864#comment-13547864
 ] 

Hudson commented on HIVE-2477:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2477 Use name of original expression for name of CAST output
(Navis via namit) (Revision 1418012)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418012
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
* /hive/trunk/ql/src/test/queries/clientpositive/alias_casted_column.q
* /hive/trunk/ql/src/test/results/clientpositive/alias_casted_column.q.out


 Use name of original expression for name of CAST output
 ---

 Key: HIVE-2477
 URL: https://issues.apache.org/jira/browse/HIVE-2477
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Adam Kramer
Assignee: Navis
Priority: Minor
 Fix For: 0.11.0

 Attachments: HIVE-2477.1.patch.txt, hive.2477.4.patch, 
 HIVE-2477.D7161.1.patch, HIVE-2477.D7161.2.patch


 CAST(foo AS INT)
 should, by default, consider itself a column named foo if 
 unspecified/unaliased.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2797) Make the IP address of a Thrift client available to HMSHandler.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547865#comment-13547865
 ] 

Hudson commented on HIVE-2797:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2797: Make the IP address of a Thrift client available to HMSHandler. 
(Kevin Wilfong via Ashutosh Chauhan) (Revision 1305041)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1305041
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/TSetIpAddressProcessor.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/TUGIBasedProcessor.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/IpAddressListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStoreIpAddress.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteUGIHiveMetaStoreIpAddress.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/thrift/TUGIContainingTransport.java


 Make the IP address of a Thrift client available to HMSHandler.
 ---

 Key: HIVE-2797
 URL: https://issues.apache.org/jira/browse/HIVE-2797
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2797.D1701.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2797.D1701.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2797.D1701.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2797.D1701.4.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2797.D1701.5.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2797.D1701.6.patch, HIVE-2797.7.patch


 Currently, in unsecured mode, metastore Thrift calls are, from the 
 HMSHandler's point of view, anonymous.  If we expose the IP address of the 
 Thrift client to the HMSHandler from the Processor, this will help to give 
 some context, in particular for audit logging, of where the call is coming 
 from.
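A rough sketch of how a wrapping processor can capture the caller's address for later audit logging; this assumes the libthrift 0.9-era TProcessor signature and is not the actual TSetIpAddressProcessor from the patch:

{code}
import org.apache.thrift.TException;
import org.apache.thrift.TProcessor;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

// Illustrative only: record the remote IP in a ThreadLocal before delegating,
// so the handler servicing the call can read it when writing audit log lines.
public class IpCapturingProcessor implements TProcessor {
  private static final ThreadLocal<String> CLIENT_IP = new ThreadLocal<String>();
  private final TProcessor delegate;

  public IpCapturingProcessor(TProcessor delegate) {
    this.delegate = delegate;
  }

  public static String getClientIp() {
    return CLIENT_IP.get();
  }

  public boolean process(TProtocol in, TProtocol out) throws TException {
    TTransport transport = in.getTransport();
    if (transport instanceof TSocket) {
      // The underlying java.net.Socket knows the remote address.
      CLIENT_IP.set(((TSocket) transport).getSocket().getInetAddress().getHostAddress());
    }
    try {
      return delegate.process(in, out);
    } finally {
      CLIENT_IP.remove();
    }
  }
}
{code}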

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2665) Support for metastore service specific HADOOP_OPTS environment setting

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547867#comment-13547867
 ] 

Hudson commented on HIVE-2665:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2665 : Support for metastore service specific HADOOP_OPTS environment 
setting (thw via hashutosh) (Revision 1235845)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1235845
Files : 
* /hive/trunk/bin/ext/metastore.sh


 Support for metastore service specific HADOOP_OPTS environment setting
 --

 Key: HIVE-2665
 URL: https://issues.apache.org/jira/browse/HIVE-2665
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.8.0, 0.9.0
Reporter: Thomas Weise
Assignee: Thomas Weise
Priority: Minor
 Fix For: 0.9.0

 Attachments: HIVE-2665.patch


 For development/testing it would be helpful to have a way to define 
 HADOOP_OPTS that apply only to a specific launcher and don't affect 
 everything else launched through bin/hadoop. In this specific case I'm 
 looking for a way to set metastore JVM debug options w/o modifying the 
 HADOOP_OPTS environment setting or the hive scripts (which are replaced on 
 every build).
  
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2674) get_partitions_ps throws TApplicationException if table doesn't exist

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547868#comment-13547868
 ] 

Hudson commented on HIVE-2674:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2674 get_partitions_ps throws TApplicationException if table doesn't
exist (Kevin Wilfong via namit) (Revision 1234065)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1234065
Files : 
* /hive/trunk/metastore/if/hive_metastore.thrift
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp
* /hive/trunk/metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.h
* 
/hive/trunk/metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java
* 
/hive/trunk/metastore/src/gen/thrift/gen-php/hive_metastore/ThriftHiveMetastore.php
* 
/hive/trunk/metastore/src/gen/thrift/gen-py/hive_metastore/ThriftHiveMetastore.py
* /hive/trunk/metastore/src/gen/thrift/gen-rb/thrift_hive_metastore.rb
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java


 get_partitions_ps throws TApplicationException if table doesn't exist
 -

 Key: HIVE-2674
 URL: https://issues.apache.org/jira/browse/HIVE-2674
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2674.D987.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2674.D987.2.patch


 If the table passed to get_partitions_ps doesn't exist, an NPE is thrown by 
 getPartitionPsQueryResults.  There should be a check here which throws a 
 NoSuchObjectException if the table doesn't exist.
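A small self-contained illustration of the guard being asked for; the exception class and table lookup here are stand-ins, not the metastore API:

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: look the table up first and fail with a typed exception
// instead of letting a null dereference surface as an opaque TApplicationException.
public class PartitionLookupSketch {
  static class NoSuchObjectException extends Exception {
    NoSuchObjectException(String msg) { super(msg); }
  }

  private final Map<String, List<String>> tables = new HashMap<String, List<String>>();

  List<String> getPartitionsPs(String tableName) throws NoSuchObjectException {
    List<String> partitions = tables.get(tableName);
    if (partitions == null) {
      throw new NoSuchObjectException("table " + tableName + " does not exist");
    }
    return partitions;
  }

  public static void main(String[] args) {
    try {
      new PartitionLookupSketch().getPartitionsPs("missing_table");
    } catch (NoSuchObjectException e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
{code}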

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2471) Add timestamp column to the partition stats table.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547866#comment-13547866
 ] 

Hudson commented on HIVE-2471:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2471 Add timestamp column to the partition stats table.
(Kevin Wilfong via namit) (Revision 1302739)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1302739
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsSetupConstants.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsUtils.java


 Add timestamp column to the partition stats table.
 --

 Key: HIVE-2471
 URL: https://issues.apache.org/jira/browse/HIVE-2471
 Project: Hive
  Issue Type: Improvement
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2471.D2367.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2471.D2367.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2471.D2367.3.patch, HIVE-2471.1.patch.txt


 Occasionally, when entries are added to the partition stats table, the program 
 is halted, by an exception, keyboard interrupt, etc., before it can delete those 
 entries.  These entries build up to the point where the table gets very large, 
 and that hurts the performance of the update statement, which is called often.  
 In order to fix this, I am adding a column to the table which is 
 auto-populated with the current timestamp.  This will allow us to create 
 scripts that go through periodically and clean out old entries from the table.
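A sketch of the mechanism with plain JDBC; the table/column names and the in-memory Derby URL are illustrative (and assume a Derby driver on the classpath), not Hive's actual stats schema:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.sql.Timestamp;

// Illustrative only: a column that defaults to the insert time lets a periodic
// job delete rows that the writing query never got to clean up.
public class StatsCleanupSketch {
  public static void main(String[] args) throws Exception {
    Connection conn = DriverManager.getConnection("jdbc:derby:memory:statsdb;create=true");
    Statement st = conn.createStatement();
    st.execute("CREATE TABLE PARTITION_STATS ("
        + " ID VARCHAR(255), NUM_ROWS BIGINT,"
        + " LAST_UPDATED TIMESTAMP DEFAULT CURRENT_TIMESTAMP)");
    st.execute("INSERT INTO PARTITION_STATS (ID, NUM_ROWS) VALUES ('part=2013-01-09', 100)");

    // Periodic cleanup: drop anything older than 30 days.
    Timestamp cutoff = new Timestamp(System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000);
    PreparedStatement purge =
        conn.prepareStatement("DELETE FROM PARTITION_STATS WHERE LAST_UPDATED < ?");
    purge.setTimestamp(1, cutoff);
    System.out.println("purged " + purge.executeUpdate() + " stale rows");
    conn.close();
  }
}
{code}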

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3706) getBoolVar in FileSinkOperator can be optimized

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547869#comment-13547869
 ] 

Hudson commented on HIVE-3706:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3706 getBoolVar in FileSinkOperator can be optimized
(Kevin Wilfong via namit) (Revision 1409691)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409691
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java


 getBoolVar in FileSinkOperator can be optimized
 ---

 Key: HIVE-3706
 URL: https://issues.apache.org/jira/browse/HIVE-3706
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.10.0

 Attachments: HIVE-3706.1.patch.txt


 There's a call to HiveConf.getBoolVar in FileSinkOperator's processOp method. 
  In benchmarks we found this call to be using ~2% of the CPU time on simple 
 queries, e.g. INSERT OVERWRITE TABLE t1 SELECT * FROM t2;
 This boolean value, a flag to collect the RawDataSize stat, won't change 
 during the processing of a query, so we can determine it at initialization 
 and store that value, saving that CPU.
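A stripped-down illustration of the change being described (not the real FileSinkOperator): resolve the flag once at initialization, keep it in a field, and leave the per-row path free of configuration lookups. The Conf interface and property name are placeholders.

{code}
// Illustrative only: hoist a per-query configuration lookup out of the per-row hot path.
public class FileSinkSketch {
  interface Conf {
    boolean getBool(String name);
  }

  private boolean collectRawDataSize;  // resolved once in initialize(), reused per row

  void initialize(Conf conf) {
    collectRawDataSize = conf.getBool("hive.stats.collect.rawdatasize");
  }

  void processRow(Object row) {
    // Hot path: no getBoolVar-style lookup per row any more.
    if (collectRawDataSize) {
      // ... update the RawDataSize statistic ...
    }
    // ... write the row ...
  }

  public static void main(String[] args) {
    FileSinkSketch op = new FileSinkSketch();
    op.initialize(new Conf() {
      public boolean getBool(String name) { return true; }
    });
    op.processRow("a row");
    System.out.println("row processed; flag was cached at initialization");
  }
}
{code}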

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3705) Adding authorization capability to the metastore

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547870#comment-13547870
 ] 

Hudson commented on HIVE-3705:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3705 : Adding authorization capability to the metastore (Sushanth 
Sowmyan via Ashutosh Chauhan) (Revision 1418802)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1418802
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/Warehouse.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/HadoopDefaultAuthenticator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/HadoopDefaultMetastoreAuthenticator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/HiveMetastoreAuthenticationProvider.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/AuthorizationPreEventListener.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/BitSetCheckedAuthorizationProvider.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/DefaultHiveAuthorizationProvider.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/DefaultHiveMetastoreAuthorizationProvider.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/HiveAuthorizationProviderBase.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/HiveMetastoreAuthorizationProvider.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/StorageBasedAuthorizationProvider.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/security/DummyHiveMetastoreAuthorizationProvider.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/security/InjectableDummyAuthenticator.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/security/TestAuthorizationPreEventListener.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/security/TestDefaultHiveMetastoreAuthorizationProvider.java


 Adding authorization capability to the metastore
 

 Key: HIVE-3705
 URL: https://issues.apache.org/jira/browse/HIVE-3705
 Project: Hive
  Issue Type: New Feature
  Components: Authorization, Metastore
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.10.0

 Attachments: HIVE-3705.D6681.1.patch, HIVE-3705.D6681.2.patch, 
 HIVE-3705.D6681.3.patch, HIVE-3705.D6681.4.patch, HIVE-3705.D6681.5.patch, 
 HIVE-3705.giant.svn-0.10.patch, HIVE-3705.giant.svn.patch, 
 hive-backend-auth.2.git.patch, hive-backend-auth.git.patch, 
 hive-backend-auth.post-review.git.patch, 
 hive-backend-auth.post-review-part2.git.patch, 
 hive-backend-auth.post-review-part3.git.patch, hivesec_investigation.pdf


 In an environment where multiple clients access a single metastore, and we 
 want to evolve hive security to a point where it's no longer simply 
 preventing users from shooting their own foot, we need to be able to 
 authorize metastore calls as well, instead of simply performing every 
 metastore api call that's made.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3704) name of some metastore scripts are not per convention

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547871#comment-13547871
 ] 

Hudson commented on HIVE-3704:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3704 : name of some metastore scripts are not per convention (Ashutosh 
Chauhan) (Revision 1408576)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1408576
Files : 
* /hive/trunk/metastore/scripts/upgrade/derby/010-HIVE-3649.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/011-HIVE-3649.derby.sql
* /hive/trunk/metastore/scripts/upgrade/derby/upgrade-0.9.0-to-0.10.0.derby.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/010-HIVE-3649.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/011-HIVE-3649.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/mysql/upgrade-0.9.0-to-0.10.0.mysql.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/010-HIVE-3649.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/011-HIVE-3649.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/010-HIVE-3649.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/011-HIVE-3649.postgres.sql
* 
/hive/trunk/metastore/scripts/upgrade/postgres/upgrade-0.9.0-to-0.10.0.postgres.sql


 name of some metastore scripts are not per convention
 -

 Key: HIVE-3704
 URL: https://issues.apache.org/jira/browse/HIVE-3704
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Fix For: 0.10.0

 Attachments: hive-3704.sh




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3702) Renaming table changes table location scheme/authority

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547875#comment-13547875
 ] 

Hudson commented on HIVE-3702:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3702 Renaming table changes table location scheme/authority
(Kevin Wilfong via namit) (Revision 1416875)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1416875
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveAlterHandler.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/VerifyOutputTableLocationSchemeIsFileHook.java
* /hive/trunk/ql/src/test/queries/clientpositive/rename_table_location.q
* /hive/trunk/ql/src/test/results/clientpositive/rename_table_location.q.out


 Renaming table changes table location scheme/authority
 --

 Key: HIVE-3702
 URL: https://issues.apache.org/jira/browse/HIVE-3702
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.9.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.11.0

 Attachments: HIVE-3702.1.patch.txt, HIVE-3702.2.patch.txt


 Renaming a table changes the location of the table to the default location of 
 the database, followed by the table name.  This means that if the default 
 location of the database uses a different scheme/authority, an exception will 
 get thrown attempting to move the data.
 Instead, the table's location should be made the default location of the 
 database followed by the table name, but using the original location's scheme 
 and authority.
 This only applies for managed tables, and there is already a check to ensure 
 the new location doesn't already exist.
 This is analogous to what was done for partitions in HIVE-2875.
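A small sketch of the intended location computation using Hadoop's Path; the class name and the example paths in main() are made up for illustration:

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

// Illustrative only: take the path portion from the database's default location
// but keep the renamed table on the original location's scheme/authority.
public class RenameLocationSketch {
  static Path newTableLocation(Path oldTableLocation, Path dbDefaultLocation, String newName) {
    URI oldUri = oldTableLocation.toUri();
    Path underDb = new Path(dbDefaultLocation, newName.toLowerCase());
    return new Path(oldUri.getScheme(), oldUri.getAuthority(), underDb.toUri().getPath());
  }

  public static void main(String[] args) {
    Path oldLocation = new Path("hdfs://clusterA:8020/user/hive/warehouse/old_tbl");
    Path dbLocation = new Path("hdfs://clusterB:8020/warehouse/mydb.db");
    // Keeps clusterA as the authority, so no cross-filesystem move is attempted.
    System.out.println(newTableLocation(oldLocation, dbLocation, "new_tbl"));
  }
}
{code}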

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3002) Revert HIVE-2986

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547874#comment-13547874
 ] 

Hudson commented on HIVE-3002:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3002 Revert HIVE-2986
(Kevin Wilfong via namit) (Revision 1334060)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1334060
Files : 
* /hive/trunk/contrib/src/java/org/apache/hadoop/hive/metastore
* /hive/trunk/contrib/src/java/org/apache/hadoop/hive/ql


 Revert HIVE-2986
 

 Key: HIVE-3002
 URL: https://issues.apache.org/jira/browse/HIVE-3002
 Project: Hive
  Issue Type: Task
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-3002.D3021.1.patch


 Given the amount of push back, reverting this patch pending further 
 changes/review seems like a good idea.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3001) Returning Meaningful Error Codes & Messages

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547876#comment-13547876
 ] 

Hudson commented on HIVE-3001:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3001 Returning Meaningful Error Codes & Messages
(Bhushan Mandhani via namit) (Revision 1338537)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1338537
Files : 
* /hive/trunk/contrib/src/test/results/clientnegative/invalid_row_sequence.q.out
* /hive/trunk/contrib/src/test/results/clientnegative/udtf_explode2.q.out
* /hive/trunk/jdbc/src/test/org/apache/hadoop/hive/jdbc/TestJdbcDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JobDebugger.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/EximUtil.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ExportSemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/FunctionSemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestSemanticAnalyzerHookLoading.java
* 
/hive/trunk/ql/src/test/results/clientnegative/alter_concatenate_indexed_table.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure5.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/alter_view_failure7.q.out
* /hive/trunk/ql/src/test/results/clientnegative/ambiguous_col.q.out
* /hive/trunk/ql/src/test/results/clientnegative/analyze.q.out
* /hive/trunk/ql/src/test/results/clientnegative/analyze1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/analyze_view.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive5.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_insert1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_insert2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_insert3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_insert4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/archive_partspec5.q.out
* /hive/trunk/ql/src/test/results/clientnegative/bad_indextype.q.out
* /hive/trunk/ql/src/test/results/clientnegative/bad_sample_clause.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clusterbydistributeby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clusterbyorderby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clusterbysortby.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clustern1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clustern2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clustern3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/clustern4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/column_rename3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/compare_double_bigint.q.out
* 

[jira] [Commented] (HIVE-2673) Eclipse launch configurations fail due to unsatisfied builtins JAR dependency

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547872#comment-13547872
 ] 

Hudson commented on HIVE-2673:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2673: Eclipse launch configurations fail due to unsatisfied builtins 
JAR dependency (Carl Steinbach via Ashutosh Chauhan) (Revision 1238948)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1238948
Files : 
* /hive/trunk/eclipse-templates/.classpath


 Eclipse launch configurations fail due to unsatisfied builtins JAR dependency
 -

 Key: HIVE-2673
 URL: https://issues.apache.org/jira/browse/HIVE-2673
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.9.0

 Attachments: HIVE-2673.1.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3000) Potential infinite loop / log spew in ZookeeperHiveLockManager

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547877#comment-13547877
 ] 

Hudson commented on HIVE-3000:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3000 Potential infinite loop / log spew in ZookeeperHiveLockManager 
(njain via kevinwilfong) (Revision 1335106)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335106
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java


 Potential infinite loop / log spew in ZookeeperHiveLockManager
 --

 Key: HIVE-3000
 URL: https://issues.apache.org/jira/browse/HIVE-3000
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.9.0
Reporter: Paul Yang
Assignee: Namit Jain
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-3000.D3063.1.patch


 See ZookeeperHiveLockManger.lock()
 If Zookeeper is in a bad state, it's possible to get an exception (e.g. 
 org.apache.zookeeper.KeeperException$SessionExpiredException) when we call 
 lockPrimitive(). There is a bug in the exception handler where the loop does 
 not exit because the break in the switch statement gets out the switch, not 
 the do..while loop. Because tryNum was not incremented due to the exception, 
 lockPrimitive() will be called in an infinite loop, as fast as possible. 
 Since the exception is printed for each call, Hive will produce significant 
 log spew.
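
 For illustration, a minimal, self-contained sketch of that control flow (hypothetical 
 names, not the actual ZooKeeperHiveLockManager code; a safety counter is added so the 
 sketch itself terminates):
 {code}
 // 'break' only leaves the switch, not the do..while, and tryNum is never
 // incremented on the exception path, so the retry loop would spin forever.
 public class RetryLoopSketch {
   public static void main(String[] args) {
     int tryNum = 0;
     int iterations = 0;                 // guard for this sketch only
     do {
       try {
         // stand-in for lockPrimitive() throwing e.g. SessionExpiredException
         throw new RuntimeException("KeeperException: session expired");
       } catch (RuntimeException e) {
         switch (tryNum) {
           default:
             System.out.println("lock attempt failed: " + e.getMessage());
             break;                      // exits the switch, NOT the do..while
         }
       }
       iterations++;                     // the buggy code has no such guard
     } while (tryNum < 3 && iterations < 5);
     System.out.println("sketch stopped after " + iterations + " iterations");
   }
 }
 {code}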

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3100) Add HiveCLI that runs over JDBC

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547878#comment-13547878
 ] 

Hudson commented on HIVE-3100:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3100. Add HiveCLI that runs over JDBC (Prasad Mujumdar via cws) 
(Revision 1356516)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1356516
Files : 
* /hive/trunk/LICENSE
* /hive/trunk/NOTICE
* /hive/trunk/bin/ext/beeline.sh
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/eclipse-templates/HiveBeeLine.launchtemplate
* /hive/trunk/ivy/ivysettings.xml
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/jdbc/ivy.xml
* /hive/trunk/jdbc/src/java/org/apache/hive
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/beeline
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/beeline/HiveBeeline.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/beeline/OptionsProcessor.java


 Add HiveCLI that runs over JDBC
 ---

 Key: HIVE-3100
 URL: https://issues.apache.org/jira/browse/HIVE-3100
 Project: Hive
  Issue Type: Bug
  Components: CLI, JDBC
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Fix For: 0.10.0

 Attachments: HIVE-3100-9.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3842) Remove redundant test codes

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547879#comment-13547879
 ] 

Hudson commented on HIVE-3842:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3842 Remove redundant test codes
(Navis via namit) (Revision 1429682)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1429682
Files : 
* /hive/trunk/hbase-handler/src/test/templates/TestHBaseCliDriver.vm
* /hive/trunk/hbase-handler/src/test/templates/TestHBaseNegativeCliDriver.vm
* /hive/trunk/ql/src/test/templates/TestCliDriver.vm
* /hive/trunk/ql/src/test/templates/TestNegativeCliDriver.vm
* /hive/trunk/ql/src/test/templates/TestParse.vm
* /hive/trunk/ql/src/test/templates/TestParseNegative.vm


 Remove redundant test codes
 ---

 Key: HIVE-3842
 URL: https://issues.apache.org/jira/browse/HIVE-3842
 Project: Hive
  Issue Type: Test
  Components: Tests
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Fix For: 0.11.0

 Attachments: HIVE-3842.D7773.1.patch


 Currently hive writes the same test code again and again for each test, making 
 the test class huge (50k lines for ql).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3709) Stop storing default ConfVars in temp file

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547880#comment-13547880
 ] 

Hudson commented on HIVE-3709:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3709. Stop storing default ConfVars in temp file (Kevin Wilfong via 
cws) (Revision 1415038)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1415038
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/conf/LoopingByteArrayInputStream.java


 Stop storing default ConfVars in temp file
 --

 Key: HIVE-3709
 URL: https://issues.apache.org/jira/browse/HIVE-3709
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.11.0

 Attachments: HIVE-3709.1.patch.txt, HIVE-3709.2.patch.txt, 
 HIVE-3709.3.patch.txt


 To work around issues with Hadoop's Configuration object, specifically its 
 addResource(InputStream), default configurations are written to a temp file 
 (I think HIVE-2362 introduced this).
 This, however, introduces the problem that once that file is deleted from 
 /tmp the client crashes.  This is particularly problematic for long-running 
 services like the metastore server.
 Writing a custom InputStream to deal with the problems in the Configuration 
 object should provide a workaround, which does not introduce a time bomb 
 into Hive.
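
 As a rough illustration of that direction, a minimal sketch that keeps the serialized 
 defaults in memory and feeds them to Configuration.addResource(InputStream) instead of 
 a temp file (the property name and XML payload here are made up):
 {code}
 import java.io.ByteArrayInputStream;
 import java.nio.charset.StandardCharsets;
 import org.apache.hadoop.conf.Configuration;

 public class InMemoryDefaultsSketch {
   private static final byte[] DEFAULTS_XML =
       ("<?xml version=\"1.0\"?><configuration>"
        + "<property><name>hive.example.setting</name><value>42</value></property>"
        + "</configuration>").getBytes(StandardCharsets.UTF_8);

   public static Configuration newConf() {
     Configuration conf = new Configuration(false);
     // a fresh stream over the same in-memory bytes on every call, so nothing
     // under /tmp can disappear out from under a long-running service
     conf.addResource(new ByteArrayInputStream(DEFAULTS_XML));
     return conf;
   }

   public static void main(String[] args) {
     System.out.println(newConf().get("hive.example.setting"));  // prints 42
   }
 }
 {code}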

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2530) Implement SHOW TBLPROPERTIES

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547881#comment-13547881
 ] 

Hudson commented on HIVE-2530:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2530. Implement SHOW TBLPROPERTIES. (leizhao via kevinwilfong) 
(Revision 1327189)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1327189
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/DDLWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ShowTblPropertiesDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/show_tblproperties.q
* /hive/trunk/ql/src/test/results/clientpositive/show_tblproperties.q.out


 Implement SHOW TBLPROPERTIES
 

 Key: HIVE-2530
 URL: https://issues.apache.org/jira/browse/HIVE-2530
 Project: Hive
  Issue Type: New Feature
  Components: SQL
Reporter: Adam Kramer
Assignee: Lei Zhao
Priority: Minor
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2530.D2589.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2530.D2589.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2530.D2589.3.patch


 Since table properties can be defined arbitrarily, they should be easy for a 
 user to query from the command-line.
 SHOW TBLPROPERTIES tblname;
 ...would show all of them, one per row, key \t value
 SHOW TBLPROPERTIES tblname (FOOBAR);
 ...would just show the value for the FOOBAR tblproperty.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3707) Round map/reduce progress down when it is in the range [99.5, 100)

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547882#comment-13547882
 ] 

Hudson commented on HIVE-3707:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3707 Round map/reduce progress down when it is in the range [99.5, 100)
(Kevin Wilfong via namit) (Revision 1409680)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409680
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java


 Round map/reduce progress down when it is in the range [99.5, 100)
 --

 Key: HIVE-3707
 URL: https://issues.apache.org/jira/browse/HIVE-3707
 Project: Hive
  Issue Type: Improvement
  Components: Logging, Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3707.1.patch.txt


 In HadoopJobExecHelper the mapProgress and reduceProgress are the value of 
 these counters taken from the running job rounded to an integer percentage.  
 This means that e.g. if the mappers are 99.5% done this is stored as 100%.
 One of the most common questions I see from new users is: the map and reduce 
 both report being 100% done, so why is the query still running?
 By rounding the value down in this interval, so it's only 100% when it's 
 really 100%, we could avoid that confusion.
 Also, it appears the QueryPlan and MapRedTask determine whether the 
 map/reduce phases are done by checking if this value == 100.  I couldn't 
 find anywhere they're used for anything significant, but they do report 
 early completion.
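
 A minimal sketch of the rounding rule being proposed (illustrative only, not the actual 
 HadoopJobExecHelper code):
 {code}
 public class ProgressRoundingSketch {
   static int reportedPercent(double progressFraction) {
     double percent = progressFraction * 100.0;
     int rounded = (int) Math.round(percent);
     if (rounded == 100 && percent < 100.0) {
       return 99;                 // only report 100% when it is really 100%
     }
     return rounded;
   }

   public static void main(String[] args) {
     System.out.println(reportedPercent(0.995));  // 99, not 100
     System.out.println(reportedPercent(1.0));    // 100
   }
 }
 {code}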

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3101) dropTable will always execute hook.rollbackDropTable whether the drop table succeeds or fails.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547883#comment-13547883
 ] 

Hudson commented on HIVE-3101:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3101. Drop table rollback hook always called. (Ransom Hezhiqiang via 
egc) (Revision 1348523)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1348523
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java


 dropTable will always execute hook.rollbackDropTable whether the drop table 
 succeeds or fails.
 --

 Key: HIVE-3101
 URL: https://issues.apache.org/jira/browse/HIVE-3101
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: ransom.hezhiqiang
Assignee: ransom.hezhiqiang
 Fix For: 0.10.0

 Attachments: HIVE-3101.1.patch


 see the code:
 boolean success = false;
 try {
   client.drop_table(dbname, name, deleteData);
   if (hook != null) {
     hook.commitDropTable(tbl, deleteData);
   }
 } catch (NoSuchObjectException e) {
   if (!ignoreUknownTab) {
     throw e;
   }
 } finally {
   if (!success && (hook != null)) {
     hook.rollbackDropTable(tbl);
   }
 }
 success will always be false, whether the drop succeeded or failed,
 so it's a bug.
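
 A minimal sketch of the fix the description implies: record success right after the 
 drop, so the rollback hook only runs when the drop actually failed (mirrors the snippet 
 above; simplified, not the exact committed patch):
 {code}
 boolean success = false;
 try {
   client.drop_table(dbname, name, deleteData);
   success = true;                          // the drop went through
   if (hook != null) {
     hook.commitDropTable(tbl, deleteData);
   }
 } catch (NoSuchObjectException e) {
   if (!ignoreUknownTab) {
     throw e;
   }
 } finally {
   if (!success && (hook != null)) {        // rollback only on failure
     hook.rollbackDropTable(tbl);
   }
 }
 {code}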

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2559) Add target to install Hive JARs/POMs in the local Maven cache

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547884#comment-13547884
 ] 

Hudson commented on HIVE-2559:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2559 : Add target to install Hive JARs/POMs in the local Maven cache 
(Alan Gates via Ashutosh Chauhan) (Revision 1309675)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1309675
Files : 
* /hive/trunk/build.xml


 Add target to install Hive JARs/POMs in the local Maven cache
 -

 Key: HIVE-2559
 URL: https://issues.apache.org/jira/browse/HIVE-2559
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Alejandro Abdelnur
Assignee: Alan Gates
Priority: Critical
 Fix For: 0.9.0

 Attachments: HIVE-2559.patch


 HIVE-2391 is producing usable Maven artifacts.
 However, it only as a target to deploy/publish those artifacts to Apache 
 Maven repos.
 There should be a new target to locally install Hive Maven artifacts, thus 
 enabling their use from other projects before they are committed/publish to 
 Apache Maven (this is critical to test patches that may address issues in 
 downstream components).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3127) Pass hconf values as XML instead of command line arguments to child JVM

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547885#comment-13547885
 ] 

Hudson commented on HIVE-3127:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3127 Pass hconf values as XML instead of command line arguments to 
child JVM. Kanna Karanam (via egc) (Revision 1354781)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354781
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapRedTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapredLocalTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java


 Pass hconf values as XML instead of command line arguments to child JVM
 ---

 Key: HIVE-3127
 URL: https://issues.apache.org/jira/browse/HIVE-3127
 Project: Hive
  Issue Type: Bug
  Components: Configuration, Windows
Affects Versions: 0.9.0, 0.10.0, 0.9.1
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: HIVE-3127.1.patch.txt, HIVE-3127.2.patch.txt, 
 HIVE-3127.3.patch.txt


 The maximum length of the DOS command string is 8191 characters (in recent 
 Windows versions, see http://support.microsoft.com/kb/830473). This limit is 
 easily exceeded when individual -hconf values are appended to the command 
 string. To work around this problem, write all changed hconf values to a temp 
 file and pass the temp file path to the child JVM, which reads and initializes the 
 -hconf parameters from that file.
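
 A minimal sketch of that temp-file approach (the property name is made up; the real 
 patch touches ExecDriver/MapRedTask rather than a standalone class):
 {code}
 import java.io.File;
 import java.io.FileOutputStream;
 import java.io.OutputStream;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;

 public class ConfViaFileSketch {
   public static void main(String[] args) throws Exception {
     // parent side: dump the changed configuration to an XML file
     Configuration parentConf = new Configuration(false);
     parentConf.set("hive.example.child.setting", "foo");
     File tmp = File.createTempFile("jobconf", ".xml");
     try (OutputStream out = new FileOutputStream(tmp)) {
       parentConf.writeXml(out);
     }

     // child side: read the file back instead of parsing long -hconf arguments
     Configuration childConf = new Configuration(false);
     childConf.addResource(new Path(tmp.toURI()));
     System.out.println(childConf.get("hive.example.child.setting"));  // foo
   }
 }
 {code}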

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3128) use commons-compress instead of forking tar process

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547886#comment-13547886
 ] 

Hudson commented on HIVE-3128:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3295. HIVE-3128 introduced bug causing dynamic partitioning to fail. 
(kevinwilfong reviewed by njain, ashutoshc) (Revision 1365460)
HIVE-3180 Fix Eclipse classpath template broken in HIVE-3128. Carl Steinbach 
(via egc) (Revision 1354187)
HIVE-3128 Use commons-compress instead of forking tar process (Kanna Karanam 
via egc) (Revision 1353044)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1365460
Files : 
* 
/hive/trunk/common/src/java/org/apache/hadoop/hive/common/CompressionUtils.java
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/FileUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java

ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1354187
Files : 
* /hive/trunk/eclipse-templates/.classpath

ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353044
Files : 
* /hive/trunk/common/ivy.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/FileUtils.java
* /hive/trunk/ivy/libraries.properties


 use commons-compress instead of forking tar process
 ---

 Key: HIVE-3128
 URL: https://issues.apache.org/jira/browse/HIVE-3128
 Project: Hive
  Issue Type: Bug
  Components: CLI, Query Processor
Reporter: Kanna Karanam
Assignee: Kanna Karanam
 Fix For: 0.10.0

 Attachments: HIVE-3128.1.patch.txt, HIVE-3128.2.patch.txt


 TAR tool doesn’t exist by default on windows systems so use the CAB files on 
 windows
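
 For context, a minimal sketch of unpacking an archive in-process with commons-compress 
 instead of forking tar (illustrative only; it skips the hardening a real implementation 
 needs, e.g. rejecting entries that escape the target directory):
 {code}
 import java.io.BufferedInputStream;
 import java.io.File;
 import java.io.FileInputStream;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.util.zip.GZIPInputStream;
 import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
 import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;

 public class UntarSketch {
   public static void untar(File tarGz, File destDir) throws IOException {
     try (TarArchiveInputStream tin = new TarArchiveInputStream(
              new GZIPInputStream(new BufferedInputStream(new FileInputStream(tarGz))))) {
       TarArchiveEntry entry;
       while ((entry = tin.getNextTarEntry()) != null) {
         File out = new File(destDir, entry.getName());
         if (entry.isDirectory()) {
           out.mkdirs();
           continue;
         }
         out.getParentFile().mkdirs();
         try (FileOutputStream fos = new FileOutputStream(out)) {
           byte[] buf = new byte[8192];
           for (int n; (n = tin.read(buf)) != -1; ) {
             fos.write(buf, 0, n);
           }
         }
       }
     }
   }
 }
 {code}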

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2769) union with a multi-table insert is not working

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547887#comment-13547887
 ] 

Hudson commented on HIVE-2769:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2769 [jira] union with a multi-table insert is not working
(Namit Jain via Yongqiang He)

Summary:
https://issues.apache.org/jira/browse/HIVE-2769

HIVE-2769



Test Plan: EMPTY

Reviewers: JIRA, heyongqiang

Reviewed By: heyongqiang

CC: heyongqiang

Differential Revision: https://reviews.facebook.net/D1545 (Revision 1239161)

 Result = ABORTED
heyongqiang : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1239161
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRRedSink3.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java
* /hive/trunk/ql/src/test/queries/clientpositive/union31.q
* /hive/trunk/ql/src/test/results/clientpositive/union31.q.out


 union with a multi-table insert is not working
 --

 Key: HIVE-2769
 URL: https://issues.apache.org/jira/browse/HIVE-2769
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2769.D1545.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2768) Add a getAuthorizationProvider to HiveStorageHandler

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547888#comment-13547888
 ] 

Hudson commented on HIVE-2768:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2768: Add a getAuthorizationProvider to HiveStorageHandler (toffer via 
hashutosh) (Revision 1292969)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1292969
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/DefaultStorageHandler.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageHandler.java


 Add a getAuthorizationProvider to HiveStorageHandler
 

 Key: HIVE-2768
 URL: https://issues.apache.org/jira/browse/HIVE-2768
 Project: Hive
  Issue Type: Task
  Components: HBase Handler
Reporter: Alan Gates
Assignee: Francis Liu
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2768.D1869.1.patch, 
 HIVE-2768.D1869.1.patch


 In version 0.92 HBase supports ACLs for tables.  In HCatalog, since we 
 delegate security to the underlying storage layer, we would like to be able 
 to obtain a HiveAuthorizationProvider specific to a HiveStorageHandler 
 instance.  This can be done by adding a getAuthorizationProvider method to 
 HiveStorageHandler.  In the case where Hive is configured to use the 
 DefaultHiveAuthorizationProvider this call will return the same default 
 provider, since Hive handles all of the authorization itself in that case.  
 In the case where it is configured to use the HCatAuthorizationProvider, it 
 would return an instance specific to the underlying storage.
 For more background on this proposed change see HCATALOG-237 and 
 https://cwiki.apache.org/confluence/display/HCATALOG/Hcat+Security+Design

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2764) Obtain delegation tokens for MR jobs in secure hbase setup

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547890#comment-13547890
 ] 

Hudson commented on HIVE-2764:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2764: Obtain delegation tokens for MR jobs in secure hbase setup (Enis 
Soztutar via Ashutosh Chauhan) (Revision 1311418)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1311418
Files : 
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableOutputFormat.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHFileOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveNullValueSequenceFileOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveOutputFormatImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveSequenceFileOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFileOutputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredWork.java
* 
/hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* 
/hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


 Obtain delegation tokens for MR jobs in secure hbase setup  
 

 Key: HIVE-2764
 URL: https://issues.apache.org/jira/browse/HIVE-2764
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler, Security
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2764.D2205.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2764.D2205.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2764.D2205.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2764.D2205.4.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2764.D2205.5.patch, HIVE-2764_v0.patch


 As discussed in HCATALOG-244, in a secure hbase setup with 0.92, we need to 
 obtain delegation tokens for hbase and save it in jobconf, so that tasks can 
 access region servers. 
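
 A rough sketch of the general approach from the MR-job side (illustrative; the actual 
 patch wires this through the Hive shims rather than calling it directly):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
 import org.apache.hadoop.mapreduce.Job;

 public class HBaseTokenSketch {
   // ask HBase for a delegation token and stash it in the job credentials,
   // so tasks can talk to region servers in a secure cluster
   public static Job newJobWithHBaseToken(Configuration conf) throws IOException {
     Job job = Job.getInstance(HBaseConfiguration.create(conf));
     TableMapReduceUtil.initCredentials(job);   // no-op when security is off
     return job;
   }
 }
 {code}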

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2767) Optionally use framed transport with metastore

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547891#comment-13547891
 ] 

Hudson commented on HIVE-2767:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2767 [jira] Optionally use framed transport with metastore
(Travis Crawford via Ashutosh Chauhan)

Summary:
Add support for optionally using the thrift framed transport, enabling
integration with environments where that is necessary.

Users may want/need to use thrift's framed transport when communicating with the
Hive MetaStore. This patch adds a new property
hive.metastore.thrift.framed.transport.enabled that enables the framed transport
(defaults to off, aka no change from before the patch). This property must be
set for both clients and the HMS server.

It wasn't immediately clear how to use the framed transport with SASL, so as
written an exception is thrown if you try starting the server with both options.
If SASL and the framed transport will indeed work together I can update the
patch (although I don't have a secured environment to test in).

Test Plan:
Tested locally that client and server can connect, both with and
without the flag. Tests pass.

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

Differential Revision: https://reviews.facebook.net/D2661 (Revision 1325446)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1325446
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java


 Optionally use framed transport with metastore
 --

 Key: HIVE-2767
 URL: https://issues.apache.org/jira/browse/HIVE-2767
 Project: Hive
  Issue Type: New Feature
  Components: Metastore
Reporter: Travis Crawford
Assignee: Travis Crawford
 Fix For: 0.10.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2767.D2661.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2767.D2661.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2767.D2661.3.patch, HIVE-2767_a.patch.txt, 
 HIVE-2767.patch.txt


 Users may want/need to use thrift's framed transport when communicating with 
 the Hive MetaStore. This patch adds a new property 
 {{hive.metastore.thrift.framed.transport.enabled}} that enables the framed 
 transport (defaults to off, aka no change from before the patch). This 
 property must be set for both clients and the HMS server.
 It wasn't immediately clear how to use the framed transport with SASL, so as 
 written an exception is thrown if you try starting the server with both 
 options. If SASL and the framed transport will indeed work together I can 
 update the patch (although I don't have a secured environment to test in).
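
 A minimal sketch of what the flag toggles on the client side (host, port and the 
 boolean are placeholders; the real client reads 
 hive.metastore.thrift.framed.transport.enabled from HiveConf):
 {code}
 import org.apache.thrift.transport.TFramedTransport;
 import org.apache.thrift.transport.TSocket;
 import org.apache.thrift.transport.TTransport;
 import org.apache.thrift.transport.TTransportException;

 public class FramedTransportSketch {
   public static TTransport open(String host, int port, boolean useFramedTransport)
       throws TTransportException {
     TTransport transport = new TSocket(host, port);
     if (useFramedTransport) {
       // both client and server must agree on framing
       transport = new TFramedTransport(transport);
     }
     transport.open();
     return transport;
   }
 }
 {code}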

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2772) make union31.q deterministic

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547893#comment-13547893
 ] 

Hudson commented on HIVE-2772:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2772 [jira] make union31.q deterministic
(Namit Jain via Yongqiang He)

Summary:
https://issues.apache.org/jira/browse/HIVE-2772

HIVE-2772



Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

CC: ashutoshc

Differential Revision: https://reviews.facebook.net/D1557 (Revision 1239286)

 Result = ABORTED
heyongqiang : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1239286
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/union31.q
* /hive/trunk/ql/src/test/results/clientpositive/union31.q.out


 make union31.q deterministic
 

 Key: HIVE-2772
 URL: https://issues.apache.org/jira/browse/HIVE-2772
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2772.D1557.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2773) HiveStorageHandler.configureTableJobProperties() should let the handler know whether it is configuration for input or output

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547894#comment-13547894
 ] 

Hudson commented on HIVE-2773:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2773: HiveStorageHandler.configureTableJobProperites() should let the 
handler know wether it is configuration for input or output (Francis Liu via 
Ashutosh Chauhan) (Revision 1304167)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304167
Files : 
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/DefaultStorageHandler.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStorageHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapredLocalWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PartitionDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java


 HiveStorageHandler.configureTableJobProperties() should let the handler know 
 whether it is configuration for input or output
 ---

 Key: HIVE-2773
 URL: https://issues.apache.org/jira/browse/HIVE-2773
 Project: Hive
  Issue Type: Improvement
Reporter: Francis Liu
Assignee: Francis Liu
  Labels: hcatalog, storage_handler
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2773.D1815.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2773.D2007.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2773.D2415.1.patch, HIVE-2773.patch


 HiveStorageHandler.configureTableJobProperties() is called to allow the 
 storage handler to set up any properties that the underlying 
 inputformat/outputformat/serde may need. But the handler implementation does 
 not know whether it is being called for configuring input or output. This 
 makes it a problem for handlers which set external state. In the case of 
 HCatalog's HBase storageHandler, whenever a write needs to be configured we 
 create a write transaction which needs to be committed or aborted later on. 
 In this case configuring for both input and output each time 
 configureTableJobProperties() is called would not be desirable. This has 
 become an issue since HCatalog is dropping storageDrivers for SerDe and 
 StorageHandler (see HCATALOG-237).
 My proposal is to replace configureTableJobProperties() with two methods:
 configureInputJobProperties()
 configureOutputJobProperties()
 Each method will have the same signature. I took a cursory look at the code and I 
 believe the changes should be straightforward, also given that we are not really 
 changing anything, just splitting responsibility. If the community is fine 
 with this approach I will go ahead and create a patch.
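
 A minimal sketch of the proposed split (an illustrative interface with simplified 
 signatures, not the actual HiveStorageHandler source):
 {code}
 import java.util.Map;

 public interface StorageHandlerJobPropertiesSketch {
   // called when the table is read, i.e. configured as an input source
   void configureInputJobProperties(Map<String, String> jobProperties);

   // called when the table is written, e.g. the point where a write
   // transaction can be started and later committed or aborted
   void configureOutputJobProperties(Map<String, String> jobProperties);
 }
 {code}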

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3713) Metastore: Sporadic unit test failures

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547895#comment-13547895
 ] 

Hudson commented on HIVE-3713:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3713 : Metastore: Sporadic unit test failures (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1410581)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1410581
Files : 
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEventListener.java


 Metastore: Sporadic unit test failures
 --

 Key: HIVE-3713
 URL: https://issues.apache.org/jira/browse/HIVE-3713
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.10.0

 Attachments: HIVE-3713.1-r1409996.txt


 For instance: 
 https://builds.apache.org/job/Hive-trunk-h0.21/1792/testReport/org.apache.hadoop.hive.metastore/
 Found the following issues:
 testListener: Assumes that a certain tmp database hasn't been created yet, 
 but doesn't enforce it
 testSynchronized: Assumes that there's only one database, but doesn't enforce 
 the fact
 testDatabaseLocation: Fails if the user running the tests is root and doesn't 
 clean up after itself.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2662) Add Ant configuration property for dumping classpath of tests

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547896#comment-13547896
 ] 

Hudson commented on HIVE-2662:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2662 [jira] Add Ant configuration property for dumping classpath of 
tests

Summary: HIVE-2662. Add Ant configuration property for dumping classpath of
tests

Test Plan: EMPTY

Reviewers: JIRA, jsichi, ashutoshc

Reviewed By: ashutoshc

CC: ashutoshc

Differential Revision: https://reviews.facebook.net/D903 (Revision 1237510)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1237510
Files : 
* /hive/trunk/build-common.xml


 Add Ant configuration property for dumping classpath of tests
 -

 Key: HIVE-2662
 URL: https://issues.apache.org/jira/browse/HIVE-2662
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2662.D903.1.patch, 
 HIVE-2662.D903.1.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3714) Patch: Hive's ivy internal resolvers need to use sourceforge for sqlline

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547897#comment-13547897
 ] 

Hudson commented on HIVE-3714:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3714 : Patch: Hive's ivy internal resolvers need to use sourceforge 
for sqlline (Gopal V via Ashutosh Chauhan) (Revision 1419118)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1419118
Files : 
* /hive/trunk/ivy/ivysettings.xml


 Patch: Hive's ivy internal resolvers need to use sourceforge for sqlline
 

 Key: HIVE-3714
 URL: https://issues.apache.org/jira/browse/HIVE-3714
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
 Environment: Ubuntu 12.10 (x86_64)
Reporter: Gopal V
Assignee: Gopal V
Priority: Trivial
 Fix For: 0.11.0

 Attachments: hive-ivy.patch


 While building hive with an internal resolver, ivy fails to resolve sqlline, 
 which needs to be picked up from
 http://sourceforge.net/projects/sqlline/files/sqlline/1.0.2/sqlline-1_0_2.jar/download
 ant package -Dresolvers=internal
 fails with
 {code}
 [ivy:resolve]  datanucleus-repo: tried
 [ivy:resolve]   -- artifact sqlline#sqlline#1.0.2;1_0_2!sqlline.jar:
 [ivy:resolve]   
 http://www.datanucleus.org/downloads/maven2/sqlline/sqlline/1_0_2/sqlline-1_0_2.jar
 [ivy:resolve] ::
 [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
 [ivy:resolve] ::
 [ivy:resolve] :: sqlline#sqlline#1.0.2;1_0_2: not found
 [ivy:resolve] ::
 {code}
 The attached patch adds sourceforge to the internal resolver list so that if 
 the default sqlline version (a hadoop snapshot) is used, the build does not 
 fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3817) Adding the name space for the maven task for the maven-publish target.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547898#comment-13547898
 ] 

Hudson commented on HIVE-3817:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3817 : Adding the name space for the maven task for the maven-publish 
target. (Ashish Singh via Ashutosh Chauhan) (Revision 1426522)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1426522
Files : 
* /hive/trunk/build.xml


 Adding the name space for the maven task for the maven-publish target.
 --

 Key: HIVE-3817
 URL: https://issues.apache.org/jira/browse/HIVE-3817
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
Reporter: Ashish Singh
Assignee: Ashish Singh
 Fix For: 0.11.0

 Attachments: HIVE-3817.patch


 maven task for the maven-publish target is missing from the build.xml.
 This is causing maven deploy issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3814) Cannot drop partitions on table when using Oracle metastore

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547899#comment-13547899
 ] 

Hudson commented on HIVE-3814:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3814 : Cannot drop partitions on table when using Oracle metastore 
(Deepesh Khandelwal via Ashutosh Chauhan) (Revision 1423488)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1423488
Files : 
* /hive/trunk/metastore/scripts/upgrade/oracle/012-HIVE-1362.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.10.0.oracle.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/012-HIVE-1362.postgres.sql
* /hive/trunk/metastore/scripts/upgrade/postgres/hive-schema-0.10.0.postgres.sql


 Cannot drop partitions on table when using Oracle metastore
 ---

 Key: HIVE-3814
 URL: https://issues.apache.org/jira/browse/HIVE-3814
 Project: Hive
  Issue Type: Bug
  Components: SQL
Affects Versions: 0.10.0
 Environment: Oracle 11g r2
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal
Priority: Critical
 Fix For: 0.10.0

 Attachments: HIVE-3814.patch


 Create a table with a partition. Try to drop the partition or the table 
 containing the partition. Following error is seen:
 FAILED: Error in metadata: 
 MetaException(message:javax.jdo.JDODataStoreException: Error executing JDOQL 
 query SELECT 
 'org.apache.hadoop.hive.metastore.model.MPartitionColumnStatistics' AS 
 NUCLEUS_TYPE,THIS.AVG_COL_LEN,THIS.COLUMN_NAME,THIS.COLUMN_TYPE,THIS.DB_NAME,THIS.DOUBLE_HIGH_VALUE,THIS.DOUBLE_LOW_VALUE,THIS.LAST_ANALYZED,THIS.LONG_HIGH_VALUE,THIS.LONG_LOW_VALUE,THIS.MAX_COL_LEN,THIS.NUM_DISTINCTS,THIS.NUM_FALSES,THIS.NUM_NULLS,THIS.NUM_TRUES,THIS.PARTITION_NAME,THIS.TABLE_NAME,THIS.CS_ID
  FROM PART_COL_STATS THIS LEFT OUTER JOIN PARTITIONS 
 THIS_PARTITION_PARTITION_NAME ON THIS.PART_ID = 
 THIS_PARTITION_PARTITION_NAME.PART_ID WHERE 
 THIS_PARTITION_PARTITION_NAME.PART_NAME = ? AND THIS.DB_NAME = ? AND 
 THIS.TABLE_NAME = ? : ORA-00904: THIS.PARTITION_NAME: invalid 
 identifier
 The problem here is that the column PARTITION_NAME that the query is 
 referring to in table PART_COL_STATS is non-existent. Looking at the hive 
 schema scripts for mysql & derby, this should be PARTITION_NAME. Postgres 
 also suffers from the same problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3135) add an option in ptest to run on a single machine

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547900#comment-13547900
 ] 

Hudson commented on HIVE-3135:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3135. add an option in ptest to run on a single machine (Namit Jain 
via kevinwilfong) (Revision 1352973)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1352973
Files : 
* /hive/trunk/testutils/ptest/hivetest.py


 add an option in ptest to run on a single machine
 -

 Key: HIVE-3135
 URL: https://issues.apache.org/jira/browse/HIVE-3135
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Affects Versions: 0.10.0
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.10.0


 There is no need for any sudo in that case

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3718) Add check to determine whether partition can be dropped at Semantic Analysis time

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547901#comment-13547901
 ] 

Hudson commented on HIVE-3718:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3718 Add check to determine whether partition can be dropped at
Semantic Analysis time (Pamela Vagata via namit) (Revision 1428704)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1428704
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientnegative/sa_fail_hook3.q
* /hive/trunk/ql/src/test/results/clientnegative/alter_partition_nodrop.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_part_no_drop.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_tbl7.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_tbl8.q.out
* /hive/trunk/ql/src/test/results/clientnegative/sa_fail_hook3.q.out


 Add check to determine whether partition can be dropped at Semantic Analysis 
 time
 -

 Key: HIVE-3718
 URL: https://issues.apache.org/jira/browse/HIVE-3718
 Project: Hive
  Issue Type: Task
  Components: CLI
Reporter: Pamela Vagata
Assignee: Pamela Vagata
Priority: Minor
 Fix For: 0.11.0

 Attachments: HIVE-3718.1.patch.txt, HIVE-3718.2.patch.txt, 
 HIVE-3718.3.patch.txt, hive.3718.4.patch, HIVE-3718.5.patch.txt, 
 HIVE-3718.6.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2249) When creating constant expression for numbers, try to infer type from another comparison operand, instead of trying to use integer first, and then long and double

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547903#comment-13547903
 ] 

Hudson commented on HIVE-2249:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2249 When creating constant expression for numbers, try to infer type 
from another comparison operand, instead of trying to use integer first, and 
then long and double (Zhiqiu Kong via Siying Dong) (Revision 1238175)

 Result = ABORTED
sdong : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1238175
Files : 
* /hive/trunk/contrib/src/test/results/clientpositive/dboutput.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes4.q.out
* /hive/trunk/data/files/infer_const_type.txt
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/queries/clientpositive/infer_const_type.q
* /hive/trunk/ql/src/test/queries/clientpositive/insert1_overwrite_partitions.q
* /hive/trunk/ql/src/test/queries/clientpositive/insert2_overwrite_partitions.q
* /hive/trunk/ql/src/test/queries/clientpositive/ppr_pushdown.q
* /hive/trunk/ql/src/test/results/clientpositive/auto_join0.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join16.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join21.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join23.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join27.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join28.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join29.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/cast1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/cluster.q.out
* /hive/trunk/ql/src/test/results/clientpositive/create_view.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_multi_single_reducer.q.out
* /hive/trunk/ql/src/test/results/clientpositive/having.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_empty.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_file_format.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_mult_tables.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/index_auto_mult_tables_compact.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_multiple.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_partitioned.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_self_join.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_unused.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_update.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_bitmap3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_bitmap_auto.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/index_bitmap_auto_partitioned.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_bitmap_compression.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_compression.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_stale.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_stale_partitioned.q.out
* /hive/trunk/ql/src/test/results/clientpositive/infer_const_type.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input11_limit.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input14_limit.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input18.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input1_limit.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input2_limit.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input42.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part7.q.out
* 

[jira] [Commented] (HIVE-2549) Support standard cross join syntax

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547904#comment-13547904
 ] 

Hudson commented on HIVE-2549:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2549 Support standard cross join syntax. Navis Ryu (via egc) (Revision 
1357875)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1357875
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/cross_join.q
* /hive/trunk/ql/src/test/results/clientpositive/cross_join.q.out


 Support standard cross join syntax
 --

 Key: HIVE-2549
 URL: https://issues.apache.org/jira/browse/HIVE-2549
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, SQL
Affects Versions: 0.10.0
Reporter: David Phillips
Assignee: Navis
 Fix For: 0.10.0

 Attachments: hive-2549-1.txt


 Hive should support standard (ANSI) cross join syntax:
 {code}
 SELECT a.*, b.*
 FROM a
 CROSS JOIN b
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2544) Nullpointer on registering udfs.

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547906#comment-13547906
 ] 

Hudson commented on HIVE-2544:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2544 Nullpointer on registering udfs.
(Edward Capriolo via namit) (Revision 1362374)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1362374
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java


 Nullpointer on registering udfs.
 

 Key: HIVE-2544
 URL: https://issues.apache.org/jira/browse/HIVE-2544
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
Reporter: Bennie Schut
Assignee: Edward Capriolo
Priority: Blocker
 Attachments: HIVE-2544.1.patch.txt, HIVE-2544.patch.2.txt


 Currently the Function registry can throw NullPointers when multiple threads 
 are trying to register the same function. The normal put() will replace the 
 existing registered function object even if it's exactly the same function.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2646) Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547907#comment-13547907
 ] 

Hudson commented on HIVE-2646:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2646 : Fix for LANG setting while running tests. (Thomas Weise via 
Ashutosh Chauhan) (Revision 1335771)
HIVE-2646. Hive Ivy dependencies on Hadoop should depend on jars directly, not 
tarballs (Andrew Bayer and Thomas Weise) (Revision 1329381)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1335771
Files : 
* /hive/trunk/build-common.xml

cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1329381
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/build.xml
* /hive/trunk/builtins/build.xml
* /hive/trunk/builtins/ivy.xml
* /hive/trunk/cli/ivy.xml
* /hive/trunk/common/ivy.xml
* /hive/trunk/contrib/build.xml
* /hive/trunk/contrib/ivy.xml
* /hive/trunk/hbase-handler/build.xml
* /hive/trunk/hbase-handler/ivy.xml
* /hive/trunk/hwi/build.xml
* /hive/trunk/hwi/ivy.xml
* /hive/trunk/ivy/common-configurations.xml
* /hive/trunk/ivy/ivysettings.xml
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/jdbc/build.xml
* /hive/trunk/jdbc/ivy.xml
* /hive/trunk/metastore/ivy.xml
* /hive/trunk/pdk/build.xml
* /hive/trunk/pdk/ivy.xml
* /hive/trunk/pdk/scripts/build-plugin.xml
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/ivy.xml
* /hive/trunk/serde/ivy.xml
* /hive/trunk/service/build.xml
* /hive/trunk/service/ivy.xml
* /hive/trunk/shims/build.xml
* /hive/trunk/shims/ivy.xml
* /hive/trunk/testutils/hadoop


 Hive Ivy dependencies on Hadoop should depend on jars directly, not tarballs
 

 Key: HIVE-2646
 URL: https://issues.apache.org/jira/browse/HIVE-2646
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.8.0
Reporter: Andrew Bayer
Assignee: Andrew Bayer
Priority: Critical
 Fix For: 0.10.0, 0.9.1

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.10.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.11.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.12.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.13.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.14.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.15.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.4.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.5.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.6.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.7.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.8.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2133.9.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2883.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2883.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2646.D2883.3.patch, HIVE-2646.diff.txt, 
 HIVE-2646-fixtests.patch, HIVE-2646-fixtests.txt, HIVE-2646_LANG.patch


 The current Hive Ivy dependency logic for its Hadoop dependencies is 
 problematic: it depends on the tarball and extracts the jars from it, rather 
 than depending on the jars directly. It would be great if this were fixed so 
 that the jar dependencies are defined directly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2779) Improve hooks run in Driver

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547908#comment-13547908
 ] 

Hudson commented on HIVE-2779:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2779 Improve Hooks run in Driver
(Kevin Wilfong via namit) (Revision 1241729)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1241729
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveUtils.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveSemanticAnalyzerHook.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveSemanticAnalyzerHookContext.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveSemanticAnalyzerHookContextImpl.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/hooks/VerifyHooksRunInOrder.java
* /hive/trunk/ql/src/test/queries/clientpositive/hook_order.q
* /hive/trunk/ql/src/test/results/clientnegative/bad_exec_hooks.q.out
* /hive/trunk/ql/src/test/results/clientpositive/hook_order.q.out


 Improve hooks run in Driver
 ---

 Key: HIVE-2779
 URL: https://issues.apache.org/jira/browse/HIVE-2779
 Project: Hive
  Issue Type: Improvement
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2779.D1599.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2779.D1599.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2779.D1599.3.patch


 There are some small improvements that can be made to the hooks that are run 
 in the Driver:
 1) The code to get hooks has clearly just been copy-pasted for each of the 
 Pre/Post/OnFailure/SemanticAnalyzer hooks.  This code should be consolidated 
 into a single method; a minimal sketch of such a helper follows below.
 2) There is a lot more information available to SemanticAnalyzer hooks that 
 run after semantic analysis than to those that run before, such as inputs and 
 outputs.  We should make some of this information available to those hooks, 
 preferably through HiveSemanticAnalyzerHookContext, so that existing hooks 
 aren't broken.
 3) Currently, possibly unintentionally, hooks are initialized and run in the 
 order they appear in the comma-separated list that is the value of the 
 configuration variable.  This is a useful property; we should add comments 
 indicating it is intentional and add a unit test to enforce it.
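 A minimal sketch of such a consolidated helper, assuming all hook types share 
 a common marker interface (the class and method names below are invented for 
 illustration; this is not the actual Driver code):
 {code}
 import java.util.ArrayList;
 import java.util.List;

 // Hypothetical sketch: one generic helper replaces the copy-pasted
 // per-hook-type loading blocks, and hooks are instantiated in exactly the
 // order they appear in the comma-separated configuration value, which a
 // unit test can then assert on.
 public class HookLoader {

   /** Stand-in for a common hook marker interface. */
   public interface Hook {
   }

   public static <T extends Hook> List<T> loadHooks(String commaSeparatedClasses,
       Class<T> hookType) throws Exception {
     List<T> hooks = new ArrayList<T>();
     if (commaSeparatedClasses == null || commaSeparatedClasses.trim().isEmpty()) {
       return hooks;
     }
     for (String className : commaSeparatedClasses.split(",")) {
       // Instantiation order matches configuration order.
       hooks.add(hookType.cast(Class.forName(className.trim()).newInstance()));
     }
     return hooks;
   }
 }
 {code}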

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2778) Fail on table sampling

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547909#comment-13547909
 ] 

Hudson commented on HIVE-2778:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2778 [jira] Fail on table sampling
(Navis Ryu via Carl Steinbach)

Summary:
HIVE-2778 fix NPE on table sampling

Trying table sampling on any non-empty table throws NPE. This does not occur by
test on mini-MR.

{noformat}
select count(*) from emp tablesample (0.1 percent);
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
java.lang.NullPointerException
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.sampleSplits(CombineHiveInputFormat.java:450)
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:403)
    at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:971)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:963)
    at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:807)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:432)
    at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
Job Submission failed with exception 'java.lang.NullPointerException(null)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
{noformat}

Test Plan: EMPTY

Reviewers: JIRA, cwsteinbach

Reviewed By: cwsteinbach

Differential Revision: https://reviews.facebook.net/D1593 (Revision 1301310)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1301310
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CombineHiveInputFormat.java


 Fail on table sampling 
 ---

 Key: HIVE-2778
 URL: https://issues.apache.org/jira/browse/HIVE-2778
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.9.0
 Environment: Reproduced only on hadoop-0.20.2-CDH3u1, work fine on 
 hadoop-0.20.2
Reporter: Navis
Assignee: Navis
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2778.D1593.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2778.D1593.2.patch, HIVE-2778.D1593.2.patch


 Trying table sampling on any non-empty table throws NPE. This does not occur 
 by test on mini-MR.
 {noformat}
 select count(*) from emp tablesample (0.1 percent); 
 Total MapReduce jobs = 1
 Launching Job 1 out of 1
 Number of reduce tasks determined at compile time: 1
 In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
 In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
 In order to set a constant number of reducers:
   set 

[jira] [Commented] (HIVE-3724) Metastore tests use hardcoded ports

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547910#comment-13547910
 ] 

Hudson commented on HIVE-3724:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3724 : Metastore tests use hardcoded ports (Kevin Wilfong via Ashutosh 
Chauhan) (Revision 1415917)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1415917
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEndFunctionListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEventListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStoreIpAddress.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestSetUGIOnOnlyClient.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestSetUGIOnOnlyServer.java


 Metastore tests use hardcoded ports
 ---

 Key: HIVE-3724
 URL: https://issues.apache.org/jira/browse/HIVE-3724
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3724.1.patch.txt, HIVE-3724.2.patch.txt, 
 hive-3724.svn-0.10.patch


 Several of the metastore tests use hardcoded ports for remote metastore 
 Thrift servers.  This is causing transient failures in Jenkins, e.g. 
 https://builds.apache.org/job/Hive-trunk-h0.21/1804/
 A few tests already dynamically determine free ports, and this logic can be 
 shared.
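 A minimal sketch of the shared free-port idea, using the usual bind-to-port-0 
 trick (hypothetical helper, not necessarily the code that was committed):
 {code}
 import java.io.IOException;
 import java.net.ServerSocket;

 // Hypothetical helper: bind a ServerSocket to port 0 so the OS assigns an
 // unused port, read the port number back, then release the socket so the
 // test's Thrift metastore server can bind to it.
 public final class FreePortFinder {

   private FreePortFinder() {
   }

   public static int findFreePort() throws IOException {
     ServerSocket socket = new ServerSocket(0);
     try {
       return socket.getLocalPort();
     } finally {
       socket.close();
     }
   }
 }
 {code}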

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3723) Hive Driver leaks ZooKeeper connections

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547911#comment-13547911
 ] 

Hudson commented on HIVE-3723:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3723 : Hive Driver leaks ZooKeeper connections (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1414278)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1414278
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java


 Hive Driver leaks ZooKeeper connections
 ---

 Key: HIVE-3723
 URL: https://issues.apache.org/jira/browse/HIVE-3723
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: 0.10.0

 Attachments: HIVE-3723.1-r1411423.patch


 In certain error cases (e.g. the statement fails to compile, or there are 
 semantic errors) the Hive Driver leaks ZooKeeper connections.
 This can be seen in the TestNegativeCliDriver test which accumulates a large 
 number of open file handles and fails if the max allowed number of file 
 handles isn't at least 2048.
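 A hedged illustration of the general fix pattern, with invented names and a 
 generic Closeable standing in for the ZooKeeper-backed lock manager (this is 
 not the actual Driver change):
 {code}
 import java.io.Closeable;
 import java.io.IOException;

 // Hypothetical sketch: whatever handle backs the lock manager (a ZooKeeper
 // connection in this case) must be released on every exit path, including
 // compile-time and semantic-analysis failures, or each failed statement
 // leaks a connection.
 public class ReleaseOnFailureExample {

   public static void runStatement(Closeable lockManagerConnection,
       Runnable compileAndExecute) throws IOException {
     try {
       compileAndExecute.run(); // may throw on compile or semantic errors
     } finally {
       // Closed even when compilation fails, so no connection is leaked.
       lockManagerConnection.close();
     }
   }
 }
 {code}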

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2782) New BINARY type produces unexpected results with supported UDFS when using MapReduce2

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547912#comment-13547912
 ] 

Hudson commented on HIVE-2782:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2782 [jira] New BINARY type produces unexpected results with supported 
UDFS
when using MapReduce2

Summary:
HIVE-2782. Make ba_table_udfs.q deterministic


Test Plan: EMPTY

Reviewers: JIRA, ashutoshc

Reviewed By: ashutoshc

CC: ashutoshc

Differential Revision: https://reviews.facebook.net/D1653 (Revision 1244314)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1244314
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/ba_table_udfs.q
* /hive/trunk/ql/src/test/results/clientpositive/ba_table_udfs.q.out


 New BINARY type produces unexpected results with supported UDFS when using 
 MapReduce2
 -

 Key: HIVE-2782
 URL: https://issues.apache.org/jira/browse/HIVE-2782
 Project: Hive
  Issue Type: Bug
Reporter: Zhenxiao Luo
Assignee: Carl Steinbach
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2782.D1653.1.patch, 
 HIVE-2782.D1653.1.patch


 When using MapReduce2 for Hive
 ba_table_udfs is failing with unexpected output:
 [junit] Begin query: ba_table_udfs.q
 [junit] 12/01/23 13:32:28 WARN conf.Configuration: mapred.system.dir is 
 deprecated. Instead, use mapreduce.jobtracker.system.dir
 [junit] 12/01/23 13:32:28 WARN conf.Configuration: mapred.local.dir is 
 deprecated. Instead, use mapreduce.cluster.local.dir
 [junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I 
 lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime 
 -I Location -I LOCATION ' -I transient_lastDdlTime -I last_modified_ -I 
 java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I 
 Caused by: -I LOCK_QUERYID: -I LOCK_TIME: -I grantTime -I [.][.][.] [0-9]* 
 more -I job_[0-9]*_[0-9]* -I USING 'java -cp 
 /home/cloudera/Code/hive/build/ql/test/logs/clientpositive/ba_table_udfs.q.out
  
 /home/cloudera/Code/hive/ql/src/test/results/clientpositive/ba_table_udfs.q.out
 [junit] 20,26c20,26
 [junit] < 2  10  val_10  1
 [junit] < 3  164  val_164  1
 [junit] < 3  150  val_150  1
 [junit] < 2  18  val_18  1
 [junit] < 3  177  val_177  1
 [junit] < 2  12  val_12  1
 [junit] < 2  11  val_11  1
 [junit] ---
 [junit] > 3  120  val_120  1
 [junit] > 3  192  val_192  1
 [junit] > 3  119  val_119  1
 [junit] > 3  187  val_187  1
 [junit] > 3  176  val_176  1
 [junit] > 3  199  val_199  1
 [junit] > 3  118  val_118  1
 [junit] Exception: Client execution results failed with error code = 1
 [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false 
 to get more logs.
 [junit] junit.framework.AssertionFailedError: Client execution results failed 
 with error code = 1
 [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false 
 to get more logs.
 [junit] at junit.framework.Assert.fail(Assert.java:50)
 [junit] at 
 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ba_table_udfs(TestCliDriver.java:129)
 [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [junit] at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 [junit] at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 [junit] at java.lang.reflect.Method.invoke(Method.java:616)
 [junit] at junit.framework.TestCase.runTest(TestCase.java:168)
 [junit] at junit.framework.TestCase.runBare(TestCase.java:134)
 [junit] at junit.framework.TestResult$1.protect(TestResult.java:110)
 [junit] at junit.framework.TestResult.runProtected(TestResult.java:128)
 [junit] at junit.framework.TestResult.run(TestResult.java:113)
 [junit] at junit.framework.TestCase.run(TestCase.java:124)
 [junit] at junit.framework.TestSuite.runTest(TestSuite.java:243)
 [junit] at junit.framework.TestSuite.run(TestSuite.java:238)
 [junit] at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
 [junit] at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
 [junit] at 
 org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
 [junit] See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false 
 to get more logs.)
 [junit] Cleaning up TestCliDriver
 [junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 10.751 sec
 [junit] Test org.apache.hadoop.hive.cli.TestCliDriver FAILED
 [for] /home/cloudera/Code/hive/ql/build.xml: The following error occurred 
 while executing this line:
 [for] 

[jira] [Commented] (HIVE-3722) Create index fails on CLI using remote metastore

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547913#comment-13547913
 ] 

Hudson commented on HIVE-3722:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3722 Create index fails on CLI using remote metastore
(Kevin Wilfong via namit) (Revision 1412415)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1412415
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHive.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveRemote.java


 Create index fails on CLI using remote metastore
 

 Key: HIVE-3722
 URL: https://issues.apache.org/jira/browse/HIVE-3722
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.10.0

 Attachments: HIVE-3722.1.patch.txt


 If the CLI uses a remote metastore and the user attempts to create an index 
 without a comment, it will fail with an NPE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3721) ALTER TABLE ADD PARTS should check for valid partition spec and throw a SemanticException if part spec is not valid

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547914#comment-13547914
 ] 

Hudson commented on HIVE-3721:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3721 ALTER TABLE ADD PARTS should check for valid partition spec and 
throw a SemanticException
if part spec is not valid (Pamela Vagata via namit) (Revision 1412432)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1412432
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientnegative/alter_table_add_partition.q
* /hive/trunk/ql/src/test/results/clientnegative/alter_table_add_partition.q.out


 ALTER TABLE ADD PARTS should check for valid partition spec and throw a 
 SemanticException if part spec is not valid
 ---

 Key: HIVE-3721
 URL: https://issues.apache.org/jira/browse/HIVE-3721
 Project: Hive
  Issue Type: Task
Reporter: Pamela Vagata
Assignee: Pamela Vagata
Priority: Minor
 Fix For: 0.11.0

 Attachments: HIVE-3721.1.patch.txt, HIVE-3721.2.patch.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3728) make optimizing multi-group by configurable

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547915#comment-13547915
 ] 

Hudson commented on HIVE-3728:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3728. make optimizing multi-group by configurable. (njain via 
kevinwilfong) (Revision 1424292)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1424292
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/groupby_mutli_insert_common_distinct.q
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_mutli_insert_common_distinct.q.out


 make optimizing multi-group by configurable
 ---

 Key: HIVE-3728
 URL: https://issues.apache.org/jira/browse/HIVE-3728
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Fix For: 0.11.0

 Attachments: hive.3728.2.patch, hive.3728.3.patch


 This was done as part of https://issues.apache.org/jira/browse/HIVE-609.
 This should be configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2781) HBaseSerDe should allow users to specify the timestamp passed to Puts

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547916#comment-13547916
 ] 

Hudson commented on HIVE-2781:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2781: HBaseSerDe should allow users to specify the timestamp passed to 
Puts (toffer via hashutosh) (Revision 1293616)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1293616
Files : 
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java
* 
/hive/trunk/hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseSerDe.java


 HBaseSerDe should allow users to specify the timestamp passed to Puts 
 --

 Key: HIVE-2781
 URL: https://issues.apache.org/jira/browse/HIVE-2781
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.9.0
Reporter: Francis Liu
Assignee: Francis Liu
 Fix For: 0.9.0

 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2781.D1863.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2781.D1863.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2781.D1881.1.patch, HIVE-2781.D1881.1.patch


 Users may want to specify the timestamp used for Put requests to HBase, thus 
 enabling them to have the same timestamp for a single batch of writes, which 
 would be useful for a number of things. HCatalog's HBase storageHandler 
 implementation makes use of this feature to provide users with snapshot 
 isolation and write transactions. My proposal is to add the timestamp option 
 as a final static member:
 public static final String HBASE_PUT_TIMESTAMP = "hbase.put_timestamp"
 and passing this value to all the Puts created by serialize().
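 A hypothetical illustration of the proposal, using the HBase 0.92/0.94-era 
 client API with a made-up column family, qualifier and value (this is not the 
 HBaseSerDe patch itself):
 {code}
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.util.Bytes;

 // Hypothetical sketch: when a put timestamp is configured for the table,
 // every Put created for a batch of writes carries that same explicit
 // timestamp.
 public class TimestampedPutExample {

   public static Put makePut(byte[] rowKey, long configuredPutTimestamp) {
     Put put = new Put(rowKey);
     // add(family, qualifier, timestamp, value)
     put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), configuredPutTimestamp,
         Bytes.toBytes("value"));
     return put;
   }
 }
 {code}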

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3829) Hive CLI needs UNSET TBLPROPERTY command

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547917#comment-13547917
 ] 

Hudson commented on HIVE-3829:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3829 Hive CLI needs UNSET TBLPROPERTY command
(Zhenxiao Luo via namit) (Revision 1425604)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1425604
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzerFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/AlterTableDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/HiveOperation.java
* /hive/trunk/ql/src/test/queries/clientnegative/set_table_property.q
* /hive/trunk/ql/src/test/queries/clientnegative/unset_table_property.q
* /hive/trunk/ql/src/test/queries/clientnegative/unset_view_property.q
* /hive/trunk/ql/src/test/queries/clientpositive/unset_table_view_property.q
* /hive/trunk/ql/src/test/results/clientnegative/set_table_property.q.out
* /hive/trunk/ql/src/test/results/clientnegative/unset_table_property.q.out
* /hive/trunk/ql/src/test/results/clientnegative/unset_view_property.q.out
* /hive/trunk/ql/src/test/results/clientpositive/unset_table_view_property.q.out


 Hive CLI needs UNSET TBLPROPERTY command
 

 Key: HIVE-3829
 URL: https://issues.apache.org/jira/browse/HIVE-3829
 Project: Hive
  Issue Type: Bug
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.11.0

 Attachments: HIVE-3829.1.patch.txt, HIVE-3829.2.patch.txt, 
 HIVE-3829.3.patch.txt, HIVE-3829.4.patch.txt, HIVE-3829.5.patch.txt


 The Hive CLI currently supports
 ALTER TABLE table SET TBLPROPERTIES ('key1' = 'value1', 'key2' = 'value2', 
 ...);
 to add or change the value of table properties.
 It would be really useful if Hive also supported
 ALTER TABLE table UNSET TBLPROPERTIES ('key1', 'key2', ...);
 which would remove table properties.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3828) insert overwrite fails with stored-as-dir in cluster

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547918#comment-13547918
 ] 

Hudson commented on HIVE-3828:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3828 insert overwrite fails with stored-as-dir in cluster
(Gang Tim Liu via namit) (Revision 1425398)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1425398
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/list_bucket_dml_10.q
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_10.q.out


 insert overwrite fails with stored-as-dir in cluster
 

 Key: HIVE-3828
 URL: https://issues.apache.org/jira/browse/HIVE-3828
 Project: Hive
  Issue Type: Bug
  Components: Import/Export
Reporter: Gang Tim Liu
Assignee: Gang Tim Liu
 Fix For: 0.11.0

 Attachments: HIVE-3828.patch.1


 The following query works fine in the Hive TestCliDriver test suite but not 
 in minimr because a different Hadoop file system is used.
 The error is
 {code}
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
 output from: .../_task_tmp.-ext-10002/key=103/_tmp.00_0 to: 
 .../_tmp.-ext-10002/key=103/00_0
 {code}
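 As a hedged illustration of the general pitfall behind rename errors of this 
 kind, the sketch below resolves the FileSystem from the path itself instead of 
 assuming the default one; the class and method names are invented, and this is 
 not the actual FileSinkOperator fix:
 {code}
 import java.io.IOException;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // Hypothetical sketch: the FileSystem used for the rename is resolved from
 // the path, so the move also works on clusters where the path's file system
 // is not the default one.
 public class RenameOnPathFileSystem {

   public static void rename(Configuration conf, Path src, Path dst)
       throws IOException {
     FileSystem fs = src.getFileSystem(conf); // resolve the FS from the path
     if (!fs.rename(src, dst)) {
       throw new IOException("Unable to rename output from: " + src
           + " to: " + dst);
     }
   }
 }
 {code}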

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3126) Generate build the velocity based Hive tests on windows by fixing the path issues

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547919#comment-13547919
 ] 

Hudson commented on HIVE-3126:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3126 : Generate & build the velocity based Hive tests on windows by 
fixing the path issues (Kanna Karanam via Ashutosh Chauhan) (Revision 1365467)

 Result = ABORTED
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1365467
Files : 
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/QTestGenTask.java
* /hive/trunk/build-common.xml
* /hive/trunk/build.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/contrib/src/test/org/apache/hadoop/hive/contrib/mr/TestGenericMR.java
* /hive/trunk/data/conf/hive-site.xml
* /hive/trunk/hbase-handler/src/test/templates/TestHBaseCliDriver.vm
* /hive/trunk/hbase-handler/src/test/templates/TestHBaseNegativeCliDriver.vm
* /hive/trunk/odbc/build.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Context.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/templates/TestCliDriver.vm
* /hive/trunk/ql/src/test/templates/TestNegativeCliDriver.vm
* /hive/trunk/ql/src/test/templates/TestParse.vm
* /hive/trunk/ql/src/test/templates/TestParseNegative.vm
* /hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyFileSystem.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/fs/ProxyLocalFileSystem.java
* /hive/trunk/testutils/hadoop.cmd


 Generate & build the velocity based Hive tests on windows by fixing the path 
 issues
 ---

 Key: HIVE-3126
 URL: https://issues.apache.org/jira/browse/HIVE-3126
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0, 0.10.0, 0.9.1
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows, test
 Fix For: 0.10.0

 Attachments: HIVE-3126.10.patch.txt, HIVE-3126.1.patch.txt, 
 HIVE-3126.2.patch.txt, HIVE-3126.3.patch.txt, HIVE-3126.4.patch.txt, 
 HIVE-3126.5.patch.txt, HIVE-3126.6.patch.txt, HIVE-3126.7.patch.txt, 
 HIVE-3126.8.patch.txt, HIVE-3126.9.patch.txt


 1) Escape the backslash in the canonical path if the unit test runs on Windows.
 2) Diff comparison:
  a. Ignore the extra spacing on Windows.
  b. Ignore the different line endings on Windows & Unix.
  c. Convert the file paths to Windows-specific form (handle spaces etc.).
 3) Set the right file scheme & class path separators while invoking the junit 
 task from 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3826) Rollbacks and retries of drops cause org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database row)

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547920#comment-13547920
 ] 

Hudson commented on HIVE-3826:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3826 Rollbacks and retries of drops cause 
org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database row)
(Kevin Wilfong via namit) (Revision 1425247)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1425247
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


 Rollbacks and retries of drops cause 
 org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database 
 row)
 -

 Key: HIVE-3826
 URL: https://issues.apache.org/jira/browse/HIVE-3826
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.11.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.11.0

 Attachments: HIVE-3826.1.patch.txt


 I'm not sure if this is the only cause of the exception 
 org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database 
 row) from the metastore, but one cause seems to be related to a drop command 
 failing, and being retried by the client.
 Based on focusing on a single thread in the metastore with DEBUG level 
 logging, I was seeing the objects that were intended to be dropped remaining 
 in the PersistenceManager cache even after a rollback.  The steps seemed to 
 be as follows:
 1) First attempt to drop the table, the table is pulled into the 
 PersistenceManager cache for the purposes of dropping
 2) The drop fails, e.g. due to a lock wait timeout on the SQL backend, this 
 causes a rollback of the transaction
 3) The drop is retried using a different thread on the metastore Thrift 
 server or a different server and succeeds
 4) Back on the original thread of the original Thrift server someone tries to 
 perform some write operation which produces a commit.  This causes those 
 detached objects related to the dropped table to attempt to reattach, causing 
 JDO to query the SQL backend for those objects which it can't find.  This 
 causes the exception.
 I was able to reproduce this regularly using the following sequence of 
 commands:
 Hive client 1 (Hive1): connected to a metastore Thrift server running a 
 single thread, I hard coded a RuntimeException into the code to drop a table 
 in the ObjectStore, specifically right before the commit in 
 preDropStorageDescriptor, to induce a rollback.  I also turned off all 
 retries at all layers of the metastore.
 Hive client 2 (Hive2): connected to a separate metastore Thrift server 
 running with standard configs and code
 1: On Hive1, CREATE TABLE t1 (c STRING);
 2: On Hive1, DROP TABLE t1; // This failed due to the hard coded exception
 3: On Hive2, DROP TABLE t1; // Succeeds
 4: On Hive1, CREATE DATABASE d1; // This database already existed, I'm not 
 sure why this was necessary, but it didn't work without it, it seemed to have 
 an effect on the order objects were committed in the next step
 5: On Hive1, CREATE DATABASE d2; // This database didn't exist, it would fail 
 with the NucleusObjectNotFoundException
 The object that would cause the exception varied, I saw the MTable, the 
 MSerDeInfo, and MTablePrivilege from the table that attempted to be dropped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3125) sort_array doesn't work with LazyPrimitive

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547921#comment-13547921
 ] 

Hudson commented on HIVE-3125:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3125 sort_array does not work with LazyPrimitive Philip Tromans (via 
egc) (Revision 1353203)

 Result = ABORTED
ecapriolo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1353203
Files : 
* /hive/trunk/data/files/primitive_type_arrays.txt
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFSortArray.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_sort_array.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_sort_array.q.out


 sort_array doesn't work with LazyPrimitive
 --

 Key: HIVE-3125
 URL: https://issues.apache.org/jira/browse/HIVE-3125
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 0.9.0
Reporter: Philip Tromans
Assignee: Philip Tromans
 Fix For: 0.10.0

 Attachments: HIVE-3125.1.patch.txt, HIVE-3125.2.patch.txt


 The sort_array function doesn't work against data that's actually come out of 
 a table. The test suite only covers constants given in the query.
 If you try to use sort_array on an array from a table, you get a 
 ClassCastException because LazyX cannot be cast to Comparable.
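 A generic, hypothetical illustration of this bug class (not the actual 
 GenericUDFSortArray code): elements that are not Comparable have to be 
 converted to a standard representation before sorting instead of being cast.
 {code}
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;

 // Hypothetical sketch with an invented stand-in for a lazy wrapper type.
 public class LazySortExample {

   /** Stand-in for a lazily deserialized primitive that is not Comparable. */
   static final class LazyIntStandIn {
     final int value;

     LazyIntStandIn(int value) {
       this.value = value;
     }
   }

   public static List<Integer> sortAfterConversion(List<LazyIntStandIn> lazyValues) {
     List<Integer> standard = new ArrayList<Integer>();
     for (LazyIntStandIn lazy : lazyValues) {
       standard.add(lazy.value); // convert first instead of casting to Comparable
     }
     Collections.sort(standard);
     return standard;
   }
 }
 {code}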

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3729) Error in groupSetExpression rule in Hive grammar

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13547922#comment-13547922
 ] 

Hudson commented on HIVE-3729:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3729 Error in groupSetExpression rule in Hive grammar
(Harish Butani via namit) (Revision 1414608)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1414608
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g


 Error in groupSetExpression rule in Hive grammar
 

 Key: HIVE-3729
 URL: https://issues.apache.org/jira/browse/HIVE-3729
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
 Environment: All
Reporter: Harish Butani
Assignee: Harish Butani
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3729.1.patch.txt

   Original Estimate: 5m
  Remaining Estimate: 5m

 Here is the error:
 Hive.g:1902:38: reference to rewrite element groupByExpression without 
 reference on left of ->

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3123) Hadoop20Shim. CombineFileRecordReader does not report progress within files

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547923#comment-13547923
 ] 

Hudson commented on HIVE-3123:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3123. Hadoop20Shim. CombineFileRecordReader does not report progress 
within files (Dmytro Molkov via kevinwilfong) (Revision 1350054)

 Result = ABORTED
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1350054
Files : 
* 
/hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java


 Hadoop20Shim. CombineFileRecordReader does not report progress within files
 ---

 Key: HIVE-3123
 URL: https://issues.apache.org/jira/browse/HIVE-3123
 Project: Hive
  Issue Type: Bug
  Components: Shims
Reporter: Dmytro Molkov
Priority: Trivial
 Fix For: 0.10.0

 Attachments: shim.patch


 When using CombineHiveInputFormat, the progress of the task only changes as 
 each part of the split is completed, not within a part. This patch fixes the 
 issue so that the progress of the task is reported continuously.
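 A minimal sketch of what continuous progress reporting amounts to (invented 
 class name, not the actual Hadoop20Shims code): blend the progress of the 
 chunk currently being read into the overall fraction instead of only counting 
 finished chunks.
 {code}
 // Hypothetical sketch of continuous progress for a combined split.
 public class CombinedSplitProgress {

   public static float progress(int finishedChunks, float currentChunkProgress,
       int totalChunks) {
     if (totalChunks <= 0) {
       return 1.0f;
     }
     // Moves smoothly within a chunk rather than jumping at chunk boundaries.
     return (finishedChunks + currentChunkProgress) / totalChunks;
   }
 }
 {code}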

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3120) make copyLocal work for parallel tests

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547924#comment-13547924
 ] 

Hudson commented on HIVE-3120:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-3120 make copyLocal work for parallel tests
(Shuai Ding via namit) (Revision 1349548)

 Result = ABORTED
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1349548
Files : 
* /hive/trunk/testutils/ptest/hivetest.py


 make copyLocal work for parallel tests
 --

 Key: HIVE-3120
 URL: https://issues.apache.org/jira/browse/HIVE-3120
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Shuai Ding
 Fix For: 0.10.0


 It would be very useful if I could test a local patch using the
 parallel test framework.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-1643) support range scans and non-key columns in HBase filter pushdown

2013-01-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13547925#comment-13547925
 ] 

Hudson commented on HIVE-1643:
--

Integrated in Hive-trunk-hadoop2 #54 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/54/])
HIVE-2771 [jira] Add support for filter pushdown for key ranges in hbase for
keys of type string
(Ashutosh Chauhan via Carl Steinbach)

Summary:
https://issues.apache.org/jira/browse/HIVE-2771

This patch adds support for pushing key range scans down to HBase for keys of type
string. With this patch, filter pushdowns of the following types are supported:
a) Point lookups for keys of any type.
b) Range scans for keys of type string.

Test Plan:
Added hbase_ppd_key_range.q which is modeled after hbase_pushdown.q

This is a subtask of HIVE-1643

Test Plan: EMPTY

Reviewers: JIRA, jsichi, cwsteinbach

Reviewed By: cwsteinbach

CC: jsichi, ashutoshc

Differential Revision: https://reviews.facebook.net/D1551 (Revision 1297675)

 Result = ABORTED
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1297675
Files : 
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
* /hive/trunk/hbase-handler/src/test/queries/hbase_ppd_key_range.q
* /hive/trunk/hbase-handler/src/test/results/hbase_ppd_key_range.q.out
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java


 support range scans and non-key columns in HBase filter pushdown
 

 Key: HIVE-1643
 URL: https://issues.apache.org/jira/browse/HIVE-1643
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: bharath v
  Labels: patch
 Attachments: hbase_handler.patch, Hive-1643.2.patch, HIVE-1643.patch


 HIVE-1226 added support for WHERE rowkey=3.  We would like to support WHERE 
 rowkey BETWEEN 10 and 20, as well as predicates on non-rowkeys (plus 
 conjunctions etc).  Non-rowkey conditions can't be used to filter out entire 
 ranges, but they can be used to push the per-row filter processing as far 
 down as possible.
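 A hypothetical sketch of what the range pushdown amounts to on the HBase side, 
 using the 0.92/0.94-era Scan API and invented key values (this is not the 
 actual HiveHBaseTableInputFormat code):
 {code}
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;

 // Hypothetical sketch: a predicate such as rowkey >= 'k10' AND rowkey < 'k20'
 // on a string row key becomes start/stop rows on the Scan, so whole ranges
 // are skipped on the server side.
 public class KeyRangeScanExample {

   public static Scan buildScan(String startKeyInclusive, String stopKeyExclusive) {
     Scan scan = new Scan();
     scan.setStartRow(Bytes.toBytes(startKeyInclusive)); // inclusive lower bound
     scan.setStopRow(Bytes.toBytes(stopKeyExclusive));   // exclusive upper bound
     return scan;
   }
 }
 {code}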

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

