[jira] [Updated] (HIVE-3707) Round map/reduce progress down when it is in the range [99.5, 100)

2012-11-15 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3707:
-

   Resolution: Fixed
Fix Version/s: 0.10.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed. Thanks Kevin

 Round map/reduce progress down when it is in the range [99.5, 100)
 --

 Key: HIVE-3707
 URL: https://issues.apache.org/jira/browse/HIVE-3707
 Project: Hive
  Issue Type: Improvement
  Components: Logging, Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3707.1.patch.txt


 In HadoopJobExecHelper, mapProgress and reduceProgress hold the values of 
 these counters taken from the running job, rounded to an integer percentage. 
 This means that, e.g., if the mappers are 99.5% done, this is stored as 100%.
 One of the most common questions I see from new users is: the map and reduce 
 both report being 100% done, so why is the query still running?
 By rounding the value down in this interval, so it reads 100% only when it is 
 really 100%, we could avoid that confusion.
 Also, it appears that QueryPlan and MapRedTask determine whether the 
 map/reduce phases are done by checking if this value == 100. I couldn't find 
 anywhere they're used for anything significant, but they're reporting early 
 completion.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3531) Simple lock manager for dedicated hive server

2012-11-15 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497827#comment-13497827
 ] 

Carl Steinbach commented on HIVE-3531:
--

@Navis: I added some comments on phabricator. Please take a look when you have 
time. Thanks.

 Simple lock manager for dedicated hive server
 -

 Key: HIVE-3531
 URL: https://issues.apache.org/jira/browse/HIVE-3531
 Project: Hive
  Issue Type: Improvement
  Components: Locking, Server Infrastructure
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-3531.D5871.1.patch


 In many cases, we use the hive server as a sole proxy for executing all 
 queries. For that, the current default lock manager based on ZooKeeper seems 
 a little heavy. A simple in-memory lock manager could be enough.
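A single-process lock manager like the one proposed could be sketched along these lines; this is a hedged illustration using JDK concurrency primitives, not the patch's actual implementation, and the class and method names are assumptions:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of an in-memory lock manager for a single dedicated
// hive server process; not the attached patch's code.
public class SimpleLockManagerSketch {
    // One lock object per resource path (e.g. "db/table/partition")
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String path) {
        return locks.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
    }

    public boolean lock(String path, boolean shared) {
        ReentrantReadWriteLock l = lockFor(path);
        return shared ? l.readLock().tryLock() : l.writeLock().tryLock();
    }

    public void unlock(String path, boolean shared) {
        ReentrantReadWriteLock l = lockFor(path);
        if (shared) {
            l.readLock().unlock();
        } else {
            l.writeLock().unlock();
        }
    }
}
```

Since all queries go through the one server, in-process locks are visible to every query, which is what makes the ZooKeeper round-trips avoidable in this setup.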

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3531) Simple lock manager for dedicated hive server

2012-11-15 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-3531:
-

Component/s: Locking

 Simple lock manager for dedicated hive server
 -

 Key: HIVE-3531
 URL: https://issues.apache.org/jira/browse/HIVE-3531
 Project: Hive
  Issue Type: Improvement
  Components: Locking, Server Infrastructure
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-3531.D5871.1.patch


 In many cases, we use the hive server as a sole proxy for executing all 
 queries. For that, the current default lock manager based on ZooKeeper seems 
 a little heavy. A simple in-memory lock manager could be enough.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3531) Simple lock manager for dedicated hive server

2012-11-15 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497830#comment-13497830
 ] 

Phabricator commented on HIVE-3531:
---

cwsteinbach has requested changes to the revision HIVE-3531 [jira] Simple lock 
manager for dedicated hive server.

INLINE COMMENTS
  ql/src/test/org/apache/hadoop/hive/ql/lockmgr/TestDedicatedLockManager.java:1 
Missing ASF license header.
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DedicatedLockManager.java:25 
Please consider changing the name to EmbeddedLockManager.
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/DedicatedLockManager.java:27 
I'm not sure that creating a separate class for the InMemoryLockManager code 
makes sense. Please consider removing InMemoryLockManager and moving that code 
here.
  ql/src/java/org/apache/hadoop/hive/ql/lockmgr/InMemoryLockManager.java:145 
Please log this exception.

REVISION DETAIL
  https://reviews.facebook.net/D5871

BRANCH
  DPAL-1906

To: JIRA, cwsteinbach, navis


 Simple lock manager for dedicated hive server
 -

 Key: HIVE-3531
 URL: https://issues.apache.org/jira/browse/HIVE-3531
 Project: Hive
  Issue Type: Improvement
  Components: Locking, Server Infrastructure
Reporter: Navis
Assignee: Navis
Priority: Trivial
 Attachments: HIVE-3531.D5871.1.patch


 In many cases, we use the hive server as a sole proxy for executing all 
 queries. For that, the current default lock manager based on ZooKeeper seems 
 a little heavy. A simple in-memory lock manager could be enough.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3377) ant model-jar command fails in metastore

2012-11-15 Thread Darren Yin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497833#comment-13497833
 ] 

Darren Yin commented on HIVE-3377:
--

The problem likely comes from ant-contrib not being loaded by ant when ant 
model-jar is run. If you want it to run without deleting that line, you may be 
able to run something like

ant -lib /usr/share/java model-jar

assuming that you have the ant-contrib jar sitting in /usr/share/java.

 ant model-jar command fails in metastore
 

 Key: HIVE-3377
 URL: https://issues.apache.org/jira/browse/HIVE-3377
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Vandana Ayyalasomayajula
Priority: Minor
  Labels: build

 Running ant model-jar command to set up eclipse dev environment from the 
 following wiki:
 https://cwiki.apache.org/Hive/gettingstarted-eclipsesetup.html
 fails with the following message:
 BUILD FAILED
 **/workspace/hive-trunk/metastore/build.xml:22: The following error occurred 
 while executing this line:
 **/workspace/hive-trunk/build-common.xml:112: Problem: failed to create task 
 or type osfamily
 Cause: The name is undefined.
 Action: Check the spelling.
 Action: Check that any custom tasks/types have been declared.
 Action: Check that any presetdef/macrodef declarations have taken place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3381) Result of outer join is not valid

2012-11-15 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497840#comment-13497840
 ] 

Carl Steinbach commented on HIVE-3381:
--

+1

@Navis: Can you test and commit this patch? Thanks.

 Result of outer join is not valid
 -

 Key: HIVE-3381
 URL: https://issues.apache.org/jira/browse/HIVE-3381
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Critical
 Attachments: HIVE-3381.D5565.3.patch


 Outer joins, especially full outer joins or outer joins with a filter in the 
 ON clause, do not show proper results. For example, the query in test 
 join_1to1.q
 {code}
 SELECT * FROM join_1to1_1 a full outer join join_1to1_2 b on a.key1 = b.key1 
 and a.value = 66 and b.value = 66 ORDER BY a.key1 ASC, a.key2 ASC, a.value 
 ASC, b.key1 ASC, b.key2 ASC, b.value ASC;
 {code}
 results
 {code}
 NULL   NULL    NULL   NULL   NULL    66
 NULL   NULL    NULL   NULL   10050   66
 NULL   NULL    NULL   10     10010   66
 NULL   NULL    NULL   30     10030   88
 NULL   NULL    NULL   35     10035   88
 NULL   NULL    NULL   40     10040   88
 NULL   NULL    NULL   40     10040   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    66     NULL   NULL    NULL
 NULL   10050   66     NULL   NULL    NULL
 5      10005   66     5      10005   66
 15     10015   66     NULL   NULL    NULL
 20     10020   66     20     10020   66
 25     10025   88     NULL   NULL    NULL
 30     10030   66     NULL   NULL    NULL
 35     10035   88     NULL   NULL    NULL
 40     10040   66     NULL   NULL    NULL
 40     10040   66     40     10040   66
 40     10040   88     NULL   NULL    NULL
 40     10040   88     NULL   NULL    NULL
 50     10050   66     NULL   NULL    NULL
 50     10050   66     50     10050   66
 50     10050   66     50     10050   66
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 {code} 
 but this does not seem right. The result should be 
 {code}
 NULL   NULL    NULL   NULL   NULL    66
 NULL   NULL    NULL   NULL   10050   66
 NULL   NULL    NULL   10     10010   66
 NULL   NULL    NULL   25     10025   66
 NULL   NULL    NULL   30     10030   88
 NULL   NULL    NULL   35     10035   88
 NULL   NULL    NULL   40     10040   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   80     10040   66
 NULL   NULL    NULL   80     10040   66
 NULL   NULL    66     NULL   NULL    NULL
 NULL   10050   66     NULL   NULL    NULL
 5      10005   66     5      10005   66
 15     10015   66     NULL   NULL    NULL
 20     10020   66     20     10020   66
 25     10025   88     NULL   NULL    NULL
 30     10030   66     NULL   NULL    NULL
 35     10035   88     NULL   NULL    NULL
 40     10040   66     40     10040   66
 40     10040   88     NULL   NULL    NULL
 50     10050   66     50     10050   66
 50     10050   66     50     10050   66
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3377) ant model-jar command fails in metastore

2012-11-15 Thread Darren Yin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497839#comment-13497839
 ] 

Darren Yin commented on HIVE-3377:
--

Actually, that may not cover all of it. I believe this bug report may be 
somewhat related (both issues with ant-contrib not being properly imported): 
https://issues.apache.org/jira/browse/HIVE-2904

 ant model-jar command fails in metastore
 

 Key: HIVE-3377
 URL: https://issues.apache.org/jira/browse/HIVE-3377
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Vandana Ayyalasomayajula
Priority: Minor
  Labels: build

 Running ant model-jar command to set up eclipse dev environment from the 
 following wiki:
 https://cwiki.apache.org/Hive/gettingstarted-eclipsesetup.html
 fails with the following message:
 BUILD FAILED
 **/workspace/hive-trunk/metastore/build.xml:22: The following error occurred 
 while executing this line:
 **/workspace/hive-trunk/build-common.xml:112: Problem: failed to create task 
 or type osfamily
 Cause: The name is undefined.
 Action: Check the spelling.
 Action: Check that any custom tasks/types have been declared.
 Action: Check that any presetdef/macrodef declarations have taken place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3691) TestDynamicSerDe failed with IBM JDK

2012-11-15 Thread Bing Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497842#comment-13497842
 ] 

Bing Li commented on HIVE-3691:
---

Another option is using LinkedHashMap instead of HashMap; it works on both the 
Sun JDK and the IBM JDK.
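Why LinkedHashMap fixes the golden-file ordering can be shown with a short sketch (illustrative code, not the patch itself): LinkedHashMap iterates in insertion order on every JDK, whereas HashMap's iteration order depends on the vendor's hashing implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapOrderDemo {
    // Returns the key iteration order; with LinkedHashMap this is always
    // insertion order, independent of the JDK vendor.
    static String keyOrder() {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("b", 2);
        m.put("a", 1);
        m.put("c", 3);
        return m.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(keyOrder()); // [b, a, c] on any JDK
    }
}
```

With a plain HashMap, the same three puts could iterate in any order, which is exactly what makes the golden files diverge between the Sun and IBM JDKs.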

 TestDynamicSerDe failed with IBM JDK
 

 Key: HIVE-3691
 URL: https://issues.apache.org/jira/browse/HIVE-3691
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.7.1, 0.8.0, 0.9.0
 Environment: ant-1.8.2, IBM JDK 1.6
Reporter: Bing Li
Assignee: Bing Li
Priority: Minor
 Attachments: HIVE-3691.1.patch.txt


 The order of the output in the golden file differs between JDKs.
 The root cause is the implementation of HashMap in each JDK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3706) getBoolVar in FileSinkOperator can be optimized

2012-11-15 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3706:
-

   Resolution: Fixed
Fix Version/s: 0.10.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed. Thanks Kevin

 getBoolVar in FileSinkOperator can be optimized
 ---

 Key: HIVE-3706
 URL: https://issues.apache.org/jira/browse/HIVE-3706
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.10.0

 Attachments: HIVE-3706.1.patch.txt


 There's a call to HiveConf.getBoolVar in FileSinkOperator's processOp method. 
  In benchmarks we found this call to be using ~2% of the CPU time on simple 
 queries, e.g. INSERT OVERWRITE TABLE t1 SELECT * FROM t2;
 This boolean value, a flag to collect the RawDataSize stat, won't change 
 during the processing of a query, so we can determine it at initialization 
 and store that value, saving that CPU.
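The caching pattern described above can be sketched as follows. This is a hedged illustration of the idea, not the actual FileSinkOperator code; the field and method names are assumptions, though hive.stats.collect.rawdatasize is the flag the description refers to:

```java
import java.util.Properties;

// Hypothetical sketch: read the config flag once at operator initialization
// instead of once per row in processOp().
public class FileSinkSketch {
    private boolean collectRawDataSize; // cached once at init

    void initializeOp(Properties conf) {
        // One config lookup here replaces one lookup per processOp() call.
        collectRawDataSize = Boolean.parseBoolean(
            conf.getProperty("hive.stats.collect.rawdatasize", "true"));
    }

    void processOp(Object row) {
        if (collectRawDataSize) {
            // ... update the RawDataSize stat for this row ...
        }
        // ... write the row ...
    }

    boolean isCollectRawDataSize() {
        return collectRawDataSize;
    }
}
```

Since processOp runs once per row, moving the lookup to initialization trades millions of config-map probes for a single field read per row.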

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Review Request: allow 't', 'T', '1', 'f', 'F', and '0' to be allowable true/false values for the boolean hive type

2012-11-15 Thread Carl Steinbach

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7759/#review13463
---


This change needs a testcase. Please extend the sample boolean data set located 
in data/files/bool.txt and add a new qfile test that SELECTs from this table. 
See compute_stats_boolean.q for an example of how to load this file into a 
table.


serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java
https://reviews.apache.org/r/7759/#comment28838

Please fix the indentation.


- Carl Steinbach


On Oct. 29, 2012, 7:11 a.m., Alexander Alten-Lorenz wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/7759/
 ---
 
 (Updated Oct. 29, 2012, 7:11 a.m.)
 
 
 Review request for hive.
 
 
 Description
 ---
 
  Interpret 't' as true and 'f' as false for boolean types. PostgreSQL exports 
  represent booleans that way.
 
 
 This addresses bug HIVE-3635.
 https://issues.apache.org/jira/browse/HIVE-3635
 
 
 Diffs
 -
 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyBoolean.java c741c3a 
 
 Diff: https://reviews.apache.org/r/7759/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Alexander Alten-Lorenz
 




[jira] [Updated] (HIVE-3635) allow 't', 'T', '1', 'f', 'F', and '0' to be allowable true/false values for the boolean hive type

2012-11-15 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-3635:
-

Status: Open  (was: Patch Available)

@Alex: I left some comments on reviewboard. Thanks.

  allow 't', 'T', '1', 'f', 'F', and '0' to be allowable true/false values for 
 the boolean hive type
 ---

 Key: HIVE-3635
 URL: https://issues.apache.org/jira/browse/HIVE-3635
 Project: Hive
  Issue Type: Improvement
  Components: CLI
Affects Versions: 0.9.0
Reporter: Alexander Alten-Lorenz
Assignee: Alexander Alten-Lorenz
 Fix For: 0.10.0

 Attachments: HIVE-3635.patch


 Interpret 't' as true and 'f' as false for boolean types. PostgreSQL exports 
 represent booleans that way.
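The extended literal set from the summary can be sketched as a plain parse routine. This is a hedged illustration only; the real LazyBoolean code operates on byte arrays from the serialized row, not on Strings, and the method name here is made up:

```java
public class BoolParse {
    // Hypothetical parser accepting 't', 'T', '1', 'f', 'F', '0' alongside
    // the usual true/false spellings.
    static Boolean parseBool(String s) {
        if (s == null) {
            return null;
        }
        switch (s) {
            case "t": case "T": case "1":
            case "true": case "TRUE": case "True":
                return Boolean.TRUE;
            case "f": case "F": case "0":
            case "false": case "FALSE": case "False":
                return Boolean.FALSE;
            default:
                return null; // unparsable values become NULL
        }
    }

    public static void main(String[] args) {
        System.out.println(parseBool("t")); // true
        System.out.println(parseBool("F")); // false
        System.out.println(parseBool("x")); // null
    }
}
```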

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3691) TestDynamicSerDe failed with IBM JDK

2012-11-15 Thread Bing Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Li updated HIVE-3691:
--

Attachment: HIVE-3691.1.patch-trunk.txt

The patch file is based on trunk, using LinkedHashMap.

 TestDynamicSerDe failed with IBM JDK
 

 Key: HIVE-3691
 URL: https://issues.apache.org/jira/browse/HIVE-3691
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.7.1, 0.8.0, 0.9.0
 Environment: ant-1.8.2, IBM JDK 1.6
Reporter: Bing Li
Assignee: Bing Li
Priority: Minor
 Attachments: HIVE-3691.1.patch-trunk.txt, HIVE-3691.1.patch.txt


 The order of the output in the golden file differs between JDKs.
 The root cause is the implementation of HashMap in each JDK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3622) reflect udf cannot find method which has arguments of primitive types and String, Binary, Timestamp types mixed

2012-11-15 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497856#comment-13497856
 ] 

Carl Steinbach commented on HIVE-3622:
--

+1

@Navis: I left two small comments on Phabricator, neither of which requires a 
second review round. Can you please test and commit this on your own? Thanks.

 reflect udf cannot find method which has arguments of primitive types and 
 String, Binary, Timestamp types mixed
 ---

 Key: HIVE-3622
 URL: https://issues.apache.org/jira/browse/HIVE-3622
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-3622.D6201.1.patch


 From 
 http://mail-archives.apache.org/mod_mbox/hive-user/201210.mbox/%3CCANkN6JApahvYrVuiy-j4VJ0dO2tzTpePwi7LUNCp12Vwj6d6xw%40mail.gmail.com%3E
 {noformat}
 Query:
 select reflect('java.lang.Integer', 'parseInt', 'a', 16) from src limit 1;
 throws java.lang.NoSuchMethodException: java.lang.Integer.parseInt(null, int)
 {noformat}
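For reference, the method that the reflect() call is meant to resolve to exists in plain Java; a lookup that handles the mixed String/primitive argument list should find it:

```java
// The reflect() call above should resolve to this ordinary Java method.
public class ReflectTarget {
    public static void main(String[] args) {
        int v = Integer.parseInt("a", 16); // hex "a" is 10
        System.out.println(v);
    }
}
```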

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3691) TestDynamicSerDe failed with IBM JDK

2012-11-15 Thread Bing Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497857#comment-13497857
 ] 

Bing Li commented on HIVE-3691:
---

[~renata] I think LinkedHashMap is enough for this case; what's your opinion on 
this?

 TestDynamicSerDe failed with IBM JDK
 

 Key: HIVE-3691
 URL: https://issues.apache.org/jira/browse/HIVE-3691
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.7.1, 0.8.0, 0.9.0
 Environment: ant-1.8.2, IBM JDK 1.6
Reporter: Bing Li
Assignee: Bing Li
Priority: Minor
 Attachments: HIVE-3691.1.patch-trunk.txt, HIVE-3691.1.patch.txt


 The order of the output in the golden file differs between JDKs.
 The root cause is the implementation of HashMap in each JDK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3622) reflect udf cannot find method which has arguments of primitive types and String, Binary, Timestamp types mixed

2012-11-15 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497858#comment-13497858
 ] 

Phabricator commented on HIVE-3622:
---

cwsteinbach has accepted the revision HIVE-3622 [jira] reflect udf cannot find 
method which has arguments of primitive types and String, Binary, Timestamp 
types mixed.

INLINE COMMENTS
  ql/src/test/queries/clientpositive/udf_reflect.q:25 May as well save some 
test time and add this to the first query.
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFReflect.java:196 
Please expand on this comment. What's ambiguous?

REVISION DETAIL
  https://reviews.facebook.net/D6201

BRANCH
  DPAL-1923

To: JIRA, cwsteinbach, navis


 reflect udf cannot find method which has arguments of primitive types and 
 String, Binary, Timestamp types mixed
 ---

 Key: HIVE-3622
 URL: https://issues.apache.org/jira/browse/HIVE-3622
 Project: Hive
  Issue Type: Bug
  Components: UDF
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-3622.D6201.1.patch


 From 
 http://mail-archives.apache.org/mod_mbox/hive-user/201210.mbox/%3CCANkN6JApahvYrVuiy-j4VJ0dO2tzTpePwi7LUNCp12Vwj6d6xw%40mail.gmail.com%3E
 {noformat}
 Query:
 select reflect('java.lang.Integer', 'parseInt', 'a', 16) from src limit 1;
 throws java.lang.NoSuchMethodException: java.lang.Integer.parseInt(null, int)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3685) TestCliDriver (script_pipe.q) failed with IBM JDK

2012-11-15 Thread Bing Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Li updated HIVE-3685:
--

Attachment: HIVE-3685.1.patch-trunk.txt

The patch file is based on trunk.

 TestCliDriver (script_pipe.q) failed with IBM JDK
 -

 Key: HIVE-3685
 URL: https://issues.apache.org/jira/browse/HIVE-3685
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.7.1, 0.8.0, 0.9.0
 Environment: ant-1.8.2
 IBM JDK 1.6
Reporter: Bing Li
Assignee: Bing Li
 Attachments: HIVE-3685.1.patch-trunk.txt


 1 failed: TestCliDriver (script_pipe.q)
 [junit] Begin query: script_pipe.q
 [junit] java.io.IOException: No such file or directory
 [junit] at java.io.FileOutputStream.writeBytes(Native Method)
 [junit] at java.io.FileOutputStream.write(FileOutputStream.java:293)
 [junit] at 
 java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:76)
 [junit] at 
 java.io.BufferedOutputStream.flush(BufferedOutputStream.java:134)
 [junit] at 
 java.io.BufferedOutputStream.flush(BufferedOutputStream.java:135)
 [junit] at java.io.DataOutputStream.flush(DataOutputStream.java:117)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.TextRecordWriter.close(TextRecordWriter.java:48)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:365)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
 [junit] at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
 [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while 
 closing ..
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
 [junit] at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
 [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while 
 closing ..
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
 [junit] at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
 [junit] org.apache.hadoop.hive.ql.metadata.HiveException: Hit error while 
 closing ..
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ScriptOperator.close(ScriptOperator.java:452)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
 [junit] at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:473)
 [junit] at 
 org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:411)
 [junit] at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
 [junit] Ended Job = job_local_0001 with errors
 [junit] Error during job, obtaining debugging information...
 [junit] Exception: Client Execution failed with error code = 9
 [junit] See build/ql/tmp/hive.log, or try ant test ... 
 -Dtest.silent=false to get more logs.
 [junit] 

[jira] [Commented] (HIVE-3664) Avoid to create a symlink for hive-contrib.jar file in dist\lib folder.

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497904#comment-13497904
 ] 

Hudson commented on HIVE-3664:
--

Integrated in Hive-trunk-h0.21 #1797 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1797/])
HIVE-3664 : Avoid to create a symlink for hive-contrib.jar file in dist\lib 
folder. (Kanna Karanam via Ashutosh Chauhan) (Revision 1409548)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409548
Files : 
* /hive/trunk/build.xml


 Avoid to create a symlink for hive-contrib.jar file in dist\lib folder.
 ---

 Key: HIVE-3664
 URL: https://issues.apache.org/jira/browse/HIVE-3664
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0, 0.9.1
Reporter: Kanna Karanam
Assignee: Kanna Karanam
  Labels: Windows
 Fix For: 0.10.0

 Attachments: HIVE-3664.1.patch.txt


 It forces us to enumerate all the jars except this jar on Windows instead of 
 directly referencing the “dist\lib\*.jar” folder in the class path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3589) describe/show partition/show tblproperties command should accept database name

2012-11-15 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-3589:
-

Status: Open  (was: Patch Available)

@Navis: I added some comments on phabricator. Thanks.

 describe/show partition/show tblproperties command should accept database name
 --

 Key: HIVE-3589
 URL: https://issues.apache.org/jira/browse/HIVE-3589
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Query Processor
Affects Versions: 0.8.1
Reporter: Sujesh Chirackkal
Assignee: Navis
Priority: Minor
 Attachments: HIVE-3589.D6075.1.patch


 The describe command does not give the details when called as describe 
 dbname.tablename; it throws the error "Table dbname not found".
 Ex: hive -e "describe masterdb.table1" will throw the error
 "Table masterdb not found"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3589) describe/show partition/show tblproperties command should accept database name

2012-11-15 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497907#comment-13497907
 ] 

Phabricator commented on HIVE-3589:
---

cwsteinbach has requested changes to the revision HIVE-3589 [jira] 
describe/show partition/show tblproperties command should accept database name.

INLINE COMMENTS
  ql/src/test/queries/clientpositive/describe_table.q:5 Please demonstrate that 
this also works from another db/schema, e.g:

  CREATE DATABASE db1;
  USE db1;
  DESCRIBE default.srcpart;
  ...

  This same request also applies to the other tests that this patch modifies.
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java:1802 Formatting.
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java:1407 It 
looks like this method either returns true or throws a SemanticException. I 
think it should either return true or false, and either never throw an 
exception, or only throw an exception when the db name is not valid for 
syntactic reasons.
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java:1472 
There's some logic in DDLSemanticAnalyzer (see QualifiedNameUtil) that looks 
pretty similar. It would be nice to pull QualifiedNameUtil out into its own 
class in hive-common and reference that code from here and DDLSemanticAnalyzer.

  Also, parseExpression is a little generic. Maybe change the name to 
parseTableName?
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java:1474 I 
think it would be a bit cleaner to use StringUtils.split(), e.g:

  String[] names = StringUtils.split(tableName, ".");
  switch (names.length) {
...
  }
  ql/src/java/org/apache/hadoop/hive/ql/plan/DescTableDesc.java:112 If 
expression[0] == null, does that always imply that we're using the default db? 
If so can we return "default." + expression[1] in that case? And if we do that, 
then we should probably just enforce that expression[0] != null in the 
constructor.
  ql/src/java/org/apache/hadoop/hive/ql/plan/DescTableDesc.java:38 Please add a 
note explaining that DescTableDesc is overloaded to handle both describe column 
and describe table, and what this parameter is expected to contain in each case.
  ql/src/java/org/apache/hadoop/hive/ql/plan/ShowTblPropertiesDesc.java:34 
expression is too generic in this case. Please change the name to 
qualifiedTableName.
  ql/src/java/org/apache/hadoop/hive/ql/plan/ShowPartitionsDesc.java:64 
s/expression/qualifiedTableName/
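For illustration, the split-based parsing suggested above could look like the 
following minimal standalone sketch. It uses plain java.lang String.split 
rather than commons-lang StringUtils so it runs by itself, and the class name 
TableNameParser is hypothetical; parseTableName follows the rename suggested in 
the comments:

```java
import java.util.Arrays;

public class TableNameParser {

    // Split a possibly-qualified table name into {dbName, tableName}.
    // dbName is null when no database qualifier is present.
    static String[] parseTableName(String tableName) {
        String[] names = tableName.split("\\.");
        switch (names.length) {
            case 1:
                return new String[] { null, names[0] };
            case 2:
                return new String[] { names[0], names[1] };
            default:
                throw new IllegalArgumentException(
                    "Invalid table name: " + tableName);
        }
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(parseTableName("default.srcpart")));
        System.out.println(Arrays.toString(parseTableName("srcpart")));
    }
}
```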

REVISION DETAIL
  https://reviews.facebook.net/D6075

BRANCH
  DPAL-1916

To: JIRA, cwsteinbach, navis


 describe/show partition/show tblproperties command should accept database name
 --

 Key: HIVE-3589
 URL: https://issues.apache.org/jira/browse/HIVE-3589
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Query Processor
Affects Versions: 0.8.1
Reporter: Sujesh Chirackkal
Assignee: Navis
Priority: Minor
 Attachments: HIVE-3589.D6075.1.patch


 The describe command does not give the table details when called as "describe 
 dbname.tablename".
 It throws the error "Table dbname not found".
 Ex: hive -e "describe masterdb.table1" will throw the error
 "Table masterdb not found"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 1797 - Still Failing

2012-11-15 Thread Apache Jenkins Server
Changes for Build #1764
[kevinwilfong] HIVE-3610. Add a command Explain dependency ... (Sambavi 
Muthukrishnan via kevinwilfong)


Changes for Build #1765

Changes for Build #1766
[hashutosh] HIVE-3441 : testcases escape1,escape2 fail on windows (Thejas Nair 
via Ashutosh Chauhan)

[kevinwilfong] HIVE-3499. add tests to use bucketing metadata for partitions. 
(njain via kevinwilfong)


Changes for Build #1767
[kevinwilfong] HIVE-3276. optimize union sub-queries. (njain via kevinwilfong)


Changes for Build #1768

Changes for Build #1769

Changes for Build #1770
[namit] HIVE-3570 Add/fix facility to collect operator specific statistics in 
hive + add hash-in/hash-out
counter for GroupBy Optr (Satadru Pan via namit)

[namit] HIVE-3554 Hive List Bucketing - Query logic
(Gang Tim Liu via namit)

[cws] HIVE-3563. Drop database cascade fails when there are indexes on any 
tables (Prasad Mujumdar via cws)


Changes for Build #1771
[kevinwilfong] HIVE-3640. Reducer allocation is incorrect if enforce bucketing 
and mapred.reduce.tasks are both set. (Vighnesh Avadhani via kevinwilfong)


Changes for Build #1772

Changes for Build #1773

Changes for Build #1774

Changes for Build #1775
[namit] HIVE-3673 Sort merge join not used when join columns have different 
names
(Kevin Wilfong via namit)


Changes for Build #1776
[kevinwilfong] HIVE-3627. eclipse misses library: 
javolution-@javolution-version@.jar. (Gang Tim Liu via kevinwilfong)


Changes for Build #1777
[kevinwilfong] HIVE-3524. Storing certain Exception objects thrown in 
HiveMetaStore.java in MetaStoreEndFunctionContext. (Maheshwaran Srinivasan via 
kevinwilfong)

[cws] HIVE-1977. DESCRIBE TABLE syntax doesn't support specifying a database 
qualified table name (Zhenxiao Luo via cws)

[cws] HIVE-3674. Test case TestParse broken after recent checkin (Sambavi 
Muthukrishnan via cws)


Changes for Build #1778
[cws] HIVE-1362. Column level scalar valued statistics on Tables and Partitions 
(Shreepadma Venugopalan via cws)


Changes for Build #1779

Changes for Build #1780
[kevinwilfong] HIVE-3686. Fix compile errors introduced by the interaction of 
HIVE-1362 and HIVE-3524. (Shreepadma Venugopalan via kevinwilfong)


Changes for Build #1781
[namit] HIVE-3687 smb_mapjoin_13.q is nondeterministic
(Kevin Wilfong via namit)


Changes for Build #1782
[hashutosh] HIVE-2715: Upgrade Thrift dependency to 0.9.0 (Ashutosh Chauhan)


Changes for Build #1783
[kevinwilfong] HIVE-3654. block relative path access in hive. (njain via 
kevinwilfong)

[hashutosh] HIVE-3658 : Unable to generate the Hbase related unit tests using 
velocity templates on Windows (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3661 : Remove the Windows specific = related swizzle path 
changes from Proxy FileSystems (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3480 : Resource leak: Fix the file handle leaks in Symbolic 
& Symlink related input formats. (Kanna Karanam via Ashutosh Chauhan)


Changes for Build #1784
[kevinwilfong] HIVE-3675. NaN does not work correctly for round(n). (njain via 
kevinwilfong)

[cws] HIVE-3651. bucketmapjoin?.q tests fail with hadoop 0.23 (Prasad Mujumdar 
via cws)


Changes for Build #1785
[namit] HIVE-3613 Implement grouping_id function
(Ian Gorbachev via namit)

[namit] HIVE-3692 Update parallel test documentation
(Ivan Gorbachev via namit)

[namit] HIVE-3649 Hive List Bucketing - enhance DDL to specify list bucketing 
table
(Gang Tim Liu via namit)


Changes for Build #1786
[namit] HIVE-3696 Revert HIVE-3483 which causes performance regression
(Gang Tim Liu via namit)


Changes for Build #1787
[kevinwilfong] HIVE-3621. Make prompt in Hive CLI configurable. (Jingwei Lu via 
kevinwilfong)

[kevinwilfong] HIVE-3695. TestParse breaks due to HIVE-3675. (njain via 
kevinwilfong)


Changes for Build #1788
[kevinwilfong] HIVE-3557. Access to external URLs in hivetest.py. (Ivan 
Gorbachev via kevinwilfong)


Changes for Build #1789
[hashutosh] HIVE-3662 : TestHiveServer: testScratchDirShouldClearWhileStartup 
is failing on Windows (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3659 : TestHiveHistory::testQueryloglocParentDirNotExist Test 
fails on Windows because of some resource leaks in ZK (Kanna Karanam via 
Ashutosh Chauhan)

[hashutosh] HIVE-3663 Unable to display the MR Job file path on Windows in case 
of MR job failures.  (Kanna Karanam via Ashutosh Chauhan)


Changes for Build #1790

Changes for Build #1791

Changes for Build #1792

Changes for Build #1793
[hashutosh] HIVE-3704 : name of some metastore scripts are not per convention 
(Ashutosh Chauhan)


Changes for Build #1794
[hashutosh] HIVE-3243 : ignore white space between entries of hive/hbase table 
mapping (Shengsheng Huang via Ashutosh Chauhan)

[hashutosh] HIVE-3215 : JobDebugger should use RunningJob.getTrackingURL 
(Bhushan Mandhani via Ashutosh Chauhan)


Changes for Build #1795
[cws] HIVE-3437. 0.23 compatibility: fix unit tests when building against 0.23 
(Chris Drome via cws)


[jira] [Commented] (HIVE-3377) ant model-jar command fails in metastore

2012-11-15 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497919#comment-13497919
 ] 

Carl Steinbach commented on HIVE-3377:
--

The directions on the wiki are out of date. You no longer need to run the 
model-jar and gen-test Ant targets. Doing the following should suffice:

$ ant clean package eclipse-files


 ant model-jar command fails in metastore
 

 Key: HIVE-3377
 URL: https://issues.apache.org/jira/browse/HIVE-3377
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Vandana Ayyalasomayajula
Priority: Minor
  Labels: build

 Running ant model-jar command to set up eclipse dev environment from the 
 following wiki:
 https://cwiki.apache.org/Hive/gettingstarted-eclipsesetup.html
 fails with the following message:
 BUILD FAILED
 **/workspace/hive-trunk/metastore/build.xml:22: The following error occurred 
 while executing this line:
 **/workspace/hive-trunk/build-common.xml:112: Problem: failed to create task 
 or type osfamily
 Cause: The name is undefined.
 Action: Check the spelling.
 Action: Check that any custom tasks/types have been declared.
 Action: Check that any presetdef/macrodef declarations have taken place.



[jira] [Resolved] (HIVE-3377) ant model-jar command fails in metastore

2012-11-15 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach resolved HIVE-3377.
--

Resolution: Won't Fix

I updated the instructions on the wiki.

Resolving this ticket as WONTFIX since we no longer support running Ant from 
submodule directories.

 ant model-jar command fails in metastore
 

 Key: HIVE-3377
 URL: https://issues.apache.org/jira/browse/HIVE-3377
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Vandana Ayyalasomayajula
Priority: Minor
  Labels: build

 Running ant model-jar command to set up eclipse dev environment from the 
 following wiki:
 https://cwiki.apache.org/Hive/gettingstarted-eclipsesetup.html
 fails with the following message:
 BUILD FAILED
 **/workspace/hive-trunk/metastore/build.xml:22: The following error occurred 
 while executing this line:
 **/workspace/hive-trunk/build-common.xml:112: Problem: failed to create task 
 or type osfamily
 Cause: The name is undefined.
 Action: Check the spelling.
 Action: Check that any custom tasks/types have been declared.
 Action: Check that any presetdef/macrodef declarations have taken place.



[jira] [Assigned] (HIVE-2693) Add DECIMAL data type

2012-11-15 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach reassigned HIVE-2693:


Assignee: Prasad Mujumdar  (was: Josh Wills)

@Vikram: Just spoke with Josh and found out that he doesn't have time to work 
on this. I'm reassigning this to Prasad since he's already working on this 
patch and has done some additional changes that are needed for the JDBC driver. 
Let me know if you want to work on this too and we can find some way to 
collaborate. Thanks.

 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-1.patch.txt, HIVE-2693-all.patch, HIVE-2693-fix.patch, 
 HIVE-2693.patch, HIVE-2693-take3.patch, HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Commented] (HIVE-3381) Result of outer join is not valid

2012-11-15 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497952#comment-13497952
 ] 

Namit Jain commented on HIVE-3381:
--

[~navis], can you hold off for a few days? I also wanted to take a look at 
this.

 Result of outer join is not valid
 -

 Key: HIVE-3381
 URL: https://issues.apache.org/jira/browse/HIVE-3381
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Navis
Assignee: Navis
Priority: Critical
 Attachments: HIVE-3381.D5565.3.patch


 Outer joins, especially full outer joins or outer join with filter on 'ON 
 clause' is not showing proper results. For example, query in test join_1to1.q
 {code}
 SELECT * FROM join_1to1_1 a full outer join join_1to1_2 b on a.key1 = b.key1 
 and a.value = 66 and b.value = 66 ORDER BY a.key1 ASC, a.key2 ASC, a.value 
 ASC, b.key1 ASC, b.key2 ASC, b.value ASC;
 {code}
 results
 {code}
 NULL   NULL    NULL   NULL   NULL    66
 NULL   NULL    NULL   NULL   10050   66
 NULL   NULL    NULL   10     10010   66
 NULL   NULL    NULL   30     10030   88
 NULL   NULL    NULL   35     10035   88
 NULL   NULL    NULL   40     10040   88
 NULL   NULL    NULL   40     10040   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    66     NULL   NULL    NULL
 NULL   10050   66     NULL   NULL    NULL
 5      10005   66     5      10005   66
 15     10015   66     NULL   NULL    NULL
 20     10020   66     20     10020   66
 25     10025   88     NULL   NULL    NULL
 30     10030   66     NULL   NULL    NULL
 35     10035   88     NULL   NULL    NULL
 40     10040   66     NULL   NULL    NULL
 40     10040   66     40     10040   66
 40     10040   88     NULL   NULL    NULL
 40     10040   88     NULL   NULL    NULL
 50     10050   66     NULL   NULL    NULL
 50     10050   66     50     10050   66
 50     10050   66     50     10050   66
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 {code} 
 but this does not seem right. It should be 
 {code}
 NULL   NULL    NULL   NULL   NULL    66
 NULL   NULL    NULL   NULL   10050   66
 NULL   NULL    NULL   10     10010   66
 NULL   NULL    NULL   25     10025   66
 NULL   NULL    NULL   30     10030   88
 NULL   NULL    NULL   35     10035   88
 NULL   NULL    NULL   40     10040   88
 NULL   NULL    NULL   50     10050   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   70     10040   88
 NULL   NULL    NULL   80     10040   66
 NULL   NULL    NULL   80     10040   66
 NULL   NULL    66     NULL   NULL    NULL
 NULL   10050   66     NULL   NULL    NULL
 5      10005   66     5      10005   66
 15     10015   66     NULL   NULL    NULL
 20     10020   66     20     10020   66
 25     10025   88     NULL   NULL    NULL
 30     10030   66     NULL   NULL    NULL
 35     10035   88     NULL   NULL    NULL
 40     10040   66     40     10040   66
 40     10040   88     NULL   NULL    NULL
 50     10050   66     50     10050   66
 50     10050   66     50     10050   66
 50     10050   88     NULL   NULL    NULL
 50     10050   88     NULL   NULL    NULL
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 60     10040   66     60     10040   66
 70     10040   66     NULL   NULL    NULL
 70     10040   66     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 80     10040   88     NULL   NULL    NULL
 {code}
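The full-outer-join semantics at issue can be sketched outside Hive with a toy 
nested-loop join. The class name FullOuterJoinDemo is hypothetical; rows are 
(key, value) pairs and the ON predicate is key equality plus value == 66 on 
both sides, mirroring the query's "a.value = 66 and b.value = 66" filter. The 
point is that rows which never satisfy the whole predicate must still appear 
exactly once, NULL-padded on the other side:

```java
import java.util.ArrayList;
import java.util.List;

public class FullOuterJoinDemo {

    // Full outer join of two (key, value) row sets with the ON predicate
    // a.key == b.key AND a.value == 66 AND b.value == 66.
    static List<String> fullOuterJoin(int[][] a, int[][] b) {
        List<String> out = new ArrayList<>();
        boolean[] bMatched = new boolean[b.length];
        for (int[] ra : a) {
            boolean matched = false;
            for (int i = 0; i < b.length; i++) {
                if (ra[0] == b[i][0] && ra[1] == 66 && b[i][1] == 66) {
                    out.add(ra[0] + "," + ra[1] + "," + b[i][0] + "," + b[i][1]);
                    matched = true;
                    bMatched[i] = true;
                }
            }
            // Left row with no qualifying right row: emit once, NULL-padded.
            if (!matched) {
                out.add(ra[0] + "," + ra[1] + ",NULL,NULL");
            }
        }
        // Right rows that never matched any left row: NULL-padded on the left.
        for (int i = 0; i < b.length; i++) {
            if (!bMatched[i]) {
                out.add("NULL,NULL," + b[i][0] + "," + b[i][1]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] a = { { 5, 66 }, { 25, 88 } };
        int[][] b = { { 5, 66 }, { 30, 88 } };
        fullOuterJoin(a, b).forEach(System.out::println);
    }
}
```

This is a semantic sketch only, not Hive's join implementation; it just makes 
the expected NULL-padding behavior concrete on a tiny input.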



[jira] [Resolved] (HIVE-3683) pdk and builtins, run ant test will failed ,since missing junit*.jar in trunk/testlibs/

2012-11-15 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach resolved HIVE-3683.
--

Resolution: Won't Fix

We don't support running ant from subproject directories. It's possible to 
restrict tests to specific submodules by modifying the list of submodules 
contained in the 'iterate-test' macro defined in the top-level build.xml file.

 pdk and builtins, run ant test  will failed ,since missing junit*.jar  in  
 trunk/testlibs/
 

 Key: HIVE-3683
 URL: https://issues.apache.org/jira/browse/HIVE-3683
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0
 Environment: Linux 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 
 20:34:47 UTC 2011 i686 i686 i386 GNU/Linux
 java version 1.6.0_25
 hadoop-0.20.2-cdh3u0
 hive-0.9.0
Reporter: caofangkun
Assignee: Bing Li
Priority: Minor

 ~ hive-0.9.0/builtins$ ant test
 and 
 ~ hive-0.9.0/pdk$ ant test
 will fail with 
 BUILD FAILED
 /builtins/build.xml:45: The following error occurred while executing this 
 line:
 .../pdk/scripts/build-plugin.xml:122: The classpath for junit must 
 include junit.jar if not in Ant's own classpath
 Solution:
 add junit-4.10.jar in trunk/testlibs/



[jira] [Commented] (HIVE-2935) Implement HiveServer2

2012-11-15 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13497973#comment-13497973
 ] 

Carl Steinbach commented on HIVE-2935:
--

@Ashutosh: Sounds good to me. I'm in the process of rebasing the patch, 
updating the test outputs, and addressing some small issues (e.g. the Python 
problem that Thaddeus found). I plan to get this posted by the end of the week. 
Maybe at that point you can give it a quick pass and +1 it if there are no red 
flags? Thanks.

 Implement HiveServer2
 -

 Key: HIVE-2935
 URL: https://issues.apache.org/jira/browse/HIVE-2935
 Project: Hive
  Issue Type: New Feature
  Components: Server Infrastructure
Reporter: Carl Steinbach
Assignee: Carl Steinbach
  Labels: HiveServer2
 Attachments: beelinepositive.tar.gz, HIVE-2935.1.notest.patch.txt, 
 HIVE-2935.2.notest.patch.txt, HIVE-2935.2.nothrift.patch.txt






[jira] [Updated] (HIVE-3437) 0.23 compatibility: fix unit tests when building against 0.23

2012-11-15 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-3437:
-

Fix Version/s: 0.9.1

Backported to branch-0.9

 0.23 compatibility: fix unit tests when building against 0.23
 -

 Key: HIVE-3437
 URL: https://issues.apache.org/jira/browse/HIVE-3437
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0, 0.10.0
Reporter: Chris Drome
Assignee: Chris Drome
 Fix For: 0.10.0, 0.9.1

 Attachments: HIVE-3437-0.9-1.patch, HIVE-3437-0.9-2.patch, 
 HIVE-3437-0.9-3.patch, HIVE-3437-0.9-4.patch, HIVE-3437-0.9-5.patch, 
 HIVE-3437-0.9-6.patch, HIVE-3437-0.9.patch, HIVE-3437-trunk-1.patch, 
 HIVE-3437-trunk-2.patch, HIVE-3437-trunk-3.patch, HIVE-3437-trunk-4.patch, 
 HIVE-3437-trunk-5.patch, HIVE-3437-trunk-6.patch, HIVE-3437-trunk-7.patch, 
 HIVE-3437-trunk-8.patch, HIVE-3437-trunk.patch


 Many unit tests fail as a result of building the code against hadoop 0.23. 
 Initial focus will be to fix 0.9.



[jira] [Created] (HIVE-3711) Create UDAF to calculate an array of Benford's Law

2012-11-15 Thread Erik Shilts (JIRA)
Erik Shilts created HIVE-3711:
-

 Summary: Create UDAF to calculate an array of Benford's Law
 Key: HIVE-3711
 URL: https://issues.apache.org/jira/browse/HIVE-3711
 Project: Hive
  Issue Type: New Feature
  Components: UDF
Reporter: Erik Shilts
Priority: Minor


Benford's Law is a useful analytical tool for determining whether a set of 
numbers was generated by a random process, by evaluating the relative 
proportions of the leading digits. It can be used to detect accounting, 
financial, and election fraud.

[Wikipedia's|http://en.wikipedia.org/wiki/Benford's_law] Benford's Law page has 
a good overview.

Hive is well suited to calculate Benford's Law. The result should be a named 
struct with names 1-9 and values being the corresponding proportions of each 
digit.

An alternative is to calculate the deviations from Benford's Law for each 
digit. The structure of the resulting array would be the same, but the result 
would be the difference between the actual proportions and the proportions 
given by the 
[formula|http://en.wikipedia.org/wiki/Benford's_law#Mathematical_statement] on 
Wikipedia.
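A rough sketch of the per-digit computation such a UDAF would perform, written 
as standalone Java rather than against Hive's GenericUDAF machinery; the class 
and method names (Benford, expected, observed) are hypothetical:

```java
public class Benford {

    // Expected Benford proportion for leading digit d in 1..9: log10(1 + 1/d).
    static double expected(int d) {
        return Math.log10(1.0 + 1.0 / d);
    }

    // Observed proportions of leading digits 1..9; index 0 is unused.
    // Zeros are skipped because they have no leading digit.
    static double[] observed(long[] values) {
        double[] props = new double[10];
        long n = 0;
        for (long v : values) {
            long x = Math.abs(v);
            if (x == 0) {
                continue;
            }
            while (x >= 10) {
                x /= 10;   // strip down to the leading digit
            }
            props[(int) x]++;
            n++;
        }
        for (int d = 1; d <= 9; d++) {
            props[d] /= n;
        }
        return props;
    }

    public static void main(String[] args) {
        long[] sample = { 12, 13, 25, 104, 9 };
        double[] p = observed(sample);
        for (int d = 1; d <= 9; d++) {
            System.out.printf("%d: observed %.3f, expected %.3f%n",
                d, p[d], expected(d));
        }
    }
}
```

The deviation variant described above would simply return 
observed(values)[d] - expected(d) for each digit.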



[jira] [Commented] (HIVE-3437) 0.23 compatibility: fix unit tests when building against 0.23

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498004#comment-13498004
 ] 

Hudson commented on HIVE-3437:
--

Integrated in Hive-0.9.1-SNAPSHOT-h0.21 #199 (See 
[https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/199/])
HIVE-3437. 0.23 compatibility: fix unit tests when building against 0.23 
(Chris Drome via cws) (Revision 1409752)

 Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409752
Files : 
* /hive/branches/branch-0.9/build-common.xml
* /hive/branches/branch-0.9/build.properties
* /hive/branches/branch-0.9/common/ivy.xml
* /hive/branches/branch-0.9/eclipse-templates/.classpath
* /hive/branches/branch-0.9/hbase-handler/ivy.xml
* /hive/branches/branch-0.9/ivy/libraries.properties
* /hive/branches/branch-0.9/ql/ivy.xml
* 
/hive/branches/branch-0.9/ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java
* 
/hive/branches/branch-0.9/ql/src/java/org/apache/hadoop/hive/ql/exec/JobDebugger.java
* /hive/branches/branch-0.9/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/branches/branch-0.9/ql/src/test/queries/clientnegative/autolocal1.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/local_mapred_error_cache.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/local_mapred_error_cache_hadoop20.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/mapreduce_stack_trace.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/mapreduce_stack_trace_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/auto_join14.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/auto_join14_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/combine2.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/combine2_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/ctas.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/ctas_hadoop20.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/groupby7_noskew_multi_single_reducer.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/groupby_complex_types_multi_single_reducer.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/groupby_multi_single_reducer.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input12.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input12_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input39.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input39_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/join14.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/join14_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/leftsemijoin.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/query_properties.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/repair.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/repair_hadoop20.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/sample_islocalmode_hook.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/sample_islocalmode_hook_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/split_sample.q
* /hive/branches/branch-0.9/ql/src/test/resources
* /hive/branches/branch-0.9/ql/src/test/resources/core-site.xml
* /hive/branches/branch-0.9/ql/src/test/results/clientnegative/autolocal1.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/local_mapred_error_cache.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/local_mapred_error_cache_hadoop20.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/mapreduce_stack_trace.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/mapreduce_stack_trace_hadoop20.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/auto_join14.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/auto_join14_hadoop20.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/combine2.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/combine2_hadoop20.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/ctas.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/ctas_hadoop20.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/groupby7_noskew_multi_single_reducer.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/groupby_complex_types_multi_single_reducer.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/groupby_multi_single_reducer.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/input12.q.out
* 

[jira] [Commented] (HIVE-3437) 0.23 compatibility: fix unit tests when building against 0.23

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498005#comment-13498005
 ] 

Hudson commented on HIVE-3437:
--

Integrated in Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #199 (See 
[https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/199/])
HIVE-3437. 0.23 compatibility: fix unit tests when building against 0.23 
(Chris Drome via cws) (Revision 1409752)

 Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409752
Files : 
* /hive/branches/branch-0.9/build-common.xml
* /hive/branches/branch-0.9/build.properties
* /hive/branches/branch-0.9/common/ivy.xml
* /hive/branches/branch-0.9/eclipse-templates/.classpath
* /hive/branches/branch-0.9/hbase-handler/ivy.xml
* /hive/branches/branch-0.9/ivy/libraries.properties
* /hive/branches/branch-0.9/ql/ivy.xml
* 
/hive/branches/branch-0.9/ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java
* 
/hive/branches/branch-0.9/ql/src/java/org/apache/hadoop/hive/ql/exec/JobDebugger.java
* /hive/branches/branch-0.9/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/branches/branch-0.9/ql/src/test/queries/clientnegative/autolocal1.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/local_mapred_error_cache.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/local_mapred_error_cache_hadoop20.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/mapreduce_stack_trace.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientnegative/mapreduce_stack_trace_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/auto_join14.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/auto_join14_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/combine2.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/combine2_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/ctas.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/ctas_hadoop20.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/groupby7_noskew_multi_single_reducer.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/groupby_complex_types_multi_single_reducer.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/groupby_multi_single_reducer.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input12.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input12_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input39.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/input39_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/join14.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/join14_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/leftsemijoin.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/query_properties.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/repair.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/repair_hadoop20.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/sample_islocalmode_hook.q
* 
/hive/branches/branch-0.9/ql/src/test/queries/clientpositive/sample_islocalmode_hook_hadoop20.q
* /hive/branches/branch-0.9/ql/src/test/queries/clientpositive/split_sample.q
* /hive/branches/branch-0.9/ql/src/test/resources
* /hive/branches/branch-0.9/ql/src/test/resources/core-site.xml
* /hive/branches/branch-0.9/ql/src/test/results/clientnegative/autolocal1.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/local_mapred_error_cache.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/local_mapred_error_cache_hadoop20.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/mapreduce_stack_trace.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientnegative/mapreduce_stack_trace_hadoop20.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/auto_join14.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/auto_join14_hadoop20.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/combine2.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/combine2_hadoop20.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/ctas.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/ctas_hadoop20.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/groupby7_noskew_multi_single_reducer.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/groupby_complex_types_multi_single_reducer.q.out
* 
/hive/branches/branch-0.9/ql/src/test/results/clientpositive/groupby_multi_single_reducer.q.out
* /hive/branches/branch-0.9/ql/src/test/results/clientpositive/input12.q.out
* 

Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #199

2012-11-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/199/changes

Changes:

[cws] HIVE-3437. 0.23 compatibility: fix unit tests when building against 0.23 
(Chris Drome via cws)

--
[...truncated 5860 lines...]
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-yarn-server-common;0.23.3!hadoop-yarn-server-common.jar
 (263ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/google/inject/extensions/guice-servlet/3.0/guice-servlet-3.0.jar
 ...
[ivy:resolve] .. (63kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
com.google.inject.extensions#guice-servlet;3.0!guice-servlet.jar (39ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/google/inject/guice/3.0/guice-3.0.jar ...
[ivy:resolve] 
..
 (693kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] com.google.inject#guice;3.0!guice.jar (98ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/sun/jersey/jersey-test-framework/jersey-test-framework-grizzly2/1.8/jersey-test-framework-grizzly2-1.8.jar
 ...
[ivy:resolve] .. (12kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
com.sun.jersey.jersey-test-framework#jersey-test-framework-grizzly2;1.8!jersey-test-framework-grizzly2.jar
 (76ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/sun/jersey/contribs/jersey-guice/1.8/jersey-guice-1.8.jar
 ...
[ivy:resolve] .. (14kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
com.sun.jersey.contribs#jersey-guice;1.8!jersey-guice.jar (79ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/javax/inject/javax.inject/1/javax.inject-1.jar ...
[ivy:resolve] .. (2kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] javax.inject#javax.inject;1!javax.inject.jar 
(13ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/aopalliance/aopalliance/1.0/aopalliance-1.0.jar 
...
[ivy:resolve] .. (4kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/sonatype/sisu/inject/cglib/2.2.1-v20090111/cglib-2.2.1-v20090111.jar
 ...
[ivy:resolve]  (272kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.sonatype.sisu.inject#cglib;2.2.1-v20090111!cglib.jar (66ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.4.2/zookeeper-3.4.2.jar
 ...
[ivy:resolve] 
..
 (746kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.zookeeper#zookeeper;3.4.2!zookeeper.jar (105ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/jline/jline/0.9.94/jline-0.9.94.jar ...
[ivy:resolve]  (85kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] jline#jline;0.9.94!jline.jar (47ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-shuffle/0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.jar
 ...
[ivy:resolve] .. (15kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] 
[ivy:resolve] :: problems summary ::
[ivy:resolve]  WARNINGS
[ivy:resolve]   problem while downloading module descriptor: 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-annotations/0.23.3/hadoop-annotations-0.23.3.pom:
 
/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-annotations/ivy-0.23.3.xml.original.part
 (No such file or directory) (193ms)
[ivy:resolve]   module not found: 
org.apache.hadoop#hadoop-annotations;0.23.3
[ivy:resolve]    local: tried
[ivy:resolve] 
/home/jenkins/.ivy2/local/org.apache.hadoop/hadoop-annotations/0.23.3/ivys/ivy.xml
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-annotations;0.23.3!hadoop-annotations.jar:
[ivy:resolve] 
/home/jenkins/.ivy2/local/org.apache.hadoop/hadoop-annotations/0.23.3/jars/hadoop-annotations.jar
[ivy:resolve]    apache-snapshot: tried
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-annotations/0.23.3/hadoop-annotations-0.23.3.pom
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-annotations;0.23.3!hadoop-annotations.jar:
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-annotations/0.23.3/hadoop-annotations-0.23.3.jar
[ivy:resolve]    maven2: tried
[ivy:resolve] 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-annotations/0.23.3/hadoop-annotations-0.23.3.pom
[ivy:resolve]    datanucleus-repo: tried
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-annotations;0.23.3!hadoop-annotations.jar:
[ivy:resolve] 
http://www.datanucleus.org/downloads/maven2/org/apache/hadoop/hadoop-annotations/0.23.3/hadoop-annotations-0.23.3.jar
[ivy:resolve]    hadoop-source: tried
[ivy:resolve] -- artifact 

Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #199

2012-11-15 Thread Apache Jenkins Server
See 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/199/changes

Changes:

[cws] HIVE-3437. 0.23 compatibility: fix unit tests when building against 0.23 
(Chris Drome via cws)

--
[...truncated 5970 lines...]
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/codehaus/jettison/jettison/1.1/jettison-1.1.pom
[ivy:resolve] -- artifact org.codehaus.jettison#jettison;1.1!jettison.jar:
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar
[ivy:resolve]    maven2: tried
[ivy:resolve] 
http://repo1.maven.org/maven2/org/codehaus/jettison/jettison/1.1/jettison-1.1.pom
[ivy:resolve]    datanucleus-repo: tried
[ivy:resolve] -- artifact org.codehaus.jettison#jettison;1.1!jettison.jar:
[ivy:resolve] 
http://www.datanucleus.org/downloads/maven2/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar
[ivy:resolve]    hadoop-source: tried
[ivy:resolve] -- artifact org.codehaus.jettison#jettison;1.1!jettison.jar:
[ivy:resolve] 
http://mirror.facebook.net/facebook/hive-deps/hadoop/core/jettison-1.1/jettison-1.1.jar
[ivy:resolve]    hadoop-source2: tried
[ivy:resolve] -- artifact org.codehaus.jettison#jettison;1.1!jettison.jar:
[ivy:resolve] 
http://archive.cloudera.com/hive-deps/hadoop/core/jettison-1.1/jettison-1.1.jar
[ivy:resolve]   problem while downloading module descriptor: 
http://repo1.maven.org/maven2/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.pom:
 
/home/jenkins/.ivy2/cache/com.sun.jersey/jersey-server/ivy-1.8.xml.original.part
 (No such file or directory) (51ms)
[ivy:resolve]   module not found: com.sun.jersey#jersey-server;1.8
[ivy:resolve]    local: tried
[ivy:resolve] 
/home/jenkins/.ivy2/local/com.sun.jersey/jersey-server/1.8/ivys/ivy.xml
[ivy:resolve] -- artifact 
com.sun.jersey#jersey-server;1.8!jersey-server.jar:
[ivy:resolve] 
/home/jenkins/.ivy2/local/com.sun.jersey/jersey-server/1.8/jars/jersey-server.jar
[ivy:resolve]    apache-snapshot: tried
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.pom
[ivy:resolve] -- artifact 
com.sun.jersey#jersey-server;1.8!jersey-server.jar:
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar
[ivy:resolve]    maven2: tried
[ivy:resolve] 
http://repo1.maven.org/maven2/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.pom
[ivy:resolve]    datanucleus-repo: tried
[ivy:resolve] -- artifact 
com.sun.jersey#jersey-server;1.8!jersey-server.jar:
[ivy:resolve] 
http://www.datanucleus.org/downloads/maven2/com/sun/jersey/jersey-server/1.8/jersey-server-1.8.jar
[ivy:resolve]    hadoop-source: tried
[ivy:resolve] -- artifact 
com.sun.jersey#jersey-server;1.8!jersey-server.jar:
[ivy:resolve] 
http://mirror.facebook.net/facebook/hive-deps/hadoop/core/jersey-server-1.8/jersey-server-1.8.jar
[ivy:resolve]    hadoop-source2: tried
[ivy:resolve] -- artifact 
com.sun.jersey#jersey-server;1.8!jersey-server.jar:
[ivy:resolve] 
http://archive.cloudera.com/hive-deps/hadoop/core/jersey-server-1.8/jersey-server-1.8.jar
[ivy:resolve]   problem while downloading module descriptor: 
http://repo1.maven.org/maven2/tomcat/tomcat-parent/5.5.23/tomcat-parent-5.5.23.pom:
 /home/jenkins/.ivy2/cache/tomcat/tomcat-parent/ivy-5.5.23.xml.original.part 
(No such file or directory) (68ms)
[ivy:resolve]   io problem while parsing ivy file: 
http://repo1.maven.org/maven2/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.pom:
 Impossible to load parent for 
file:/home/jenkins/.ivy2/cache/tomcat/jasper-compiler/ivy-5.5.23.xml.original. 
Parent=tomcat#tomcat-parent;5.5.23
[ivy:resolve]   module not found: tomcat#jasper-compiler;5.5.23
[ivy:resolve]    local: tried
[ivy:resolve] 
/home/jenkins/.ivy2/local/tomcat/tomcat-parent/5.5.23/ivys/ivy.xml
[ivy:resolve] -- artifact tomcat#tomcat-parent;5.5.23!tomcat-parent.jar:
[ivy:resolve] 
/home/jenkins/.ivy2/local/tomcat/tomcat-parent/5.5.23/jars/tomcat-parent.jar
[ivy:resolve]    apache-snapshot: tried
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/tomcat/tomcat-parent/5.5.23/tomcat-parent-5.5.23.pom
[ivy:resolve] -- artifact tomcat#tomcat-parent;5.5.23!tomcat-parent.jar:
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/tomcat/tomcat-parent/5.5.23/tomcat-parent-5.5.23.jar
[ivy:resolve]    maven2: tried
[ivy:resolve] 
http://repo1.maven.org/maven2/tomcat/tomcat-parent/5.5.23/tomcat-parent-5.5.23.pom
[ivy:resolve]    datanucleus-repo: tried
[ivy:resolve] -- artifact tomcat#jasper-compiler;5.5.23!jasper-compiler.jar:
[ivy:resolve] 

[jira] [Updated] (HIVE-3633) sort-merge join does not work with sub-queries

2012-11-15 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3633:
-

Attachment: hive.3633.2.patch

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.1.patch, hive.3633.2.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use a sort-merge join. Supporting this case would be 
 very useful, since we automatically convert queries to use sorting and 
 bucketing properties for joins.
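 For contrast, here is a sketch (hypothetical, not taken from the attached 
 patches) of the same join written without the wrapping sub-query; with the 
 settings above, this direct form on these bucketed and sorted tables is 
 eligible for the sort-merge conversion:

```sql
-- Hypothetical contrast case, not part of the hive.3633 patches:
-- the same join with no enclosing sub-query.
set hive.optimize.bucketmapjoin = true;
set hive.optimize.bucketmapjoin.sortedmerge = true;
set hive.input.format = org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

explain
select /*+mapjoin(a)*/ a.key, a.value, b.key, b.value
from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key;
```

 The goal of the patch is to make the sub-query form behave like this direct 
 form.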

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21 #200

2012-11-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21/200/

--
[...truncated 5900 lines...]
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-yarn-api;0.23.3!hadoop-yarn-api.jar (337ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/google/inject/guice/3.0/guice-3.0.jar ...
[ivy:resolve] .. (693kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/sun/jersey/jersey-test-framework/jersey-test-framework-grizzly2/1.8/jersey-test-framework-grizzly2-1.8.jar
 ...
[ivy:resolve] .. (12kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/sun/jersey/contribs/jersey-guice/1.8/jersey-guice-1.8.jar
 ...
[ivy:resolve] .. (14kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
com.sun.jersey.contribs#jersey-guice;1.8!jersey-guice.jar (132ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/javax/inject/javax.inject/1/javax.inject-1.jar ...
[ivy:resolve] .. (2kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] javax.inject#javax.inject;1!javax.inject.jar 
(132ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/aopalliance/aopalliance/1.0/aopalliance-1.0.jar 
...
[ivy:resolve] .. (4kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] aopalliance#aopalliance;1.0!aopalliance.jar 
(128ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/sonatype/sisu/inject/cglib/2.2.1-v20090111/cglib-2.2.1-v20090111.jar
 ...
[ivy:resolve] . (272kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.sonatype.sisu.inject#cglib;2.2.1-v20090111!cglib.jar (190ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-server-common/0.23.3/hadoop-yarn-server-common-0.23.3.jar
 ...
[ivy:resolve] . (147kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.4.2/zookeeper-3.4.2.jar
 ...
[ivy:resolve] .. 
(746kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.zookeeper#zookeeper;3.4.2!zookeeper.jar (268ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/jline/jline/0.9.94/jline-0.9.94.jar ...
[ivy:resolve]  (85kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] jline#jline;0.9.94!jline.jar (105ms)
[ivy:resolve] 
[ivy:resolve] :: problems summary ::
[ivy:resolve]  WARNINGS
[ivy:resolve]   problem while downloading module descriptor: 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-shuffle/0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.pom:
 
/home/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-mapreduce-client-shuffle/ivy-0.23.3.xml.original.part
 (No such file or directory) (236ms)
[ivy:resolve]   module not found: 
org.apache.hadoop#hadoop-mapreduce-client-shuffle;0.23.3
[ivy:resolve]    local: tried
[ivy:resolve] 
/home/jenkins/.ivy2/local/org.apache.hadoop/hadoop-mapreduce-client-shuffle/0.23.3/ivys/ivy.xml
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-mapreduce-client-shuffle;0.23.3!hadoop-mapreduce-client-shuffle.jar:
[ivy:resolve] 
/home/jenkins/.ivy2/local/org.apache.hadoop/hadoop-mapreduce-client-shuffle/0.23.3/jars/hadoop-mapreduce-client-shuffle.jar
[ivy:resolve]    apache-snapshot: tried
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-shuffle/0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.pom
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-mapreduce-client-shuffle;0.23.3!hadoop-mapreduce-client-shuffle.jar:
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-mapreduce-client-shuffle/0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.jar
[ivy:resolve]    maven2: tried
[ivy:resolve] 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-shuffle/0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.pom
[ivy:resolve]    datanucleus-repo: tried
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-mapreduce-client-shuffle;0.23.3!hadoop-mapreduce-client-shuffle.jar:
[ivy:resolve] 
http://www.datanucleus.org/downloads/maven2/org/apache/hadoop/hadoop-mapreduce-client-shuffle/0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.jar
[ivy:resolve]    hadoop-source: tried
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-mapreduce-client-shuffle;0.23.3!hadoop-mapreduce-client-shuffle.jar:
[ivy:resolve] 
http://mirror.facebook.net/facebook/hive-deps/hadoop/core/hadoop-mapreduce-client-shuffle-0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.jar
[ivy:resolve]    hadoop-source2: tried
[ivy:resolve] -- artifact 
org.apache.hadoop#hadoop-mapreduce-client-shuffle;0.23.3!hadoop-mapreduce-client-shuffle.jar:
[ivy:resolve] 

Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #200

2012-11-15 Thread Apache Jenkins Server
See 
https://builds.apache.org/job/Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false/200/

--
[...truncated 5959 lines...]
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/google/inject/extensions/guice-servlet/3.0/guice-servlet-3.0.jar
 ...
[ivy:resolve]  (63kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar 
...
[ivy:resolve] 
..
 (11446kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] com.cenqua.clover#clover;3.0.2!clover.jar (979ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-api/0.23.3/hadoop-yarn-api-0.23.3.jar
 ...
[ivy:resolve]  
(871kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/google/inject/guice/3.0/guice-3.0.jar ...
[ivy:resolve] .. (693kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/sun/jersey/jersey-test-framework/jersey-test-framework-grizzly2/1.8/jersey-test-framework-grizzly2-1.8.jar
 ...
[ivy:resolve] .. (12kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/com/sun/jersey/contribs/jersey-guice/1.8/jersey-guice-1.8.jar
 ...
[ivy:resolve] .. (14kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/javax/inject/javax.inject/1/javax.inject-1.jar ...
[ivy:resolve] .. (2kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] javax.inject#javax.inject;1!javax.inject.jar 
(102ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/aopalliance/aopalliance/1.0/aopalliance-1.0.jar 
...
[ivy:resolve] .. (4kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/sonatype/sisu/inject/cglib/2.2.1-v20090111/cglib-2.2.1-v20090111.jar
 ...
[ivy:resolve]  (272kB)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-server-common/0.23.3/hadoop-yarn-server-common-0.23.3.jar
 ...
[ivy:resolve] .. (147kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-yarn-server-common;0.23.3!hadoop-yarn-server-common.jar
 (289ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-mapreduce-client-shuffle/0.23.3/hadoop-mapreduce-client-shuffle-0.23.3.jar
 ...
[ivy:resolve] .. (15kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-mapreduce-client-shuffle;0.23.3!hadoop-mapreduce-client-shuffle.jar
 (252ms)
[ivy:resolve] downloading 
http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-server-nodemanager/0.23.3/hadoop-yarn-server-nodemanager-0.23.3.jar
 ...
[ivy:resolve] .. (385kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ] 
org.apache.hadoop#hadoop-yarn-server-nodemanager;0.23.3!hadoop-yarn-server-nodemanager.jar
 (410ms)
[ivy:resolve] 
[ivy:resolve] :: problems summary ::
[ivy:resolve]  WARNINGS
[ivy:resolve]   impossible to put metadata file in cache: 
http://repo1.maven.org/maven2/jdiff/jdiff/1.0.9/jdiff-1.0.9.pom (1.0.9). 
java.io.FileNotFoundException: 
/home/jenkins/.ivy2/cache/jdiff/jdiff/ivy-1.0.9.xml.original (No such file or 
directory)
[ivy:resolve]   problem while downloading module descriptor: 
http://repo1.maven.org/maven2/org/apache/zookeeper/zookeeper/3.4.2/zookeeper-3.4.2.pom:
 
/home/jenkins/.ivy2/cache/org.apache.zookeeper/zookeeper/ivy-3.4.2.xml.original.part
 (No such file or directory) (101ms)
[ivy:resolve]   module not found: org.apache.zookeeper#zookeeper;3.4.2
[ivy:resolve]    local: tried
[ivy:resolve] 
/home/jenkins/.ivy2/local/org.apache.zookeeper/zookeeper/3.4.2/ivys/ivy.xml
[ivy:resolve] -- artifact 
org.apache.zookeeper#zookeeper;3.4.2!zookeeper.jar:
[ivy:resolve] 
/home/jenkins/.ivy2/local/org.apache.zookeeper/zookeeper/3.4.2/jars/zookeeper.jar
[ivy:resolve]    apache-snapshot: tried
[ivy:resolve] 
https://repository.apache.org/content/repositories/snapshots/org/apache/zookeeper/zookeeper/3.4.2/zookeeper-3.4.2.pom
[ivy:resolve] -- artifact 
org.apache.zookeeper#zookeeper;3.4.2!zookeeper.jar:
[ivy:resolve] 

[jira] [Updated] (HIVE-3680) Include Table information in Hive's AddPartitionEvent.

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3680:
---

   Resolution: Fixed
Fix Version/s: (was: 0.9.1)
   0.10.0
 Assignee: Mithun Radhakrishnan
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Mithun!

 Include Table information in Hive's AddPartitionEvent.
 --

 Key: HIVE-3680
 URL: https://issues.apache.org/jira/browse/HIVE-3680
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.9.1
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Fix For: 0.10.0

 Attachments: HIVE-3680.branch9.patch, HIVE-3680.trunk.patch


 This has to do with a minor overhaul of the HCatalog notifications that we're 
 attempting in HCATALOG-546.
 It is proposed that HCatalog's notifications (on Add/Drop of Partitions) 
 provide details to identify the affected partitions. 
 Using the Partition object in AddPartitionEvent, one is able to retrieve the 
 values of the partition-keys and the name of the Table. However, the 
 partition-keys themselves aren't available (since the Table instance isn't 
 part of the AddPartitionEvent).
 Adding the table-reference to the AddPartitionEvent and DropPartitionEvent 
 classes will expose all the info we need. (The alternative is to query the 
 metastore for the table's schema and use the partition-keys from there. :/)
 I'll post a patch for this shortly.



[jira] [Commented] (HIVE-2693) Add DECIMAL data type

2012-11-15 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498098#comment-13498098
 ] 

Vikram Dixit K commented on HIVE-2693:
--

Hi Carl, I have already done some work addressing the review comments, and I 
am planning to continue working on this. Let me know how you want to proceed. 
Thanks, Vikram.

 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-1.patch.txt, HIVE-2693-all.patch, HIVE-2693-fix.patch, 
 HIVE-2693.patch, HIVE-2693-take3.patch, HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Commented] (HIVE-3632) datanucleus breaks when using JDK7

2012-11-15 Thread Andy Jefferson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498101#comment-13498101
 ] 

Andy Jefferson commented on HIVE-3632:
--

You refer to a DataNucleus JIRA issue that was marked fixed in June 2012. How 
does that imply that they don't plan to actively support JDK7+ bytecode any 
time soon? DataNucleus 3.1.x supports JDK 1.7+ and has for some time. There 
are zero reported problems using DataNucleus v3.1 with JDK 1.7. You also don't 
define "not successful".

 datanucleus breaks when using JDK7
 --

 Key: HIVE-3632
 URL: https://issues.apache.org/jira/browse/HIVE-3632
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0, 0.9.1
Reporter: Chris Drome
Priority: Critical

 I found serious problems with datanucleus code when using JDK7, resulting in 
 some sort of exception being thrown when datanucleus code is entered.
 I tried source=1.7, target=1.7 with JDK7 as well as source=1.6, target=1.6 
 with JDK7 and there was no visible difference in that the same unit tests 
 failed.
 I tried upgrading datanucleus to 3.0.1, as per HIVE-2084.patch, which did not 
 fix the failing tests.
 I tried upgrading datanucleus to 3.1-release, as per the advice of 
 http://www.datanucleus.org/servlet/jira/browse/NUCENHANCER-86, which suggests 
 using ASMv4 will allow datanucleus to work with JDK7. I was not successful 
 with this either.
 I tried upgrading datanucleus to 3.1.2. I was not successful with this either.
 Regarding datanucleus support for JDK7+, there is the following JIRA
 http://www.datanucleus.org/servlet/jira/browse/NUCENHANCER-81
 which suggests that they don't plan to actively support JDK7+ bytecode any 
 time soon.
 I also tested the following JVM parameters found on
 http://veerasundar.com/blog/2012/01/java-lang-verifyerror-expecting-a-stackmap-frame-at-branch-target-jdk-7/
 with no success either.
 This will become a more serious problem as people move to newer JVMs. If 
 there are others who have solved this issue, please post how it was done. 
 Otherwise, it is a topic that I would like to raise for discussion.



[jira] [Updated] (HIVE-3291) fix fs resolvers

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3291:
---

   Resolution: Fixed
Fix Version/s: 0.10.0
 Assignee: Ashish Singh  (was: Giridharan Kesavan)
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Ashish!

 fix fs resolvers 
 -

 Key: HIVE-3291
 URL: https://issues.apache.org/jira/browse/HIVE-3291
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Giridharan Kesavan
Assignee: Ashish Singh
 Fix For: 0.10.0

 Attachments: HIVE-3291.patch, HIVE-3291.patch1, HIVE-3291.patch2


 The shims module fails to compile when building hive against hadoop 1.0 using 
 the fs resolvers, because the force=true flag forces it to use the available 
 version of hadoop.
 In a scenario where you build against hadoop-1.0, shims still needs to build 
 against 20.2; if you happen to use the fs resolvers (i.e. -Dresolvers=true), 
 they resolve hadoop 1.0 for shims as well, and shims compilation fails.



[jira] [Commented] (HIVE-3680) Include Table information in Hive's AddPartitionEvent.

2012-11-15 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498107#comment-13498107
 ] 

Mithun Radhakrishnan commented on HIVE-3680:


Thank you, [~ashutoshc]. :] Any chance at a backport to branch-9? The patch is 
included.

 Include Table information in Hive's AddPartitionEvent.
 --

 Key: HIVE-3680
 URL: https://issues.apache.org/jira/browse/HIVE-3680
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.9.1
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Fix For: 0.10.0

 Attachments: HIVE-3680.branch9.patch, HIVE-3680.trunk.patch


 This has to do with a minor overhaul of the HCatalog notifications that we're 
 attempting in HCATALOG-546.
 It is proposed that HCatalog's notifications (on Add/Drop of Partitions) 
 provide details to identify the affected partitions. 
 Using the Partition object in AddPartitionEvent, one is able to retrieve the 
 values of the partition-keys and the name of the Table. However, the 
 partition-keys themselves aren't available (since the Table instance isn't 
 part of the AddPartitionEvent).
 Adding the table-reference to the AddPartitionEvent and DropPartitionEvent 
 classes will expose all the info we need. (The alternative is to query the 
 metastore for the table's schema and use the partition-keys from there. :/)
 I'll post a patch for this shortly.



[jira] [Commented] (HIVE-2693) Add DECIMAL data type

2012-11-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498110#comment-13498110
 ] 

Ashutosh Chauhan commented on HIVE-2693:


Vikram,
Current patch also lacks test coverage. Please add the following tests:
a) A test that loads data into a table with a decimal column by reading from a 
text file.
b) A test that does a group-by / join on the decimal column.
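
A rough sketch of what such q-file tests could look like (table, column, and 
file names here are illustrative assumptions, not taken from any attached 
patch):

```sql
-- (a) Hypothetical test: load decimal data into a table from a text file.
create table dec_test (id int, amount decimal)
row format delimited fields terminated by '\t'
stored as textfile;
load data local inpath '../data/files/dec_test.txt' into table dec_test;
select id, amount from dec_test;

-- (b) Hypothetical test: group-by and join on the decimal column.
select amount, count(*) from dec_test group by amount;
select a.id, b.id
from dec_test a join dec_test b on a.amount = b.amount;
```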

 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-1.patch.txt, HIVE-2693-all.patch, HIVE-2693-fix.patch, 
 HIVE-2693.patch, HIVE-2693-take3.patch, HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Commented] (HIVE-3520) ivysettings.xml does not let you override .m2/repository

2012-11-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498120#comment-13498120
 ] 

Ashutosh Chauhan commented on HIVE-3520:


+1 will commit if tests pass.

 ivysettings.xml does not let you override .m2/repository
 

 Key: HIVE-3520
 URL: https://issues.apache.org/jira/browse/HIVE-3520
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Giridharan Kesavan
Assignee: Raja Aluri
 Attachments: HIVE-3520.patch


 ivysettings.xml does not let you override .m2/repository. In other words, the 
 repo.dir setting in ivysettings.xml should be an overridable property.
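
 For illustration, a common Ivy idiom for this (a sketch only, under the 
 assumption that the patch takes a similar approach; names may differ from 
 HIVE-3520.patch): declare repo.dir with override="false", so a value already 
 defined, e.g. passed in from Ant or -D on the command line, takes precedence 
 over the default:

```xml
<!-- Sketch only; the actual patch may differ. override="false" means a
     previously defined repo.dir (e.g. supplied by Ant) wins over this
     default value. -->
<ivysettings>
  <property name="repo.dir" value="${user.home}/.m2/repository" override="false"/>
  <resolvers>
    <filesystem name="maven2-local" m2compatible="true">
      <artifact pattern="${repo.dir}/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
    </filesystem>
  </resolvers>
</ivysettings>
```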



[jira] [Commented] (HIVE-3435) Get pdk pluginTest passed when triggered from both builtin tests and pdk tests on hadoop23

2012-11-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498170#comment-13498170
 ] 

Ashutosh Chauhan commented on HIVE-3435:


+1 Looks good to me. Will commit if tests pass.

 Get pdk pluginTest passed when triggered from both builtin tests and pdk 
 tests on hadoop23 
 ---

 Key: HIVE-3435
 URL: https://issues.apache.org/jira/browse/HIVE-3435
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Attachments: HIVE-3435.1.patch.txt, HIVE-3435.2.patch.txt, 
 HIVE-3435.3.patch.txt


 Hive's pdk pluginTest runs twice during unit testing: once triggered from the 
 builtin tests, and once triggered from the pdk tests.
 HIVE-3413 fixed pdk pluginTest on hadoop23 when triggered from the builtin 
 tests. However, when triggered directly from the pdk tests on hadoop23, it 
 still fails:
 Testcase: SELECT tp_rot13('Mixed Up!') FROM onerow; took 6.426 sec
 FAILED
 expected:<[]Zvkrq Hc!> but was:<[2012-09-04 18:13:01,668 WARN [main] 
 conf.HiveConf (HiveConf.java:<clinit>(73)) - hive-site.xml not found on 
 CLASSPATH
 ]Zvkrq Hc!>
 junit.framework.ComparisonFailure: expected:<[]Zvkrq Hc!> but 
 was:<[2012-09-04 18:13:01,668 WARN [main] conf.HiveConf 
 (HiveConf.java:<clinit>(73)) - hive-site.xml not found on CLASSPATH
 ]Zvkrq Hc!>



Re: Review Request: DB based token store

2012-11-15 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7941/#review13476
---



trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
https://reviews.apache.org/r/7941/#comment28882

Yeah.. will refactor that.


- Ashutosh Chauhan


On Nov. 13, 2012, 8:45 a.m., Ashutosh Chauhan wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/7941/
 ---
 
 (Updated Nov. 13, 2012, 8:45 a.m.)
 
 
 Review request for hive and Carl Steinbach.
 
 
 Description
 ---
 
 DB based token store
 
 
 This addresses bug HIVE-3255.
 https://issues.apache.org/jira/browse/HIVE-3255
 
 
 Diffs
 -
 
   trunk/metastore/scripts/upgrade/derby/012-HIVE-3255.derby.sql PRE-CREATION 
   trunk/metastore/scripts/upgrade/derby/upgrade-0.9.0-to-0.10.0.derby.sql 
 1408480 
   trunk/metastore/scripts/upgrade/mysql/012-HIVE-3255.mysql.sql PRE-CREATION 
   trunk/metastore/scripts/upgrade/mysql/upgrade-0.9.0-to-0.10.0.mysql.sql 
 1408480 
   trunk/metastore/scripts/upgrade/oracle/012-HIVE-3255.oracle.sql 
 PRE-CREATION 
   trunk/metastore/scripts/upgrade/oracle/upgrate-0.9.0-to-0.10.0.oracle.sql 
 PRE-CREATION 
   trunk/metastore/scripts/upgrade/postgres/012-HIVE-3255.postgres.sql 
 PRE-CREATION 
   
 trunk/metastore/scripts/upgrade/postgres/upgrade-0.9.0-to-0.10.0.postgres.sql 
 1408480 
   
 trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
 1408480 
   trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
 1408480 
   trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java 
 1408480 
   
 trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MDelegationToken.java
  PRE-CREATION 
   
 trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MMasterKey.java
  PRE-CREATION 
   trunk/metastore/src/model/package.jdo 1408480 
   
 trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
  1408480 
   
 trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/DBTokenStore.java
  PRE-CREATION 
   
 trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/DelegationTokenStore.java
  1408480 
   
 trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java
  1408480 
   
 trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/MemoryTokenStore.java
  1408480 
   
 trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/ZooKeeperTokenStore.java
  1408480 
   
 trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestDBTokenStore.java
  PRE-CREATION 
   
 trunk/shims/src/common/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge.java
  1408480 
 
 Diff: https://reviews.apache.org/r/7941/diff/
 
 
 Testing
 ---
 
 Includes unit test
 
 
 Thanks,
 
 Ashutosh Chauhan
 




Re: Review Request: DB based token store

2012-11-15 Thread Ashutosh Chauhan


 On Nov. 15, 2012, 6:35 a.m., Mark Grover wrote:
  trunk/metastore/scripts/upgrade/mysql/upgrade-0.9.0-to-0.10.0.mysql.sql, 
  line 4
  https://reviews.apache.org/r/7941/diff/2/?file=19#file19line4
 
  Are the prefix numbers there to keep things sorted in the 
  upgrade/database directory?
  
  If so,  (I am being nitpicky) why use 12 instead of 11? Moreover, you 
  are using 011 as prefix in oracle directory while 012 in mysql. Should we 
  try to be consistent?

Because when I generated the patch, there wasn't 11, but now there is. 
https://issues.apache.org/jira/browse/HIVE-3704


 On Nov. 15, 2012, 6:35 a.m., Mark Grover wrote:
  trunk/metastore/scripts/upgrade/oracle/upgrate-0.9.0-to-0.10.0.oracle.sql, 
  line 1
  https://reviews.apache.org/r/7941/diff/2/?file=188891#file188891line1
 
  Fix filename to have upgrade (instead of upgrate) :-)

Will do.


 On Nov. 15, 2012, 6:35 a.m., Mark Grover wrote:
  trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java, 
  line 430
  https://reviews.apache.org/r/7941/diff/2/?file=188896#file188896line430
 
  Any particular reason why this is not abstract as well?

That doesn't make a difference; it's a method in an interface. But for 
consistency, I will add abstract there.


 On Nov. 15, 2012, 6:35 a.m., Mark Grover wrote:
  trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java, 
  line 5310
  https://reviews.apache.org/r/7941/diff/2/?file=188895#file188895line5310
 
  Nitpicky: The same code to query for the token appears in addToken(), 
  removeToken(), getToken(). Should we consider refactoring it?

Yeah, makes sense. Will do.


- Ashutosh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7941/#review13461
---





[jira] [Commented] (HIVE-3706) getBoolVar in FileSinkOperator can be optimized

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498207#comment-13498207
 ] 

Hudson commented on HIVE-3706:
--

Integrated in Hive-trunk-h0.21 #1798 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1798/])
HIVE-3706 getBoolVar in FileSinkOperator can be optimized
(Kevin Wilfong via namit) (Revision 1409691)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409691
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java


 getBoolVar in FileSinkOperator can be optimized
 ---

 Key: HIVE-3706
 URL: https://issues.apache.org/jira/browse/HIVE-3706
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.10.0

 Attachments: HIVE-3706.1.patch.txt


 There's a call to HiveConf.getBoolVar in FileSinkOperator's processOp method. 
  In benchmarks we found this call to be using ~2% of the CPU time on simple 
 queries, e.g. INSERT OVERWRITE TABLE t1 SELECT * FROM t2;
 This boolean value, a flag to collect the RawDataSize stat, won't change 
 during the processing of a query, so we can determine it at initialization 
 and store that value, saving that CPU.
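
The optimization described above — reading an invariant flag once at operator initialization instead of on every row — can be sketched as follows. This is a hedged illustration only: the class, the config lookup, and the row accounting are simplified stand-ins, not Hive's actual FileSinkOperator Java code.

```python
# Sketch: hoist a per-row config lookup out of the hot path.
class FileSinkSketch:
    def __init__(self, conf):
        # Read the flag once at init; it cannot change mid-query, so a
        # string-keyed config lookup per row is wasted work.
        self.collect_raw_data_size = conf.get(
            "hive.stats.collect.rawdatasize", False)  # key name assumed
        self.raw_data_size = 0

    def process_row(self, row):
        # The per-row path now tests a cached boolean.
        if self.collect_raw_data_size:
            self.raw_data_size += len(str(row))

op = FileSinkSketch({"hive.stats.collect.rawdatasize": True})
for r in [(1, "a"), (2, "bb")]:
    op.process_row(r)
print(op.raw_data_size)
```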



[jira] [Commented] (HIVE-3471) Implement grouping sets in hive

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498208#comment-13498208
 ] 

Hudson commented on HIVE-3471:
--

Integrated in Hive-trunk-h0.21 #1798 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1798/])
HIVE-3471 Implement grouping sets in hive
(Ivan Gorbachev via namit) (Revision 1409664)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409664
Files : 
* /hive/trunk/data/files/grouping_sets.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientnegative/groupby_grouping_sets1.q
* /hive/trunk/ql/src/test/queries/clientnegative/groupby_grouping_sets2.q
* /hive/trunk/ql/src/test/queries/clientnegative/groupby_grouping_sets3.q
* /hive/trunk/ql/src/test/queries/clientnegative/groupby_grouping_sets4.q
* /hive/trunk/ql/src/test/queries/clientnegative/groupby_grouping_sets5.q
* /hive/trunk/ql/src/test/queries/clientpositive/groupby_grouping_sets1.q
* /hive/trunk/ql/src/test/results/clientnegative/groupby_grouping_sets1.q.out
* /hive/trunk/ql/src/test/results/clientnegative/groupby_grouping_sets2.q.out
* /hive/trunk/ql/src/test/results/clientnegative/groupby_grouping_sets3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/groupby_grouping_sets4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/groupby_grouping_sets5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby_grouping_sets1.q.out


 Implement grouping sets in hive
 ---

 Key: HIVE-3471
 URL: https://issues.apache.org/jira/browse/HIVE-3471
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Reporter: Namit Jain
Assignee: Ivan Gorbachev
 Fix For: 0.10.0

 Attachments: jira-3471.0.patch, jira-3471.1.patch, jira-3471.2.patch, 
 jira-3471.3.pach






[jira] [Commented] (HIVE-3707) Round map/reduce progress down when it is in the range [99.5, 100)

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498209#comment-13498209
 ] 

Hudson commented on HIVE-3707:
--

Integrated in Hive-trunk-h0.21 #1798 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1798/])
HIVE-3707 Round map/reduce progress down when it is in the range [99.5, 100)
(Kevin Wilfong via namit) (Revision 1409680)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409680
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HadoopJobExecHelper.java


 Round map/reduce progress down when it is in the range [99.5, 100)
 --

 Key: HIVE-3707
 URL: https://issues.apache.org/jira/browse/HIVE-3707
 Project: Hive
  Issue Type: Improvement
  Components: Logging, Query Processor
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
Priority: Minor
 Fix For: 0.10.0

 Attachments: HIVE-3707.1.patch.txt


 In HadoopJobExecHelper, mapProgress and reduceProgress are the values of 
 these counters taken from the running job, rounded to an integer percentage.  
 This means that, e.g., if the mappers are 99.5% done, this is stored as 100%.
 One of the most common questions I see from new users is: "the map and reduce 
 both report being 100% done, why is the query still running?"
 By rounding the value down in this interval, so that it's only 100% when it's 
 really 100%, we could avoid that confusion.
 Also, it appears the QueryPlan and MapRedTask determine whether the 
 map/reduce phases are done by checking if this value == 100.  I couldn't 
 find anywhere where they're used for anything significant, but they're 
 reporting early completion.
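
The rounding rule described above can be sketched as follows. This is a hedged illustration in Python, not Hive's actual HadoopJobExecHelper Java code; the function name is invented for the example.

```python
def progress_percent(fraction):
    """Convert a job's progress fraction (0.0..1.0) to an integer
    percentage, flooring anything in [99.5, 100) to 99 so that 100%
    is only ever reported when the phase is truly complete."""
    pct = fraction * 100.0
    if 99.5 <= pct < 100.0:
        return 99           # almost done, but don't claim 100%
    return int(round(pct))  # normal rounding elsewhere

print(progress_percent(0.999))  # stays at 99
print(progress_percent(1.0))    # only now reports 100
```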



Hive-trunk-h0.21 - Build # 1798 - Still Failing

2012-11-15 Thread Apache Jenkins Server
Changes for Build #1764
[kevinwilfong] HIVE-3610. Add a command Explain dependency ... (Sambavi 
Muthukrishnan via kevinwilfong)


Changes for Build #1765

Changes for Build #1766
[hashutosh] HIVE-3441 : testcases escape1,escape2 fail on windows (Thejas Nair 
via Ashutosh Chauhan)

[kevinwilfong] HIVE-3499. add tests to use bucketing metadata for partitions. 
(njain via kevinwilfong)


Changes for Build #1767
[kevinwilfong] HIVE-3276. optimize union sub-queries. (njain via kevinwilfong)


Changes for Build #1768

Changes for Build #1769

Changes for Build #1770
[namit] HIVE-3570 Add/fix facility to collect operator specific statistics in 
hive + add hash-in/hash-out
counter for GroupBy Optr (Satadru Pan via namit)

[namit] HIVE-3554 Hive List Bucketing - Query logic
(Gang Tim Liu via namit)

[cws] HIVE-3563. Drop database cascade fails when there are indexes on any 
tables (Prasad Mujumdar via cws)


Changes for Build #1771
[kevinwilfong] HIVE-3640. Reducer allocation is incorrect if enforce bucketing 
and mapred.reduce.tasks are both set. (Vighnesh Avadhani via kevinwilfong)


Changes for Build #1772

Changes for Build #1773

Changes for Build #1774

Changes for Build #1775
[namit] HIVE-3673 Sort merge join not used when join columns have different 
names
(Kevin Wilfong via namit)


Changes for Build #1776
[kevinwilfong] HIVE-3627. eclipse misses library: 
javolution-@javolution-version@.jar. (Gang Tim Liu via kevinwilfong)


Changes for Build #1777
[kevinwilfong] HIVE-3524. Storing certain Exception objects thrown in 
HiveMetaStore.java in MetaStoreEndFunctionContext. (Maheshwaran Srinivasan via 
kevinwilfong)

[cws] HIVE-1977. DESCRIBE TABLE syntax doesn't support specifying a database 
qualified table name (Zhenxiao Luo via cws)

[cws] HIVE-3674. Test case TestParse broken after recent checkin (Sambavi 
Muthukrishnan via cws)


Changes for Build #1778
[cws] HIVE-1362. Column level scalar valued statistics on Tables and Partitions 
(Shreepadma Venugopalan via cws)


Changes for Build #1779

Changes for Build #1780
[kevinwilfong] HIVE-3686. Fix compile errors introduced by the interaction of 
HIVE-1362 and HIVE-3524. (Shreepadma Venugopalan via kevinwilfong)


Changes for Build #1781
[namit] HIVE-3687 smb_mapjoin_13.q is nondeterministic
(Kevin Wilfong via namit)


Changes for Build #1782
[hashutosh] HIVE-2715: Upgrade Thrift dependency to 0.9.0 (Ashutosh Chauhan)


Changes for Build #1783
[kevinwilfong] HIVE-3654. block relative path access in hive. (njain via 
kevinwilfong)

[hashutosh] HIVE-3658 : Unable to generate the Hbase related unit tests using 
velocity templates on Windows (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3661 : Remove the Windows specific = related swizzle path 
changes from Proxy FileSystems (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3480 : Resource leak: Fix the file handle leaks in Symbolic 
& Symlink related input formats. (Kanna Karanam via Ashutosh Chauhan)


Changes for Build #1784
[kevinwilfong] HIVE-3675. NaN does not work correctly for round(n). (njain via 
kevinwilfong)

[cws] HIVE-3651. bucketmapjoin?.q tests fail with hadoop 0.23 (Prasad Mujumdar 
via cws)


Changes for Build #1785
[namit] HIVE-3613 Implement grouping_id function
(Ian Gorbachev via namit)

[namit] HIVE-3692 Update parallel test documentation
(Ivan Gorbachev via namit)

[namit] HIVE-3649 Hive List Bucketing - enhance DDL to specify list bucketing 
table
(Gang Tim Liu via namit)


Changes for Build #1786
[namit] HIVE-3696 Revert HIVE-3483 which causes performance regression
(Gang Tim Liu via namit)


Changes for Build #1787
[kevinwilfong] HIVE-3621. Make prompt in Hive CLI configurable. (Jingwei Lu via 
kevinwilfong)

[kevinwilfong] HIVE-3695. TestParse breaks due to HIVE-3675. (njain via 
kevinwilfong)


Changes for Build #1788
[kevinwilfong] HIVE-3557. Access to external URLs in hivetest.py. (Ivan 
Gorbachev via kevinwilfong)


Changes for Build #1789
[hashutosh] HIVE-3662 : TestHiveServer: testScratchDirShouldClearWhileStartup 
is failing on Windows (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3659 : TestHiveHistory::testQueryloglocParentDirNotExist Test 
fails on Windows because of some resource leaks in ZK (Kanna Karanam via 
Ashutosh Chauhan)

[hashutosh] HIVE-3663 Unable to display the MR Job file path on Windows in case 
of MR job failures.  (Kanna Karanam via Ashutosh Chauhan)


Changes for Build #1790

Changes for Build #1791

Changes for Build #1792

Changes for Build #1793
[hashutosh] HIVE-3704 : name of some metastore scripts are not per convention 
(Ashutosh Chauhan)


Changes for Build #1794
[hashutosh] HIVE-3243 : ignore white space between entries of hive/hbase table 
mapping (Shengsheng Huang via Ashutosh Chauhan)

[hashutosh] HIVE-3215 : JobDebugger should use RunningJob.getTrackingURL 
(Bhushan Mandhani via Ashutosh Chauhan)


Changes for Build #1795
[cws] HIVE-3437. 0.23 compatibility: fix unit tests when building against 0.23 
(Chris Drome via cws)


[jira] [Updated] (HIVE-3633) sort-merge join does not work with sub-queries

2012-11-15 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-3633:
-

Attachment: hive.3633.3.patch

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.1.patch, hive.3633.2.patch, hive.3633.3.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use sort-merge join. This would be very useful as we 
 automatically convert the queries to use sorting and bucketing properties for 
 join.



[jira] [Commented] (HIVE-3633) sort-merge join does not work with sub-queries

2012-11-15 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498230#comment-13498230
 ] 

Namit Jain commented on HIVE-3633:
--

running tests

 sort-merge join does not work with sub-queries
 --

 Key: HIVE-3633
 URL: https://issues.apache.org/jira/browse/HIVE-3633
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3633.1.patch, hive.3633.2.patch, hive.3633.3.patch


 Consider the following query:
 create table smb_bucket_1(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 create table smb_bucket_2(key int, value string) CLUSTERED BY (key) SORTED BY 
 (key) INTO 6 BUCKETS STORED AS TEXTFILE;
 -- load the above tables
 set hive.optimize.bucketmapjoin = true;
 set hive.optimize.bucketmapjoin.sortedmerge = true;
 set hive.input.format = 
 org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;
 explain
 select count(*) from
 (
 select /*+mapjoin(a)*/ a.key as key1, b.key as key2, a.value as value1, 
 b.value as value2
 from smb_bucket_1 a join smb_bucket_2 b on a.key = b.key)
 subq;
 The above query does not use sort-merge join. This would be very useful as we 
 automatically convert the queries to use sorting and bucketing properties for 
 join.



[jira] [Commented] (HIVE-3645) RCFileWriter does not implement the right function to support Federation

2012-11-15 Thread Arup Malakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498262#comment-13498262
 ] 

Arup Malakar commented on HIVE-3645:



Looking at PIG-2791, it looks like the following needs to be done:

1. Use getDefaultBlockSize(Path) and getDefaultReplication(Path) instead of 
getDefaultBlockSize() and getDefaultReplication(), since the variants without 
a Path argument won't work with a federated namenode. These methods need to 
be shimmed.
 
2. Bump the hadoop dependency to 2.0.0-alpha, as 
getDefaultBlockSize(Path)/getDefaultReplication(Path) are not available in 
0.23.1.
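
The shim pattern in step 1 can be sketched as follows. This is a hedged illustration: the class and method names are hypothetical stand-ins, not Hive's actual shim API or Hadoop's FileSystem classes. The point is to prefer the Path-aware variant when the filesystem provides one (required under viewfs/federation, where defaults depend on which mount point the path resolves to), and fall back to the no-arg variant otherwise.

```python
class FederatedFs:
    # Stand-in for a filesystem exposing the Path-aware API
    # (available from Hadoop 2.0.0-alpha onward).
    def get_default_block_size(self, path):
        return 128 * 1024 * 1024

class LegacyFs:
    # Stand-in for an older filesystem with only the no-arg API.
    def get_default_block_size(self):
        return 64 * 1024 * 1024

def default_block_size(fs, path):
    try:
        # Path-aware call: lets viewfs resolve the mount point.
        return fs.get_default_block_size(path)
    except TypeError:
        # Older API without a Path parameter: fall back.
        return fs.get_default_block_size()

print(default_block_size(FederatedFs(), "/database/t"))  # path-aware branch
print(default_block_size(LegacyFs(), "/database/t"))     # fallback branch
```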


 RCFileWriter does not implement the right function to support Federation
 

 Key: HIVE-3645
 URL: https://issues.apache.org/jira/browse/HIVE-3645
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0, 0.10.0
 Environment: Hadoop 0.23.3 federation, Hive 0.9 and Pig 0.10
Reporter: Viraj Bhat

 Create a table using Hive DDL
 {code}
 CREATE TABLE tmp_hcat_federated_numbers_part_1 (
   id       int,
   intnum   int,
   floatnum float
 ) partitioned by (
   part1 string,
   part2 string
 )
 STORED AS rcfile
 LOCATION 'viewfs:///database/tmp_hcat_federated_numbers_part_1';
 {code}
 Populate it using Pig:
 {code}
 A = load 'default.numbers_pig' using org.apache.hcatalog.pig.HCatLoader();
 B = filter A by id =  500;
 C = foreach B generate (int)id, (int)intnum, (float)floatnum;
 store C into
 'default.tmp_hcat_federated_numbers_part_1'
 using org.apache.hcatalog.pig.HCatStorer
('part1=pig, part2=hcat_pig_insert',
 'id: int,intnum: int,floatnum: float');
 {code}
 Generates the following error when running on a Federated Cluster:
 {quote}
 2012-10-29 20:40:25,011 [main] ERROR
 org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate
 exception from backed error: AttemptID:attempt_1348522594824_0846_m_00_3
 Info:Error: org.apache.hadoop.fs.viewfs.NotInMountpointException:
 getDefaultReplication on empty path is invalid
 at
 org.apache.hadoop.fs.viewfs.ViewFileSystem.getDefaultReplication(ViewFileSystem.java:479)
 at org.apache.hadoop.hive.ql.io.RCFile$Writer.<init>(RCFile.java:723)
 at org.apache.hadoop.hive.ql.io.RCFile$Writer.<init>(RCFile.java:705)
 at
 org.apache.hadoop.hive.ql.io.RCFileOutputFormat.getRecordWriter(RCFileOutputFormat.java:86)
 at
 org.apache.hcatalog.mapreduce.FileOutputFormatContainer.getRecordWriter(FileOutputFormatContainer.java:100)
 at
 org.apache.hcatalog.mapreduce.HCatOutputFormat.getRecordWriter(HCatOutputFormat.java:228)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.getRecordWriter(PigOutputFormat.java:84)
 at
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:587)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:706)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)
 {quote}



[jira] [Created] (HIVE-3712) Use varbinary instead of longvarbinary to store min and max column values in column stats schema

2012-11-15 Thread Shreepadma Venugopalan (JIRA)
Shreepadma Venugopalan created HIVE-3712:


 Summary: Use varbinary instead of longvarbinary to store min and 
max column values in column stats schema
 Key: HIVE-3712
 URL: https://issues.apache.org/jira/browse/HIVE-3712
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Statistics
Affects Versions: 0.9.0
Reporter: Shreepadma Venugopalan
Assignee: Shreepadma Venugopalan


JDBC type longvarbinary maps to the BLOB SQL type in some databases. Storing 
min and max column values for numeric types takes up 8 bytes and hence doesn't 
require a BLOB. Storing these values in a BLOB will impact performance without 
providing much benefit.
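
The 8-byte sizing claim above is easy to verify: a numeric min or max stored as a 64-bit long or double serializes to exactly 8 bytes, so a plain VARBINARY column suffices and no BLOB-backed longvarbinary is needed. A quick check in Python:

```python
import struct

# Big-endian 64-bit signed long and 64-bit IEEE-754 double: both pack
# to exactly 8 bytes, regardless of the value stored.
long_min = struct.pack(">q", -(2**63))                  # smallest 64-bit long
double_big = struct.pack(">d", 1.7976931348623157e308)  # largest finite double

print(len(long_min), len(double_big))
```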



HIVE optimizer enhancements in 0.9.0+ releases

2012-11-15 Thread Sukhendu Chakraborty
Hi,

I am a HIVE user who is working on analytical applications on large
data sets. For us, HIVE performance is critical to the success of
our product. I was wondering if there are any recent improvements that
were made in the optimizer layer. One of the relevant references I
found on the web is the HIVE paper
(http://infolab.stanford.edu/~ragho/hive-icde2010.pdf). If you can
send me any pointers on current enhancements, that would be great.

Some specific improvements I am looking for are:
1. Cost based optimization (logical or physical)
2. multi-query optimization techniques and performing generic n-way
joins in a single map-reduce job (quoted from the future work section
of the paper above)
3. Use and generation of table statistics for the generation of
better plans/faster execution, etc. I know there was some code added to
generate column statistics for HIVE tables. Any table-level statistics
generation?

Thanks for your help,
-Sukhendu


[jira] [Updated] (HIVE-3234) getting the reporter in the recordwriter

2012-11-15 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-3234:
--

Attachment: HIVE-3234.D6699.2.patch

omalley updated the revision HIVE-3234 [jira] getting the reporter in the 
recordwriter.
Reviewers: JIRA, ashutoshc

  I've updated the patch based on Ashutosh's feedback. In particular, I've
  pushed the Reporter through the RowContainer.


REVISION DETAIL
  https://reviews.facebook.net/D6699

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/exec/AbstractMapJoinOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/RowContainer.java
  ql/src/java/org/apache/hadoop/hive/ql/io/HiveFileFormatUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/BlockMergeTask.java
  ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/RCFileMergeMapper.java
  ql/src/test/org/apache/hadoop/hive/ql/io/udf/Rot13InputFormat.java
  ql/src/test/org/apache/hadoop/hive/ql/io/udf/Rot13OutputFormat.java
  ql/src/test/queries/clientpositive/custom_input_output_format.q
  ql/src/test/results/clientpositive/custom_input_output_format.q.out

To: JIRA, ashutoshc, omalley


 getting the reporter in the recordwriter
 

 Key: HIVE-3234
 URL: https://issues.apache.org/jira/browse/HIVE-3234
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.9.1
 Environment: any
Reporter: Jimmy Hu
Assignee: Owen O'Malley
  Labels: newbie
 Fix For: 0.9.1

 Attachments: HIVE-3234.D6699.1.patch, HIVE-3234.D6699.2.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 We would like to generate some custom statistics and report them back to 
 map/reduce when we implement the 
  FileSinkOperator.RecordWriter interface. However, the current interface 
 design doesn't allow us to get the map/reduce reporter object. Please extend 
 the current FileSinkOperator.RecordWriter interface so that its close() 
 method passes in a map/reduce reporter object. 
 For the same reason, please also extend the RecordReader interface to 
 include a reporter object, so that users can pass in custom map/reduce 
 counters.



[jira] [Updated] (HIVE-3709) Stop storing default ConfVars in temp file

2012-11-15 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3709:


Attachment: HIVE-3709.2.patch.txt

 Stop storing default ConfVars in temp file
 --

 Key: HIVE-3709
 URL: https://issues.apache.org/jira/browse/HIVE-3709
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3709.1.patch.txt, HIVE-3709.2.patch.txt


 To work around issues with Hadoop's Configuration object, specifically its 
 addResource(InputStream) method, default configurations are written to a temp 
 file (I think HIVE-2362 introduced this).
 This, however, introduces the problem that once that file is deleted from 
 /tmp the client crashes.  This is particularly problematic for long-running 
 services like the metastore server.
 Writing a custom InputStream to deal with the problems in the Configuration 
 object should provide a workaround which does not introduce a time bomb 
 into Hive.



[jira] [Commented] (HIVE-3709) Stop storing default ConfVars in temp file

2012-11-15 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498291#comment-13498291
 ] 

Kevin Wilfong commented on HIVE-3709:
-

Thanks Carl, I switched to caching the byte[] and returning a new InputStream 
wrapping that byte[]. Now those two tests pass.
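
The fix Kevin describes — cache the serialized defaults once and hand each caller a fresh stream over the same bytes — can be sketched as follows. This is a hedged illustration in Python, not Hive's actual HiveConf Java code; the class and method names are invented for the example.

```python
import io

class DefaultConfSketch:
    def __init__(self, xml_bytes):
        # Serialize the default configuration once and keep the bytes
        # in memory: nothing on disk can be deleted out from under us.
        self._cached = xml_bytes

    def open_stream(self):
        # Each caller gets a fresh stream with its own read position,
        # wrapping the same cached byte buffer.
        return io.BytesIO(self._cached)

defaults = DefaultConfSketch(b"<configuration></configuration>")
s1 = defaults.open_stream()
s2 = defaults.open_stream()
```

Because every `open_stream()` call returns an independent `BytesIO`, one consumer exhausting its stream does not affect the next, which is the property Configuration.addResource(InputStream) consumers need.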

 Stop storing default ConfVars in temp file
 --

 Key: HIVE-3709
 URL: https://issues.apache.org/jira/browse/HIVE-3709
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3709.1.patch.txt, HIVE-3709.2.patch.txt


 To work around issues with Hadoop's Configuration object, specifically its 
 addResource(InputStream) method, default configurations are written to a temp 
 file (I think HIVE-2362 introduced this).
 This, however, introduces the problem that once that file is deleted from 
 /tmp the client crashes.  This is particularly problematic for long-running 
 services like the metastore server.
 Writing a custom InputStream to deal with the problems in the Configuration 
 object should provide a workaround which does not introduce a time bomb 
 into Hive.



[jira] [Updated] (HIVE-3709) Stop storing default ConfVars in temp file

2012-11-15 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3709:


Status: Patch Available  (was: Open)

 Stop storing default ConfVars in temp file
 --

 Key: HIVE-3709
 URL: https://issues.apache.org/jira/browse/HIVE-3709
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3709.1.patch.txt, HIVE-3709.2.patch.txt





HIVE optimizer enhancements in 0.9.0+ releases

2012-11-15 Thread Sukhendu Chakraborty
Hi,

I am a HIVE user who is working on analytical applications on large
data sets. For us, the HIVE performance is critical for the success of
our product. I was wondering if there are any recent improvements that
were made in the optimizer layer.  One of the relevant references I
found on the web is the HIVE paper
(http://infolab.stanford.edu/~ragho/hive-icde2010.pdf) . If you can
send me any pointers on current enhancements, that would be great.

Some specific improvements I am looking for are:
1. Cost-based optimization (logical or physical)
2. Multi-query optimization techniques and performing generic n-way
joins in a single map-reduce job (quoted from the future-work section
of the paper above)
3. Use and generation of table statistics for better plans, faster
execution, etc. I know there was some code added to generate column
statistics for HIVE tables. Is there any table-level statistics
generation?

Thanks for your help,
-Sukhendu


HIVE optimizer enhancements in 0.9.0+ releases

2012-11-15 Thread Sukhendu Chakraborty
Hi,

I am a HIVE user who is working on analytical applications on large
data sets. For us, the HIVE performance is critical for the success of
our product. I was wondering if there are any recent improvements that
were made in the optimizer layer.  One of the relevant references I
found on the web is the HIVE paper
(http://infolab.stanford.edu/~ragho/hive-icde2010.pdf) . If you can
send me any pointers on current enhancements, that would be great.

Some specific improvements I am looking for are:
1. Cost-based optimization (logical or physical)
2. Multi-query optimization techniques and performing generic n-way
joins in a single map-reduce job (quoted from the future-work section
of the paper above)
3. Use and generation of table statistics for better plans, faster
execution, etc. I know there was some code added to generate column
statistics for HIVE tables. Is there any other statistics
generation?

Thanks for your help,
-Sukhendu


[jira] [Commented] (HIVE-3709) Stop storing default ConfVars in temp file

2012-11-15 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498325#comment-13498325
 ] 

Carl Steinbach commented on HIVE-3709:
--

+1. Running tests.

 Stop storing default ConfVars in temp file
 --

 Key: HIVE-3709
 URL: https://issues.apache.org/jira/browse/HIVE-3709
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3709.1.patch.txt, HIVE-3709.2.patch.txt





[jira] [Updated] (HIVE-3403) user should not specify mapjoin to perform sort-merge bucketed join

2012-11-15 Thread Kevin Wilfong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Wilfong updated HIVE-3403:


Status: Open  (was: Patch Available)

 user should not specify mapjoin to perform sort-merge bucketed join
 ---

 Key: HIVE-3403
 URL: https://issues.apache.org/jira/browse/HIVE-3403
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3403.1.patch, hive.3403.2.patch, hive.3403.3.patch, 
 hive.3403.4.patch, hive.3403.5.patch, hive.3403.6.patch, hive.3403.7.patch, 
 hive.3403.8.patch


 Currently, in order to perform a sort-merge bucketed join, the user needs
 to set hive.optimize.bucketmapjoin.sortedmerge to true and also specify the 
 mapjoin hint.
 The user should not have to specify any hints.



[jira] [Commented] (HIVE-3403) user should not specify mapjoin to perform sort-merge bucketed join

2012-11-15 Thread Kevin Wilfong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498326#comment-13498326
 ] 

Kevin Wilfong commented on HIVE-3403:
-

Comments on Phabricator.

 user should not specify mapjoin to perform sort-merge bucketed join
 ---

 Key: HIVE-3403
 URL: https://issues.apache.org/jira/browse/HIVE-3403
 Project: Hive
  Issue Type: Bug
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: hive.3403.1.patch, hive.3403.2.patch, hive.3403.3.patch, 
 hive.3403.4.patch, hive.3403.5.patch, hive.3403.6.patch, hive.3403.7.patch, 
 hive.3403.8.patch





[jira] [Commented] (HIVE-3648) HiveMetaStoreFsImpl is not compatible with hadoop viewfs

2012-11-15 Thread Arup Malakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498328#comment-13498328
 ] 

Arup Malakar commented on HIVE-3648:


Review for trunk: https://reviews.facebook.net/D6759

[arc diff origin/trunk --jira HIVE-3648 throws an error.]

 HiveMetaStoreFsImpl is not compatible with hadoop viewfs
 

 Key: HIVE-3648
 URL: https://issues.apache.org/jira/browse/HIVE-3648
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.9.0, 0.10.0
Reporter: Kihwal Lee
 Attachments: HIVE-3648-trunk-0.patch


 HiveMetaStoreFsImpl#deleteDir() method calls Trash#moveToTrash(). This may 
 not work when viewfs is used. It needs to call Trash#moveToAppropriateTrash() 
 instead.  Please note that this method is not available in hadoop versions 
 earlier than 0.23.
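A common shim pattern for this kind of version gap — invoke the newer method when the running library provides it, otherwise fall back to the older one — can be sketched with reflection. To keep the sketch self-contained it reflects on java.lang.String rather than Hadoop's Trash class: strip() plays the role of the newer moveToAppropriateTrash() (present only on newer runtimes) and trim() the role of the always-available fallback. The class and method choices are illustrative stand-ins only.

```java
import java.lang.reflect.Method;

public class VersionedCall {
    // prefer the newer API when the runtime provides it, else fall back
    static String invokePreferred(String target) throws Exception {
        Method m;
        try {
            m = String.class.getMethod("strip"); // "newer" API (Java 11+)
        } catch (NoSuchMethodException e) {
            m = String.class.getMethod("trim");  // older fallback
        }
        return (String) m.invoke(target);
    }

    public static void main(String[] args) throws Exception {
        // both strip() and trim() remove the surrounding spaces here
        System.out.println(invokePreferred("  hi  ")); // prints hi
    }
}
```

The same dispatch, applied to Trash, lets one Hive binary call moveToAppropriateTrash() on Hadoop 0.23+ while degrading to moveToTrash() on earlier versions.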



Re: Review Request: DB based token store

2012-11-15 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/7941/
---

(Updated Nov. 15, 2012, 9:52 p.m.)


Review request for hive and Carl Steinbach.


Changes
---

Updated patch based on Mark and Prasad's review comments.


Description
---

DB based token store


This addresses bug HIVE-3255.
https://issues.apache.org/jira/browse/HIVE-3255


Diffs (updated)
-

  trunk/metastore/scripts/upgrade/derby/012-HIVE-3255.derby.sql PRE-CREATION 
  trunk/metastore/scripts/upgrade/derby/hive-schema-0.10.0.derby.sql 1409909 
  trunk/metastore/scripts/upgrade/derby/upgrade-0.9.0-to-0.10.0.derby.sql 
1409909 
  trunk/metastore/scripts/upgrade/mysql/012-HIVE-3255.mysql.sql PRE-CREATION 
  trunk/metastore/scripts/upgrade/mysql/hive-schema-0.10.0.mysql.sql 1409909 
  trunk/metastore/scripts/upgrade/mysql/upgrade-0.9.0-to-0.10.0.mysql.sql 
1409909 
  trunk/metastore/scripts/upgrade/oracle/012-HIVE-3255.oracle.sql PRE-CREATION 
  trunk/metastore/scripts/upgrade/oracle/hive-schema-0.10.0.oracle.sql 1409909 
  trunk/metastore/scripts/upgrade/oracle/upgrade-0.9.0-to-0.10.0.oracle.sql 
PRE-CREATION 
  trunk/metastore/scripts/upgrade/postgres/012-HIVE-3255.postgres.sql 
PRE-CREATION 
  trunk/metastore/scripts/upgrade/postgres/hive-schema-0.10.0.postgres.sql 
1409909 
  trunk/metastore/scripts/upgrade/postgres/upgrade-0.9.0-to-0.10.0.postgres.sql 
1409909 
  trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
1409909 
  trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 
1409909 
  trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RawStore.java 
1409909 
  
trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MDelegationToken.java
 PRE-CREATION 
  
trunk/metastore/src/model/org/apache/hadoop/hive/metastore/model/MMasterKey.java
 PRE-CREATION 
  trunk/metastore/src/model/package.jdo 1409909 
  
trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyRawStoreForJdoConnection.java
 1409909 
  
trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/DBTokenStore.java
 PRE-CREATION 
  
trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/DelegationTokenStore.java
 1409909 
  
trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge20S.java
 1409909 
  
trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/MemoryTokenStore.java
 1409909 
  
trunk/shims/src/common-secure/java/org/apache/hadoop/hive/thrift/ZooKeeperTokenStore.java
 1409909 
  
trunk/shims/src/common-secure/test/org/apache/hadoop/hive/thrift/TestDBTokenStore.java
 PRE-CREATION 
  
trunk/shims/src/common/java/org/apache/hadoop/hive/thrift/HadoopThriftAuthBridge.java
 1409909 

Diff: https://reviews.apache.org/r/7941/diff/


Testing
---

Includes unit test


Thanks,

Ashutosh Chauhan



[jira] [Updated] (HIVE-3255) Add DBTokenStore to store Delegation Tokens in DB

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3255:
---

Attachment: hive-3255_3.patch

Updated patch incorporating Mark and Prasad's comments.

 Add DBTokenStore to store Delegation Tokens in DB
 -

 Key: HIVE-3255
 URL: https://issues.apache.org/jira/browse/HIVE-3255
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Security
Affects Versions: 0.9.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: hive-3255_2.patch, hive-3255_3.patch, hive-3255.patch


 Before HIVE-1696 metastore was completely stateless and all the data is in 
 backend db. HIVE-1696 added delegation tokens for metastore which metastore 
 needs to keep in memory. HIVE-2467 added support for storing delegation 
 tokens in ZooKeeper via an interface {{DelegationTokenStore}}. This jira is 
 about using DB backend for storing tokens which metastore uses for storing 
 data. 



[jira] [Commented] (HIVE-3234) getting the reporter in the recordwriter

2012-11-15 Thread Phabricator (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498390#comment-13498390
 ] 

Phabricator commented on HIVE-3234:
---

ashutoshc has accepted the revision HIVE-3234 [jira] getting the reporter in 
the recordwriter.

  Thanks Owen for incorporating changes. Looks good.

REVISION DETAIL
  https://reviews.facebook.net/D6699

BRANCH
  h-3234

To: JIRA, ashutoshc, omalley


 getting the reporter in the recordwriter
 

 Key: HIVE-3234
 URL: https://issues.apache.org/jira/browse/HIVE-3234
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.9.1
 Environment: any
Reporter: Jimmy Hu
Assignee: Owen O'Malley
  Labels: newbie
 Fix For: 0.9.1

 Attachments: HIVE-3234.D6699.1.patch, HIVE-3234.D6699.2.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 We would like to generate some custom statistics and report them back to 
 map/reduce when we implement the 
 FileSinkOperator.RecordWriter interface. However, the current interface 
 design doesn't allow us to get the MapReduce reporter object. Please extend 
 the current FileSinkOperator.RecordWriter interface so that its close() 
 method passes in a MapReduce reporter object. 
 For the same reason, please also extend the RecordReader interface to 
 include a reporter object so that users can pass in custom MapReduce 
 counters.
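The requested interface change can be sketched as follows. Reporter and RecordWriter here are simplified stand-ins invented for illustration, not Hadoop's or Hive's actual types; the point is only the shape of the change — close() receives the reporter so the writer can publish statistics gathered while writing.

```java
import java.util.HashMap;
import java.util.Map;

public class ReporterSketch {
    // stand-in for MapReduce's Reporter: just enough to bump a counter
    interface Reporter {
        void incrCounter(String group, String counter, long amount);
    }

    // proposed shape: reporter is available at close time
    interface RecordWriter {
        void write(String record);
        void close(boolean abort, Reporter reporter);
    }

    static long runDemo() {
        Map<String, Long> counters = new HashMap<>();
        Reporter reporter = (g, c, n) -> counters.merge(g + "." + c, n, Long::sum);

        RecordWriter writer = new RecordWriter() {
            private long records;
            public void write(String record) { records++; }
            public void close(boolean abort, Reporter r) {
                // publish a custom statistic collected while writing
                r.incrCounter("custom", "recordsWritten", records);
            }
        };

        writer.write("a");
        writer.write("b");
        writer.close(false, reporter);
        return counters.get("custom.recordsWritten");
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // prints 2
    }
}
```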



[jira] [Commented] (HIVE-3234) getting the reporter in the recordwriter

2012-11-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498394#comment-13498394
 ] 

Ashutosh Chauhan commented on HIVE-3234:


+1 will commit if tests pass.

 getting the reporter in the recordwriter
 

 Key: HIVE-3234
 URL: https://issues.apache.org/jira/browse/HIVE-3234
 Project: Hive
  Issue Type: Improvement
  Components: Serializers/Deserializers
Affects Versions: 0.9.1
 Environment: any
Reporter: Jimmy Hu
Assignee: Owen O'Malley
  Labels: newbie
 Fix For: 0.9.1

 Attachments: HIVE-3234.D6699.1.patch, HIVE-3234.D6699.2.patch

   Original Estimate: 48h
  Remaining Estimate: 48h




Re: hive 0.10 release

2012-11-15 Thread Ashutosh Chauhan
Good progress. Looks like folks are on board. I propose to cut the branch
in the next couple of days. There are a few jiras that are patch-ready which I
want to get into the hive-0.10 release, including HIVE-3255, HIVE-2517,
HIVE-3400, and HIVE-3678.
Ed has already made a request for HIVE-3083.  If folks have other patches
they want to see in 0.10, please chime in.
Also, a request to the other committers: please help review patches. There are
quite a few in Patch Available state.

Thanks,
Ashutosh

On Thu, Nov 8, 2012 at 3:22 PM, Owen O'Malley omal...@apache.org wrote:

 +1


 On Thu, Nov 8, 2012 at 3:18 PM, Carl Steinbach c...@cloudera.com wrote:

  +1
 
  On Wed, Nov 7, 2012 at 11:23 PM, Alexander Lorenz wget.n...@gmail.com
  wrote:
 
   +1, good karma
  
   On Nov 8, 2012, at 4:58 AM, Namit Jain nj...@fb.com wrote:
  
+1 to the idea
   
On 11/8/12 6:33 AM, Edward Capriolo edlinuxg...@gmail.com wrote:
   
That sounds good. I think this issue needs to be solved as well as
anything else that produces a bogus query result.
   
https://issues.apache.org/jira/browse/HIVE-3083
   
Edward
   
On Wed, Nov 7, 2012 at 7:50 PM, Ashutosh Chauhan 
  hashut...@apache.org
wrote:
Hi,

It's been a while since we released 0.9, more than six months ago. All
this while, a lot of action has happened, with various cool features
landing in trunk. Additionally, I am looking forward to HiveServer2
landing in trunk. So, I propose that we cut the branch for 0.10 soon
afterwards and then release it. Thoughts?
   
Thanks,
Ashutosh
   
  
   --
   Alexander Alten-Lorenz
   http://mapredit.blogspot.com
   German Hadoop LinkedIn Group: http://goo.gl/N8pCF
  
  
 



[jira] [Created] (HIVE-3713) Metastore: Sporadic unit test failures

2012-11-15 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-3713:


 Summary: Metastore: Sporadic unit test failures
 Key: HIVE-3713
 URL: https://issues.apache.org/jira/browse/HIVE-3713
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner


For instance: 
https://builds.apache.org/job/Hive-trunk-h0.21/1792/testReport/org.apache.hadoop.hive.metastore/

Found the following issues:

testListener: Assumes that a certain tmp database hasn't been created yet, but 
doesn't enforce it.

testSynchronized: Assumes that there's only one database, but doesn't enforce 
that.

testDatabaseLocation: Fails if the user running the tests is root, and doesn't 
clean up after itself.
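The common thread in these failures is tests assuming state rather than enforcing it. A minimal sketch of the "enforce, don't assume" fix — with an invented class and a plain Set standing in for the metastore test fixtures:

```java
import java.util.HashSet;
import java.util.Set;

public class PreconditionDemo {
    // stand-in for the metastore's database list
    static Set<String> databases = new HashSet<>();

    // enforce the precondition instead of assuming it: remove leftover
    // state from earlier runs before the test body executes
    static void setUp() {
        databases.remove("tmpdb");
    }

    public static void main(String[] args) {
        databases.add("tmpdb"); // leftover from a previous test run
        setUp();
        if (databases.contains("tmpdb")) {
            throw new IllegalStateException("precondition not enforced");
        }
        System.out.println("precondition enforced"); // prints on success
    }
}
```

Applied to the cases above, that means dropping the tmp database (and any extra databases) in setup, and cleaning up created directories in teardown, rather than trusting the environment.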



[jira] [Updated] (HIVE-3713) Metastore: Sporadic unit test failures

2012-11-15 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-3713:
-

Attachment: HIVE-3713.1-r1409996.txt

 Metastore: Sporadic unit test failures
 --

 Key: HIVE-3713
 URL: https://issues.apache.org/jira/browse/HIVE-3713
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
 Attachments: HIVE-3713.1-r1409996.txt





[jira] [Updated] (HIVE-2693) Add DECIMAL data type

2012-11-15 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-2693:
--

Attachment: HIVE-2693-10.patch

Here's the updated patch with test updates, additional patches for failed 
tests, and JDBC support.
There were a few tests still failing, like 
clientnegative/invalid_cast_from_binary_[1-6], which required out-file updates.
Patch 78 removed the decimal round function, which breaks things like 
round(int/int) and also round(decimal), so it's added back. Also added JDBC 
support for handling decimal-type data and metadata, which was missing in 
Josh's original patch. I will log a separate ticket for the HiveServer2 driver 
once both patches are committed.

I think the outstanding issue is the implicit type conversion for UDFs. First, 
this changes expressions like (int/int) from double to decimal. This could be 
a problem for existing clients like ODBC, Perl, and Python, which expect this 
to be a double. Besides, this leads to inconsistent behavior on division by 
zero: for example, 1.1/0.0 stays NaN, but 1/0 throws an exception since it 
gets promoted to decimal division, which behaves differently from double. 
BigDecimal throws an exception on division by zero. I added a couple of 
patches and modified the udf_round_2 test so that it returns NULL (which is 
also MySQL's default behavior). Perhaps we should also change the other cases 
from NaN to NULL and support a configuration option to fall back to the old 
behavior (which can be done in a separate patch).

@Vikram, we can collaborate on this. You can add your new changes on top of it 
and update on reviewboard.
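The double-versus-BigDecimal contrast behind this discussion can be demonstrated directly. Note that in plain Java, 1.1/0.0 evaluates to Infinity and 0.0/0.0 to NaN; double division never throws. safeDivide below is a hypothetical wrapper illustrating the NULL-on-division-by-zero behavior proposed above (null standing in for SQL NULL), not Hive's actual UDF code.

```java
import java.math.BigDecimal;

public class DecimalDivisionDemo {
    // hypothetical wrapper mirroring the MySQL-style behavior described
    // above: return null (SQL NULL) instead of throwing on divide-by-zero
    static BigDecimal safeDivide(BigDecimal a, BigDecimal b) {
        try {
            return a.divide(b);
        } catch (ArithmeticException e) { // division by zero
            return null;
        }
    }

    public static void main(String[] args) {
        // double division by zero never throws: it yields Infinity or NaN
        System.out.println(1.1 / 0.0); // prints Infinity
        System.out.println(0.0 / 0.0); // prints NaN

        // BigDecimal, by contrast, throws on division by zero...
        try {
            new BigDecimal("1").divide(BigDecimal.ZERO);
        } catch (ArithmeticException e) {
            System.out.println("BigDecimal threw: " + e.getMessage());
        }

        // ...which the wrapper converts to NULL instead
        System.out.println(safeDivide(BigDecimal.ONE, BigDecimal.ZERO)); // prints null
    }
}
```

This is the behavioral gap the comment describes: once int/int is promoted to decimal, the exception path replaces the silent NaN/Infinity path, unless a NULL-returning policy like the one sketched here is applied.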


 Add DECIMAL data type
 -

 Key: HIVE-2693
 URL: https://issues.apache.org/jira/browse/HIVE-2693
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor, Types
Affects Versions: 0.10.0
Reporter: Carl Steinbach
Assignee: Prasad Mujumdar
 Attachments: 2693_7.patch, 2693_8.patch, 2693_fix_all_tests1.patch, 
 HIVE-2693-10.patch, HIVE-2693-1.patch.txt, HIVE-2693-all.patch, 
 HIVE-2693-fix.patch, HIVE-2693.patch, HIVE-2693-take3.patch, 
 HIVE-2693-take4.patch


 Add support for the DECIMAL data type. HIVE-2272 (TIMESTAMP) provides a nice 
 template for how to do this.



[jira] [Updated] (HIVE-3713) Metastore: Sporadic unit test failures

2012-11-15 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-3713:
-

Status: Patch Available  (was: Open)

 Metastore: Sporadic unit test failures
 --

 Key: HIVE-3713
 URL: https://issues.apache.org/jira/browse/HIVE-3713
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
 Attachments: HIVE-3713.1-r1409996.txt





[jira] [Assigned] (HIVE-2599) Support Composit/Compound Keys with HBaseStorageHandler

2012-11-15 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni reassigned HIVE-2599:
--

Assignee: Swarnim Kulkarni

 Support Composit/Compound Keys with HBaseStorageHandler
 ---

 Key: HIVE-2599
 URL: https://issues.apache.org/jira/browse/HIVE-2599
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.8.0
Reporter: Hans Uhlig
Assignee: Swarnim Kulkarni

 It would be really nice for Hive to be able to understand composite keys from 
 an underlying HBase schema. Currently we have to store key fields twice to 
 make the data available both as part of the key and as a column. I noticed 
 John Sichi mentioned in HIVE-1228 that this would be a separate issue, but I 
 can't find any follow-up. How feasible is this in the HBaseStorageHandler?



[jira] [Commented] (HIVE-3705) Adding authorization capability to the metastore

2012-11-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498423#comment-13498423
 ] 

Ashutosh Chauhan commented on HIVE-3705:


Sushanth,
The patch doesn't apply cleanly on trunk. Can you refresh it for trunk?

 Adding authorization capability to the metastore
 

 Key: HIVE-3705
 URL: https://issues.apache.org/jira/browse/HIVE-3705
 Project: Hive
  Issue Type: New Feature
  Components: Authorization, Metastore
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-3705.D6681.1.patch, HIVE-3705.D6681.2.patch, 
 hivesec_investigation.pdf


 In an environment where multiple clients access a single metastore, and we 
 want to evolve Hive security to a point where it's no longer simply 
 preventing users from shooting themselves in the foot, we need to be able to 
 authorize metastore calls as well, instead of simply performing every 
 metastore API call that's made.



[jira] [Updated] (HIVE-3553) Support binary qualifiers for Hive/HBase integration

2012-11-15 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni updated HIVE-3553:
---

Attachment: HIVE-3553.1.patch.txt

 Support binary qualifiers for Hive/HBase integration
 

 Key: HIVE-3553
 URL: https://issues.apache.org/jira/browse/HIVE-3553
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.9.0
Reporter: Swarnim Kulkarni
 Fix For: 0.10.0

 Attachments: HIVE-3553.1.patch.txt


 Along with regular qualifiers, we should support binary HBase qualifiers as 
 well.



[jira] [Updated] (HIVE-3553) Support binary qualifiers for Hive/HBase integration

2012-11-15 Thread Swarnim Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swarnim Kulkarni updated HIVE-3553:
---

Assignee: Swarnim Kulkarni
  Status: Patch Available  (was: Open)

 Support binary qualifiers for Hive/HBase integration
 

 Key: HIVE-3553
 URL: https://issues.apache.org/jira/browse/HIVE-3553
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.9.0
Reporter: Swarnim Kulkarni
Assignee: Swarnim Kulkarni
 Fix For: 0.10.0

 Attachments: HIVE-3553.1.patch.txt





[jira] [Commented] (HIVE-3553) Support binary qualifiers for Hive/HBase integration

2012-11-15 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498429#comment-13498429
 ] 

Swarnim Kulkarni commented on HIVE-3553:


This patch should be up for review now.

 Support binary qualifiers for Hive/HBase integration
 

 Key: HIVE-3553
 URL: https://issues.apache.org/jira/browse/HIVE-3553
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Affects Versions: 0.9.0
Reporter: Swarnim Kulkarni
Assignee: Swarnim Kulkarni
 Fix For: 0.10.0

 Attachments: HIVE-3553.1.patch.txt





[jira] [Updated] (HIVE-3435) Get pdk pluginTest passed when triggered from both builtin tests and pdk tests on hadoop23

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3435:
---

   Resolution: Fixed
Fix Version/s: 0.10.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Zhenxiao!

 Get pdk pluginTest passed when triggered from both builtin tests and pdk 
 tests on hadoop23 
 ---

 Key: HIVE-3435
 URL: https://issues.apache.org/jira/browse/HIVE-3435
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0

 Attachments: HIVE-3435.1.patch.txt, HIVE-3435.2.patch.txt, 
 HIVE-3435.3.patch.txt


 Hive pdk pluginTest runs twice in unit testing: once triggered from running 
 builtin tests, and once triggered from running pdk tests.
 HIVE-3413 fixed pdk pluginTest on hadoop23 when triggered from running 
 builtin tests. However, when triggered from running pdk tests directly on 
 hadoop23, it fails:
 Testcase: SELECT tp_rot13('Mixed Up!') FROM onerow; took 6.426 sec
 FAILED
 expected:[]Zvkrq Hc! but was:[2012-09-04 18:13:01,668 WARN [main] 
 conf.HiveConf (HiveConf.java:clinit(73)) - hive-site.xml not found on 
 CLASSPATH
 ]Zvkrq Hc!
 junit.framework.ComparisonFailure: expected:[]Zvkrq Hc! but 
 was:[2012-09-04 18:13:01,668 WARN [main] conf.HiveConf 
 (HiveConf.java:clinit(73)) - hive-site.xml not found on CLASSPATH
 ]Zvkrq Hc!



[jira] [Commented] (HIVE-3705) Adding authorization capability to the metastore

2012-11-15 Thread Shreepadma Venugopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13498520#comment-13498520
 ] 

Shreepadma Venugopalan commented on HIVE-3705:
--

@Sushanth: Thanks for posting the document and the patch. Securing the 
metastore is necessary to provide reliable authorization in Hive. I looked at 
the document and the code and have the following high level questions,

 a) The document contains an example of how the current pluggable 
authorization provider can be exploited to circumvent security. This patch 
seems to introduce a new config param, hive.security.metastore.authorization.manager, 
that allows a pluggable authorization provider. Perhaps I'm missing something 
here, but I'm wondering how we would prevent a user from plugging in their own 
authorization provider.

 b) The current Hive authorization model exposes semantics that are confusing 
and at times inconsistent. While this patch has moved the auth checks to the 
metastore (IMO, this is the right thing to do), it seems to implement the 
existing semantics. I'm wondering if there is a plan to fix the semantics at 
some point.

 c) How do we obtain the userid for performing authorization? Are we using the 
authentication id from the Thrift context? If so, how do we handle the case 
where the authentication id differs from the authorization id, e.g., when 
HS2 authenticates to the metastore as HS2 but is executing a statement on 
behalf of user 'u1'? Thanks.

 Adding authorization capability to the metastore
 

 Key: HIVE-3705
 URL: https://issues.apache.org/jira/browse/HIVE-3705
 Project: Hive
  Issue Type: New Feature
  Components: Authorization, Metastore
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-3705.D6681.1.patch, HIVE-3705.D6681.2.patch, 
 hivesec_investigation.pdf


 In an environment where multiple clients access a single metastore, and we 
 want to evolve hive security to a point where it's no longer simply 
 preventing users from shooting themselves in the foot, we need to be able to 
 authorize metastore calls as well, instead of simply performing every 
 metastore api call that's made.



[jira] [Updated] (HIVE-3520) ivysettings.xml does not let you override .m2/repository

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3520:
---

   Resolution: Fixed
Fix Version/s: 0.10.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Raja!

 ivysettings.xml does not let you override .m2/repository
 

 Key: HIVE-3520
 URL: https://issues.apache.org/jira/browse/HIVE-3520
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.9.0
Reporter: Giridharan Kesavan
Assignee: Raja Aluri
 Fix For: 0.10.0

 Attachments: HIVE-3520.patch


 ivysettings.xml does not let you override .m2/repository. In other words 
 repo.dir ivysetting should be an overridable property



[jira] [Commented] (HIVE-3713) Metastore: Sporadic unit test failures

2012-11-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498533#comment-13498533
 ] 

Ashutosh Chauhan commented on HIVE-3713:


I have also seen these failures in my test runs. Changes look good. +1 will 
commit if tests pass.

 Metastore: Sporadic unit test failures
 --

 Key: HIVE-3713
 URL: https://issues.apache.org/jira/browse/HIVE-3713
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
 Attachments: HIVE-3713.1-r1409996.txt


 For instance: 
 https://builds.apache.org/job/Hive-trunk-h0.21/1792/testReport/org.apache.hadoop.hive.metastore/
 Found the following issues:
 testListener: Assumes that a certain tmp database hasn't been created yet, 
 but doesn't enforce it
 testSynchronized: Assumes that there's only one database, but doesn't enforce 
 the fact
 testDatabaseLocation: Fails if the user running the tests is root and doesn't 
 clean up after itself.
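The common thread in these fixes is enforcing the precondition a test needs rather than assuming it. A minimal sketch of that pattern (the MockMetastore class and all names here are invented for illustration; this is not the actual Hive test code):

```python
import unittest
import uuid


class MockMetastore:
    """Stand-in for a metastore client: just tracks database names."""

    def __init__(self):
        self.databases = {"default"}

    def create_database(self, name):
        self.databases.add(name)

    def drop_database(self, name):
        self.databases.discard(name)


class PartitionListenerTest(unittest.TestCase):
    def setUp(self):
        self.ms = MockMetastore()
        # Enforce the precondition instead of assuming it: a unique name
        # means a leftover tmp database from a prior run cannot collide.
        self.tmp_db = "tmpdb_" + uuid.uuid4().hex
        self.ms.drop_database(self.tmp_db)  # no-op if it doesn't exist

    def tearDown(self):
        # Clean up so later tests start from a pristine state.
        self.ms.drop_database(self.tmp_db)

    def test_listener(self):
        self.assertNotIn(self.tmp_db, self.ms.databases)
        self.ms.create_database(self.tmp_db)
        self.assertIn(self.tmp_db, self.ms.databases)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(PartitionListenerTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same idea applies to the root-user and single-database assumptions: create and tear down exactly the state the test depends on.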



[jira] [Updated] (HIVE-3713) Metastore: Sporadic unit test failures

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3713:
---

Assignee: Gunther Hagleitner

 Metastore: Sporadic unit test failures
 --

 Key: HIVE-3713
 URL: https://issues.apache.org/jira/browse/HIVE-3713
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.10.0
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-3713.1-r1409996.txt


 For instance: 
 https://builds.apache.org/job/Hive-trunk-h0.21/1792/testReport/org.apache.hadoop.hive.metastore/
 Found the following issues:
 testListener: Assumes that a certain tmp database hasn't been created yet, 
 but doesn't enforce it
 testSynchronized: Assumes that there's only one database, but doesn't enforce 
 the fact
 testDatabaseLocation: Fails if the user running the tests is root and doesn't 
 clean up after itself.



Hive-trunk-h0.21 - Build # 1799 - Still Failing

2012-11-15 Thread Apache Jenkins Server
Changes for Build #1764
[kevinwilfong] HIVE-3610. Add a command Explain dependency ... (Sambavi 
Muthukrishnan via kevinwilfong)


Changes for Build #1765

Changes for Build #1766
[hashutosh] HIVE-3441 : testcases escape1,escape2 fail on windows (Thejas Nair 
via Ashutosh Chauhan)

[kevinwilfong] HIVE-3499. add tests to use bucketing metadata for partitions. 
(njain via kevinwilfong)


Changes for Build #1767
[kevinwilfong] HIVE-3276. optimize union sub-queries. (njain via kevinwilfong)


Changes for Build #1768

Changes for Build #1769

Changes for Build #1770
[namit] HIVE-3570 Add/fix facility to collect operator specific statistics in 
hive + add hash-in/hash-out
counter for GroupBy Optr (Satadru Pan via namit)

[namit] HIVE-3554 Hive List Bucketing - Query logic
(Gang Tim Liu via namit)

[cws] HIVE-3563. Drop database cascade fails when there are indexes on any 
tables (Prasad Mujumdar via cws)


Changes for Build #1771
[kevinwilfong] HIVE-3640. Reducer allocation is incorrect if enforce bucketing 
and mapred.reduce.tasks are both set. (Vighnesh Avadhani via kevinwilfong)


Changes for Build #1772

Changes for Build #1773

Changes for Build #1774

Changes for Build #1775
[namit] HIVE-3673 Sort merge join not used when join columns have different 
names
(Kevin Wilfong via namit)


Changes for Build #1776
[kevinwilfong] HIVE-3627. eclipse misses library: 
javolution-@javolution-version@.jar. (Gang Tim Liu via kevinwilfong)


Changes for Build #1777
[kevinwilfong] HIVE-3524. Storing certain Exception objects thrown in 
HiveMetaStore.java in MetaStoreEndFunctionContext. (Maheshwaran Srinivasan via 
kevinwilfong)

[cws] HIVE-1977. DESCRIBE TABLE syntax doesn't support specifying a database 
qualified table name (Zhenxiao Luo via cws)

[cws] HIVE-3674. Test case TestParse broken after recent checkin (Sambavi 
Muthukrishnan via cws)


Changes for Build #1778
[cws] HIVE-1362. Column level scalar valued statistics on Tables and Partitions 
(Shreepadma Venugopalan via cws)


Changes for Build #1779

Changes for Build #1780
[kevinwilfong] HIVE-3686. Fix compile errors introduced by the interaction of 
HIVE-1362 and HIVE-3524. (Shreepadma Venugopalan via kevinwilfong)


Changes for Build #1781
[namit] HIVE-3687 smb_mapjoin_13.q is nondeterministic
(Kevin Wilfong via namit)


Changes for Build #1782
[hashutosh] HIVE-2715: Upgrade Thrift dependency to 0.9.0 (Ashutosh Chauhan)


Changes for Build #1783
[kevinwilfong] HIVE-3654. block relative path access in hive. (njain via 
kevinwilfong)

[hashutosh] HIVE-3658 : Unable to generate the Hbase related unit tests using 
velocity templates on Windows (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3661 : Remove the Windows specific = related swizzle path 
changes from Proxy FileSystems (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3480 : Resource leak: Fix the file handle leaks in Symbolic 
& Symlink related input formats. (Kanna Karanam via Ashutosh Chauhan)


Changes for Build #1784
[kevinwilfong] HIVE-3675. NaN does not work correctly for round(n). (njain via 
kevinwilfong)

[cws] HIVE-3651. bucketmapjoin?.q tests fail with hadoop 0.23 (Prasad Mujumdar 
via cws)


Changes for Build #1785
[namit] HIVE-3613 Implement grouping_id function
(Ian Gorbachev via namit)

[namit] HIVE-3692 Update parallel test documentation
(Ivan Gorbachev via namit)

[namit] HIVE-3649 Hive List Bucketing - enhance DDL to specify list bucketing 
table
(Gang Tim Liu via namit)


Changes for Build #1786
[namit] HIVE-3696 Revert HIVE-3483 which causes performance regression
(Gang Tim Liu via namit)


Changes for Build #1787
[kevinwilfong] HIVE-3621. Make prompt in Hive CLI configurable. (Jingwei Lu via 
kevinwilfong)

[kevinwilfong] HIVE-3695. TestParse breaks due to HIVE-3675. (njain via 
kevinwilfong)


Changes for Build #1788
[kevinwilfong] HIVE-3557. Access to external URLs in hivetest.py. (Ivan 
Gorbachev via kevinwilfong)


Changes for Build #1789
[hashutosh] HIVE-3662 : TestHiveServer: testScratchDirShouldClearWhileStartup 
is failing on Windows (Kanna Karanam via Ashutosh Chauhan)

[hashutosh] HIVE-3659 : TestHiveHistory::testQueryloglocParentDirNotExist Test 
fails on Windows because of some resource leaks in ZK (Kanna Karanam via 
Ashutosh Chauhan)

[hashutosh] HIVE-3663 Unable to display the MR Job file path on Windows in case 
of MR job failures.  (Kanna Karanam via Ashutosh Chauhan)


Changes for Build #1790

Changes for Build #1791

Changes for Build #1792

Changes for Build #1793
[hashutosh] HIVE-3704 : name of some metastore scripts are not per convention 
(Ashutosh Chauhan)


Changes for Build #1794
[hashutosh] HIVE-3243 : ignore white space between entries of hive/hbase table 
mapping (Shengsheng Huang via Ashutosh Chauhan)

[hashutosh] HIVE-3215 : JobDebugger should use RunningJob.getTrackingURL 
(Bhushan Mandhani via Ashutosh Chauhan)


Changes for Build #1795
[cws] HIVE-3437. 0.23 compatibility: fix unit tests when building against 0.23 
(Chris Drome via cws)


[jira] [Commented] (HIVE-3291) fix fs resolvers

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498536#comment-13498536
 ] 

Hudson commented on HIVE-3291:
--

Integrated in Hive-trunk-h0.21 #1799 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1799/])
HIVE-3291 : fix fs resolvers (Ashish Singh via Ashutosh Chauhan) (Revision 
1409862)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409862
Files : 
* /hive/trunk/ivy/ivysettings.xml


 fix fs resolvers 
 -

 Key: HIVE-3291
 URL: https://issues.apache.org/jira/browse/HIVE-3291
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Giridharan Kesavan
Assignee: Ashish Singh
 Fix For: 0.10.0

 Attachments: HIVE-3291.patch, HIVE-3291.patch1, HIVE-3291.patch2


 The shims module fails to compile when building hive against hadoop 1.0 with 
 the fs resolvers, because the force=true flag forces it to use the available 
 version of hadoop.
 In a scenario where you build against hadoop-1.0 but shims still needs to 
 build against 20.2, using the fs resolver (i.e. -Dresolvers=true) makes the 
 fs resolvers pick hadoop 1.0 for shims, and shims compilation will fail.



[jira] [Commented] (HIVE-3680) Include Table information in Hive's AddPartitionEvent.

2012-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498537#comment-13498537
 ] 

Hudson commented on HIVE-3680:
--

Integrated in Hive-trunk-h0.21 #1799 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1799/])
HIVE-3680 : Include Table information in Hive's AddPartitionEvent. (Mithun 
Radhakrishnan via Ashutosh Chauhan) (Revision 1409861)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1409861
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/AddPartitionEvent.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/events/DropPartitionEvent.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreEventListener.java


 Include Table information in Hive's AddPartitionEvent.
 --

 Key: HIVE-3680
 URL: https://issues.apache.org/jira/browse/HIVE-3680
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.9.1
Reporter: Mithun Radhakrishnan
Assignee: Mithun Radhakrishnan
 Fix For: 0.10.0

 Attachments: HIVE-3680.branch9.patch, HIVE-3680.trunk.patch


 This has to do with a minor overhaul of the HCatalog notifications that we're 
 attempting in HCATALOG-546.
 It is proposed that HCatalog's notifications (on Add/Drop of Partitions) 
 provide details to identify the affected partitions. 
 Using the Partition object in AddPartitionEvent, one is able to retrieve the 
 values of the partition-keys and the name of the Table. However, the 
 partition-keys themselves aren't available (since the Table instance isn't 
 part of the AddPartitionEvent).
 Adding the table-reference to the AddPartitionEvent and DropPartitionEvent 
 classes will expose all the info we need. (The alternative is to query the 
 metastore for the table's schema and use the partition-keys from there. :/)
 I'll post a patch for this shortly.
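The proposed shape of the change can be sketched as follows (a toy illustration with simplified, invented class and field names; this is not the real metastore API). Carrying the Table on the event lets a listener recover the partition-key names, not just the partition values:

```python
class Table:
    def __init__(self, name, partition_keys):
        self.name = name
        self.partition_keys = partition_keys  # e.g. ["ds", "region"]


class AddPartitionEvent:
    def __init__(self, table, partition_values):
        self.table = table                        # proposed: table reference
        self.partition_values = partition_values

    def partition_spec(self):
        # With the table available, a listener can pair key names with
        # values directly instead of querying the metastore for the schema.
        return dict(zip(self.table.partition_keys, self.partition_values))


event = AddPartitionEvent(Table("clicks", ["ds", "region"]),
                          ["2012-11-15", "us"])
spec = event.partition_spec()
```

A listener handed this event can build the full partition spec without a metastore round trip, which is the point of adding the table reference.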



[jira] [Updated] (HIVE-2691) Specify location of log4j configuration files via configuration properties

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-2691:
---

Status: Open  (was: Patch Available)

Patch doesn't apply cleanly anymore. Zhenxiao, can you please refresh the patch?

 Specify location of log4j configuration files via configuration properties
 --

 Key: HIVE-2691
 URL: https://issues.apache.org/jira/browse/HIVE-2691
 Project: Hive
  Issue Type: New Feature
  Components: Configuration, Logging
Reporter: Carl Steinbach
Assignee: Zhenxiao Luo
 Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1131.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.1.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.2.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.3.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.4.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.5.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D1203.6.patch, 
 ASF.LICENSE.NOT.GRANTED--HIVE-2691.D2667.1.patch, HIVE-2691.1.patch.txt, 
 HIVE-2691.D2667.1.patch


 Oozie needs to be able to override the default location of the log4j 
 configuration
 files from the Hive command line, e.g:
 {noformat}
 hive -hiveconf hive.log4j.file=/home/carl/hive-log4j.properties -hiveconf 
 hive.log4j.exec.file=/home/carl/hive-exec-log4j.properties
 {noformat}



[jira] [Updated] (HIVE-3428) Fix log4j configuration errors when running hive on hadoop23

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3428:
---

Status: Open  (was: Patch Available)

Patch doesn't apply cleanly. Zhenxiao, can you please refresh the patch?

 Fix log4j configuration errors when running hive on hadoop23
 

 Key: HIVE-3428
 URL: https://issues.apache.org/jira/browse/HIVE-3428
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Attachments: HIVE-3428.1.patch.txt, HIVE-3428.2.patch.txt, 
 HIVE-3428.3.patch.txt, HIVE-3428.4.patch.txt, HIVE-3428.5.patch.txt, 
 HIVE-3428.6.patch.txt


 There are log4j configuration errors when running hive on hadoop23. Some of 
 them may fail testcases, since the following log4j error messages could be 
 printed to the console or to the output file, which then differs from the 
 expected output:
 [junit]  log4j:ERROR Could not find value for key log4j.appender.NullAppender
 [junit]  log4j:ERROR Could not instantiate appender named NullAppender.
 [junit]  12/09/04 11:34:42 WARN conf.HiveConf: hive-site.xml not found on 
 CLASSPATH



[jira] [Updated] (HIVE-3398) Hive serde should support empty struct

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3398:
---

Fix Version/s: (was: 0.9.0)
   Status: Open  (was: Patch Available)

Patch doesn't apply cleanly anymore. Feng, can you refresh the patch?

 Hive serde should support empty struct
 --

 Key: HIVE-3398
 URL: https://issues.apache.org/jira/browse/HIVE-3398
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0
Reporter: Feng Peng
 Attachments: HIVE-3398_serde_empty_struct.patch


 Right now TypeInfoUtils expects at least one field in a STRUCT, which is not 
 always valid, e.g., an empty struct is allowed in Thrift. We should modify 
 TypeInfoUtils so that empty structs can be processed correctly.
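The fix amounts to letting the struct rule accept zero fields. A hypothetical illustration in a toy grammar (flat type strings only, no nesting; this is not Hive's actual TypeInfoUtils code):

```python
def parse_struct(type_str):
    """Parse a flat 'struct<name:type,...>' string into (name, type) pairs.

    Toy grammar with no nested structs. The key point is that zero fields
    ('struct<>') is accepted rather than rejected.
    """
    assert type_str.startswith("struct<") and type_str.endswith(">")
    body = type_str[len("struct<"):-1]
    if not body:                      # empty struct: legal in Thrift
        return []
    fields = []
    for part in body.split(","):
        name, ftype = part.split(":", 1)
        fields.append((name, ftype))
    return fields
```

With this shape of change, `struct<>` parses to an empty field list instead of raising an error for the missing first field.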



[jira] [Updated] (HIVE-3264) Add support for binary datatype to AvroSerde

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3264:
---

Status: Open  (was: Patch Available)

Marking as open, since it looks like it needs some more work.

 Add support for binary datatype to AvroSerde
 ---

 Key: HIVE-3264
 URL: https://issues.apache.org/jira/browse/HIVE-3264
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.9.0
Reporter: Jakob Homan
  Labels: patch
 Attachments: HIVE-3264-1.patch, HIVE-3264-2.patch, HIVE-3264-3.patch, 
 HIVE-3264-4.patch, HIVE-3264-5.patch


 When the AvroSerde was written, Hive didn't have a binary type, so Avro's 
 byte array type is converted to an array of small ints.  Now that HIVE-2380 
 is in, this step isn't necessary and we can convert both Avro's bytes type 
 and probably its fixed type to Hive's binary type.
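A rough illustration of the behavioral change being described (helper names are invented; the real conversion lives in the AvroSerde Java code):

```python
def avro_bytes_as_tinyint_list(payload):
    """Old behavior: surface an Avro 'bytes' value as a list of small ints."""
    return list(payload)


def avro_bytes_as_binary(payload):
    """Proposed behavior: pass an Avro 'bytes' value through as raw binary."""
    return payload


data = b"\x00\x10ab"
old = avro_bytes_as_tinyint_list(data)   # one int element per byte
new = avro_bytes_as_binary(data)         # the original byte string
```

The pass-through form is what Hive's BINARY type (added in HIVE-2380) makes possible, avoiding the array-of-ints detour.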



[jira] [Commented] (HIVE-3255) Add DBTokenStore to store Delegation Tokens in DB

2012-11-15 Thread Mark Grover (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498571#comment-13498571
 ] 

Mark Grover commented on HIVE-3255:
---

Looks good to me (non-committer). Thanks Ashutosh!

 Add DBTokenStore to store Delegation Tokens in DB
 -

 Key: HIVE-3255
 URL: https://issues.apache.org/jira/browse/HIVE-3255
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Security
Affects Versions: 0.9.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: hive-3255_2.patch, hive-3255_3.patch, hive-3255.patch


 Before HIVE-1696, the metastore was completely stateless and all the data 
 lived in the backend db. HIVE-1696 added delegation tokens for the metastore, 
 which the metastore needs to keep in memory. HIVE-2467 added support for 
 storing delegation tokens in ZooKeeper via an interface 
 {{DelegationTokenStore}}. This jira is about storing the tokens in the DB 
 backend that the metastore already uses for storing its data.



[jira] [Updated] (HIVE-3035) Modify clean target to remove ~/.ivy2/local/org.apache.hive ~/.ivy2/cache/org.apache.hive

2012-11-15 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3035:
---

   Resolution: Not A Problem
Fix Version/s: 0.10.0
   Status: Resolved  (was: Patch Available)

Stated problem doesn't exist on trunk any longer.

 Modify clean target to remove ~/.ivy2/local/org.apache.hive 
 ~/.ivy2/cache/org.apache.hive
 -

 Key: HIVE-3035
 URL: https://issues.apache.org/jira/browse/HIVE-3035
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.10.0
Reporter: Ashutosh Chauhan
Assignee: Edward Capriolo
 Fix For: 0.10.0

 Attachments: hive-3035.1.patch.txt


 Reported by Carl in HIVE-3014. Not sure if both dirs need to be removed or 
 only one of them will suffice.



[jira] [Commented] (HIVE-3392) Hive unnecessarily validates table SerDes when dropping a table

2012-11-15 Thread Ajesh Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13498660#comment-13498660
 ] 

Ajesh Kumar commented on HIVE-3392:
---

Hi Edward Capriolo,
As per my previous comment, can you say whether we need to do more work on 
this, or whether we can consider this issue resolved?

 Hive unnecessarily validates table SerDes when dropping a table
 ---

 Key: HIVE-3392
 URL: https://issues.apache.org/jira/browse/HIVE-3392
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Jonathan Natkins
Assignee: Ajesh Kumar
  Labels: patch
 Attachments: HIVE-3392.2.patch.txt, HIVE-3392.Test Case - 
 with_trunk_version.txt


 natty@hadoop1:~$ hive
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
  to class path
 Added resource: 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 hive> create table test (a int) row format serde 'hive.serde.JSONSerDe';  
   
 OK
 Time taken: 2.399 seconds
 natty@hadoop1:~$ hive
 hive> drop table test;

 FAILED: Hive Internal Error: 
 java.lang.RuntimeException(MetaException(message:org.apache.hadoop.hive.serde2.SerDeException
  SerDe hive.serde.JSONSerDe does not exist))
 java.lang.RuntimeException: 
 MetaException(message:org.apache.hadoop.hive.serde2.SerDeException SerDe 
 hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:262)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:253)
   at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:490)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:162)
   at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:943)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:700)
   at 
 org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:210)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 Caused by: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException 
 SerDe com.cloudera.hive.serde.JSONSerDe does not exist)
   at 
 org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:211)
   at 
 org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
   ... 20 more
 hive> add jar 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar;
 Added 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
  to class path
 Added resource: 
 /home/natty/source/sample-code/custom-serdes/target/custom-serdes-1.0-SNAPSHOT.jar
 hive> drop table test;
 OK
 Time taken: 0.658 seconds
 hive> 
