Hive-trunk-h0.21 - Build # 1066 - Failure

2011-11-08 Thread Apache Jenkins Server
Changes for Build #1066
[jvs] HIVE-2527 [jira] Consecutive string literals should be combined into a 
single
string literal.
(Jonathan Chang via jvs)

Summary:
HIVE

C, Python, etc. all support this magical feature.

Test Plan: EMPTY

Reviewers: JIRA, jsichi

Reviewed By: jsichi

CC: akramer, jonchang, jsichi

Differential Revision: 147




1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception
See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
more logs.
at junit.framework.Assert.fail(Assert.java:50)
at org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:9330)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1066)

Status: Failure

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1066/ to 
view the results.


[jira] [Commented] (HIVE-2527) Consecutive string literals should be combined into a single string literal.

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146138#comment-13146138
 ] 

Hudson commented on HIVE-2527:
--

Integrated in Hive-trunk-h0.21 #1066 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1066/])
HIVE-2527 [jira] Consecutive string literals should be combined into a 
single
string literal.
(Jonathan Chang via jvs)

Summary:
HIVE

C, Python, etc. all support this magical feature.

Test Plan: EMPTY

Reviewers: JIRA, jsichi

Reviewed By: jsichi

CC: akramer, jonchang, jsichi

Differential Revision: 147

jvs : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199066
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/Hive.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/queries/clientpositive/literal_string.q
* /hive/trunk/ql/src/test/results/clientpositive/literal_string.q.out


 Consecutive string literals should be combined into a single string literal.
 

 Key: HIVE-2527
 URL: https://issues.apache.org/jira/browse/HIVE-2527
 Project: Hive
  Issue Type: Improvement
Reporter: Jonathan Chang
Assignee: Jonathan Chang
Priority: Minor
 Fix For: 0.9.0

 Attachments: D147.1.patch, D147.2.patch, D147.3.patch, D147.3.patch, 
 D147.4.patch, D147.4.patch


 C, Python, etc. all support this magical feature.
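 As an illustration (table name src taken from Hive's standard test data), after this
 change a query such as

   SELECT 'foo' 'bar' FROM src LIMIT 1;

 should behave the same as SELECT 'foobar' FROM src LIMIT 1;.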

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-trunk-h0.21 - Build # 1067 - Fixed

2011-11-08 Thread Apache Jenkins Server
Changes for Build #1066
[jvs] HIVE-2527 [jira] Consecutive string literals should be combined into a 
single
string literal.
(Jonathan Chang via jvs)

Summary:
HIVE

C, Python, etc. all support this magical feature.

Test Plan: EMPTY

Reviewers: JIRA, jsichi

Reviewed By: jsichi

CC: akramer, jonchang, jsichi

Differential Revision: 147


Changes for Build #1067
[namit] HIVE-2466 mapjoin_subquery dump small table (mapjoin table) to the same 
file
(binlijin via namit)

[nzhang] HIVE-2545. Make metastore log4j configuration file configurable again. 
(Kevin Wilfong via Ning Zhang)




All tests passed

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1067)

Status: Fixed

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1067/ to 
view the results.


[jira] [Commented] (HIVE-2466) mapjoin_subquery dump small table (mapjoin table) to the same file

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146227#comment-13146227
 ] 

Hudson commented on HIVE-2466:
--

Integrated in Hive-trunk-h0.21 #1067 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1067/])
HIVE-2466 mapjoin_subquery dump small table (mapjoin table) to the same file
(binlijin via namit)

namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199117
Files : 
* /hive/trunk/data/files/x.txt
* /hive/trunk/data/files/y.txt
* /hive/trunk/data/files/z.txt
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapredLocalTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/GenMRSkewJoinProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/HashTableSinkDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapJoinDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java
* /hive/trunk/ql/src/test/queries/clientpositive/mapjoin_subquery2.q
* /hive/trunk/ql/src/test/results/clientpositive/mapjoin_subquery2.q.out


 mapjoin_subquery dump small table (mapjoin table) to the same file
 ---

 Key: HIVE-2466
 URL: https://issues.apache.org/jira/browse/HIVE-2466
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.7.1
Reporter: binlijin
Assignee: binlijin
Priority: Critical
 Attachments: D285.1.patch, D285.2.patch, hive-2466.1.patch, 
 hive-2466.2.patch, hive-2466.3.patch, hive-2466.4.patch


 in mapjoin_subquery.q  there is a query:
 SELECT /*+ MAPJOIN(z) */ subq.key1, z.value
 FROM
 (SELECT /*+ MAPJOIN(x) */ x.key as key1, x.value as value1, y.key as key2, 
 y.value as value2 
  FROM src1 x JOIN src y ON (x.key = y.key)) subq
  JOIN srcpart z ON (subq.key1 = z.key and z.ds='2008-04-08' and z.hr=11);
 when dumping x and z to a local file, they are all dumped to the same file, so we lose 
 the data of x

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2545) Make metastore log4j configuration file configurable again.

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146226#comment-13146226
 ] 

Hudson commented on HIVE-2545:
--

Integrated in Hive-trunk-h0.21 #1067 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1067/])
HIVE-2545. Make metastore log4j configuration file configurable again. 
(Kevin Wilfong via Ning Zhang)

nzhang : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199114
Files : 
* /hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java


 Make metastore log4j configuration file configurable again.
 ---

 Key: HIVE-2545
 URL: https://issues.apache.org/jira/browse/HIVE-2545
 Project: Hive
  Issue Type: Improvement
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Fix For: 0.9.0

 Attachments: HIVE-2545.1.patch.txt


 The patch for https://issues.apache.org/jira/browse/HIVE-2139 hard coded the 
 metastore to use hive-log4j.properties as the log4j configuration file.  
 Previously this was configurable through the log4j.configuration variable 
 passed into Java.  It should be configurable again, though not necessarily 
 through the same means.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1434) Cassandra Storage Handler

2011-11-08 Thread Nicolas Lalevée (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146270#comment-13146270
 ] 

Nicolas Lalevée commented on HIVE-1434:
---

I finally found the source of the brisk version. As suggested by Jonathan, I 
made it a patch there: CASSANDRA-913

 Cassandra Storage Handler
 -

 Key: HIVE-1434
 URL: https://issues.apache.org/jira/browse/HIVE-1434
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.7.0
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-1434-r1182878.patch, cas-handle.tar.gz, 
 cass_handler.diff, hive-1434-1.txt, hive-1434-2-patch.txt, 
 hive-1434-2011-02-26.patch.txt, hive-1434-2011-03-07.patch.txt, 
 hive-1434-2011-03-07.patch.txt, hive-1434-2011-03-14.patch.txt, 
 hive-1434-3-patch.txt, hive-1434-4-patch.txt, hive-1434-5.patch.txt, 
 hive-1434.2011-02-27.diff.txt, hive-cassandra.2011-02-25.txt, hive.diff


 Add a cassandra storage handler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2512) After HIVE-2145, Hive disallow any use of function in cluster-by clause

2011-11-08 Thread Chinna Rao Lalam (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2512:
---

Attachment: HIVE-2512.2.patch

 After HIVE-2145, Hive disallow any use of function in cluster-by clause
 ---

 Key: HIVE-2512
 URL: https://issues.apache.org/jira/browse/HIVE-2512
 Project: Hive
  Issue Type: Bug
Reporter: Ning Zhang
Assignee: Chinna Rao Lalam
 Attachments: HIVE-2512.1.patch, HIVE-2512.2.patch, HIVE-2512.patch


 After HIVE-2145, the following query returns a semantic analysis error: 
 FROM src SELECT * cluster by rand();
 FAILED: Error in semantic analysis: functions are not supported in order by
 Looking back at HIVE-2145, it's clear that the patch is more restrictive than 
 necessary. 
 Chinna, are you able to work on it? Please let me know if you don't have 
 cycles to do it now. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2512) After HIVE-2145, Hive disallow any use of function in cluster-by clause

2011-11-08 Thread Chinna Rao Lalam (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146373#comment-13146373
 ] 

Chinna Rao Lalam commented on HIVE-2512:


select key, count(1) cnt from src group by key order by count(1) limit 10;

Here order by is used with an aggregate function. In the execution flow of this query,
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.DefaultExprProcessor.getXpathOrFuncExprNodeDesc(ASTNode,
boolean, ArrayList<ExprNodeDesc>, TypeCheckCtx) expects a GenericUDF while constructing the
ExprNodeGenericFuncDesc, but here it is invoked with an aggregate function, so it returns
null and a NullPointerException is thrown.

So, before constructing the ExprNodeGenericFuncDesc, a check was added: if the function is
a UDAF, throw an exception.

The following queries should still work:

select key,min(key) from src group by key having min(key)  100;
select key,min(key) as mininum from src group by key order by mininum;

 After HIVE-2145, Hive disallow any use of function in cluster-by clause
 ---

 Key: HIVE-2512
 URL: https://issues.apache.org/jira/browse/HIVE-2512
 Project: Hive
  Issue Type: Bug
Reporter: Ning Zhang
Assignee: Chinna Rao Lalam
 Attachments: HIVE-2512.1.patch, HIVE-2512.2.patch, HIVE-2512.patch


 After HIVE-2145, the following query returns a semantic analysis error: 
 FROM src SELECT * cluster by rand();
 FAILED: Error in semantic analysis: functions are not supported in order by
 Looking back at HIVE-2145, it's clear that the patch is more restrictive than 
 necessary. 
 Chinna, are you able to work on it? Please let me know if you don't have 
 cycles to do it now. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Review Request: Hive disallow any use of function in cluster-by clause

2011-11-08 Thread chinnarao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2479/
---

(Updated 2011-11-08 16:03:21.111741)


Review request for hive and Ning Zhang.


Changes
---

select key, count(1) cnt from src group by key order by count(1) limit 10;

Here order by is used with an aggregate function. In the execution flow of this query,
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.DefaultExprProcessor.getXpathOrFuncExprNodeDesc(ASTNode,
boolean, ArrayList<ExprNodeDesc>, TypeCheckCtx) expects a GenericUDF while constructing the
ExprNodeGenericFuncDesc, but here it is invoked with an aggregate function, so it returns
null and a NullPointerException is thrown.

So, before constructing the ExprNodeGenericFuncDesc, a check was added: if the function is
a UDAF, throw an exception.

The following queries should still work:

select key,min(key) from src group by key having min(key)  100;
select key,min(key) as mininum from src group by key order by mininum;


Summary
---

SemanticAnalyzer.genReduceSinkPlan() added a function validation check for order by and
sort by; cluster by executes in the same flow, so the exception was being thrown for
cluster by as well.

Order by and sort by do not support functions, so the validation check is needed there.
The check is now modified so that it applies only to order by and sort by.


This addresses bug HIVE-2512.
https://issues.apache.org/jira/browse/HIVE-2512


Diffs (updated)
-

  trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ErrorMsg.java 1198626 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java 
1198626 
  trunk/ql/src/test/queries/clientnegative/orderby_function.q PRE-CREATION 
  trunk/ql/src/test/queries/clientnegative/orderby_function1.q PRE-CREATION 
  trunk/ql/src/test/queries/clientnegative/orderby_function2.q PRE-CREATION 
  trunk/ql/src/test/queries/clientnegative/sortby_function.q PRE-CREATION 
  trunk/ql/src/test/queries/clientpositive/orderby_function.q PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/orderby_function.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/orderby_function1.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/orderby_function2.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/sortby_function.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientpositive/orderby_function.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/2479/diff


Testing
---

All unit tests passed


Thanks,

chinna



[jira] [Commented] (HIVE-2512) After HIVE-2145, Hive disallow any use of function in cluster-by clause

2011-11-08 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146377#comment-13146377
 ] 

jirapos...@reviews.apache.org commented on HIVE-2512:
-


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2479/
---

(Updated 2011-11-08 16:03:21.111741)


Review request for hive and Ning Zhang.


Changes
---

select key, count(1) cnt from src group by key order by count(1) limit 10;

Here order by is used with an aggregate function. In the execution flow of this query,
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.DefaultExprProcessor.getXpathOrFuncExprNodeDesc(ASTNode,
boolean, ArrayList<ExprNodeDesc>, TypeCheckCtx) expects a GenericUDF while constructing the
ExprNodeGenericFuncDesc, but here it is invoked with an aggregate function, so it returns
null and a NullPointerException is thrown.

So, before constructing the ExprNodeGenericFuncDesc, a check was added: if the function is
a UDAF, throw an exception.

The following queries should still work:

select key,min(key) from src group by key having min(key)  100;
select key,min(key) as mininum from src group by key order by mininum;


Summary
---

SemanticAnalyzer.genReduceSinkPlan() added a function validation check for order by and
sort by; cluster by executes in the same flow, so the exception was being thrown for
cluster by as well.

Order by and sort by do not support functions, so the validation check is needed there.
The check is now modified so that it applies only to order by and sort by.


This addresses bug HIVE-2512.
https://issues.apache.org/jira/browse/HIVE-2512


Diffs (updated)
-

  trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ErrorMsg.java 1198626 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java 
1198626 
  trunk/ql/src/test/queries/clientnegative/orderby_function.q PRE-CREATION 
  trunk/ql/src/test/queries/clientnegative/orderby_function1.q PRE-CREATION 
  trunk/ql/src/test/queries/clientnegative/orderby_function2.q PRE-CREATION 
  trunk/ql/src/test/queries/clientnegative/sortby_function.q PRE-CREATION 
  trunk/ql/src/test/queries/clientpositive/orderby_function.q PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/orderby_function.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/orderby_function1.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/orderby_function2.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/sortby_function.q.out PRE-CREATION 
  trunk/ql/src/test/results/clientpositive/orderby_function.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/2479/diff


Testing
---

All unit tests passed


Thanks,

chinna



 After HIVE-2145, Hive disallow any use of function in cluster-by clause
 ---

 Key: HIVE-2512
 URL: https://issues.apache.org/jira/browse/HIVE-2512
 Project: Hive
  Issue Type: Bug
Reporter: Ning Zhang
Assignee: Chinna Rao Lalam
 Attachments: HIVE-2512.1.patch, HIVE-2512.2.patch, HIVE-2512.patch


 After HIVE-2145, the following query returns a semantic analysis error: 
 FROM src SELECT * cluster by rand();
 FAILED: Error in semantic analysis: functions are not supported in order by
 Looking back at HIVE-2145, it's clear that the patch is more restrictive than 
 necessary. 
 Chinna, are you able to work on it? Please let me know if you don't have 
 cycles to do it now. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2512) After HIVE-2145, Hive disallow any use of function in cluster-by clause

2011-11-08 Thread Chinna Rao Lalam (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2512:
---

Status: Patch Available  (was: Open)

 After HIVE-2145, Hive disallow any use of function in cluster-by clause
 ---

 Key: HIVE-2512
 URL: https://issues.apache.org/jira/browse/HIVE-2512
 Project: Hive
  Issue Type: Bug
Reporter: Ning Zhang
Assignee: Chinna Rao Lalam
 Attachments: HIVE-2512.1.patch, HIVE-2512.2.patch, HIVE-2512.patch


 After HIVE-2145, the following query returns a semantic analysis error: 
 FROM src SELECT * cluster by rand();
 FAILED: Error in semantic analysis: functions are not supported in order by
 Looking back at HIVE-2145, it's clear that the patch is more restrictive than 
 necessary. 
 Chinna, are you able to work on it? Please let me know if you don't have 
 cycles to do it now. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2559) Add target to intall Hive JARs/POMs in the local Maven cache

2011-11-08 Thread Alejandro Abdelnur (Created) (JIRA)
Add target to intall Hive JARs/POMs in the local Maven cache


 Key: HIVE-2559
 URL: https://issues.apache.org/jira/browse/HIVE-2559
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Alejandro Abdelnur
Priority: Critical


HIVE-2391 is producing usable Maven artifacts.

However, it only has a target to deploy/publish those artifacts to Apache Maven 
repos.

There should be a new target to locally install Hive Maven artifacts, thus 
enabling their use from other projects before they are committed/published to 
Apache Maven (this is critical to test patches that may address issues in 
downstream components).


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2478) Support dry run option in hive

2011-11-08 Thread Sushanth Sowmyan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146396#comment-13146396
 ] 

Sushanth Sowmyan commented on HIVE-2478:


Hi,

I have a question - does EXPLAIN execute any user-specified code? I'm faced 
with an environment issue where I don't want to execute any user-specified code 
(UDFs, etc.) or run anything, merely to syntax-check. Would EXPLAIN still fit my 
need?

Thanks!

 Support dry run option in hive
 --

 Key: HIVE-2478
 URL: https://issues.apache.org/jira/browse/HIVE-2478
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.9.0
Reporter: kalyan ram
Priority: Minor
 Attachments: HIVE-2478-1.patch


 Hive currently doesn't support a dry run option. For some complex queries we 
 just want to verify the query syntax initially before running it. A dry run 
 option where just the parsing is done without actual execution is a good 
 option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2559) Add target to install Hive JARs/POMs in the local Maven cache

2011-11-08 Thread Alejandro Abdelnur (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HIVE-2559:
-

Summary: Add target to install Hive JARs/POMs in the local Maven cache  
(was: Add target to intall Hive JARs/POMs in the local Maven cache)

 Add target to install Hive JARs/POMs in the local Maven cache
 -

 Key: HIVE-2559
 URL: https://issues.apache.org/jira/browse/HIVE-2559
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Alejandro Abdelnur
Priority: Critical

 HIVE-2391 is producing usable Maven artifacts.
 However, it only has a target to deploy/publish those artifacts to Apache 
 Maven repos.
 There should be a new target to locally install Hive Maven artifacts, thus 
 enabling their use from other projects before they are committed/published to 
 Apache Maven (this is critical to test patches that may address issues in 
 downstream components).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2472) Metastore statistics are not being updated for CTAS queries.

2011-11-08 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146438#comment-13146438
 ] 

jirapos...@reviews.apache.org commented on HIVE-2472:
-



bq.  On 2011-11-07 22:24:59, Ning Zhang wrote:
bq.   trunk/ql/src/test/results/clientpositive/ctas.q.out, line 25
bq.   https://reviews.apache.org/r/2583/diff/4/?file=56200#file56200line25
bq.  
bq.   Here I think the plan should be stage-3 (StatsTask) depends on 
stage-4 (DDLTask), which depends on stage-0 (MoveTask).
bq.   
bq.   Also can you change the .q file to add describe formatted 
created_table to verify that the stats are gathered for the newly created 
table after CTAS?

Change to amend this was done in Semantic Analyzer around line 7050


- Robert


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2583/#review3089
---


On 2011-11-08 04:08:52, Robert Surówka wrote:
bq.  
bq.  ---
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/2583/
bq.  ---
bq.  
bq.  (Updated 2011-11-08 04:08:52)
bq.  
bq.  
bq.  Review request for Ning Zhang and Kevin Wilfong.
bq.  
bq.  
bq.  Summary
bq.  ---
bq.  
bq.  Explanation of how stats for CTAS were added (line numbers may be slightly 
off due to repository changes):
bq.  
bq.  
bq.  Because CTAS contains an INSERT, the approach was to reuse as much as possible 
from what is already there for INSERT.
bq.  
bq.  There were 2 main issues: to make sure that FileSinkOperators will gather 
stats, and that there will be StatsTask that will then aggregate them and store 
to Metastore.
bq.  
bq.  FileSinkOperator gathers stats if conf.isGatherStats (line 576) is true. 
It is set to true upon adding StatsTask in GenMRFileSink1 (126) which will 
happen if isInsertTable will be true, which is set in 105 (I didn't change 
comment since it is still being set due to INSERT OVERWRITE that is just a part 
of the CTAS). To make it true, one must mark that CTAS contains an insert into the 
table and add the TableSpec, which was done in SemanticAnalyzer (1051) 
(BaseSemanticAnalyzer tableSpec() had to be changed to support 
TOK_CREATETABLE). 
bq.  
bq.  The next issue was to supply to StatsWork (part of StatsTask) information 
about the table being created. To do that, the database name was added to 
CreateTableDesc, and it is set in SemanticAnalyzer (7878). Then this 
CreateTableDesc is added to LoadFileDesc (just to get table info) in 
SemanticAnalyzer (4000), which then is added to StatsWork in GenMRFileSink1 
(170). This StatsWork is later used by StatsTask to get the table info.
bq.  
bq.  Another thing was that StatsTask would be called before the 
CreateTableTask. To remedy that, a change in SemanticAnalyzer(7048) was made, 
so for CTAS the StatsTask will be moved to be after the crtTblTask.
bq.  
bq.  Finally in StatsTask, support for the LoadFileDesc was added (which is 
present for CTAS). Importantly, line 306 was changed, since for CTAS there was 
an empty partitionList instead of null (this last change took me around 3 
hours to find, since it was the last place I looked when figuring out what was 
wrong).
bq.  
bq.  
bq.  I noticed that Cannot get table db1.db1.conflict_name was added to 
database.q.out at line 1224, but it wasn't present in the previous diff version, 
which contained exactly the same Java code, so I assume it is due to some other work 
happening concurrently.
bq.  
bq.  
bq.  This addresses bug HIVE-2472.
bq.  https://issues.apache.org/jira/browse/HIVE-2472
bq.  
bq.  
bq.  Diffs
bq.  -
bq.  
bq.trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/StatsTask.java 1199067 
bq.
trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRFileSink1.java 
1199067 
bq.
trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java 
1199067 
bq.trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
1199067 
bq.trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/CreateTableDesc.java 
1199067 
bq.trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/LoadFileDesc.java 
1199067 
bq.trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/PlanUtils.java 1199067 
bq.trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/StatsWork.java 1199067 
bq.trunk/ql/src/test/queries/clientpositive/ctas.q 1199067 
bq.trunk/ql/src/test/results/clientpositive/ctas.q.out 1199067 
bq.trunk/ql/src/test/results/clientpositive/database.q.out 1199067 
bq.trunk/ql/src/test/results/clientpositive/merge3.q.out 1199067 
bq.trunk/ql/src/test/results/clientpositive/rcfile_createas1.q.out 1199067 
bq.

[jira] [Commented] (HIVE-2553) Use hashing instead of list traversal for IN operator for primitive types

2011-11-08 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146443#comment-13146443
 ] 

jirapos...@reviews.apache.org commented on HIVE-2553:
-


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2733/
---

(Updated 2011-11-08 18:03:51.950692)


Review request for Ning Zhang.


Summary (updated)
---

Introduction of Hashing for IN operator for constant values
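As a rough illustration of the kind of expression this targets (the values are made up), an
IN list made entirely of constants, e.g.

  SELECT key FROM src WHERE key IN ('0', '10', '100', '1000');

can be checked against a hash-based lookup instead of traversing the list for every row.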


This addresses bug HIVE-2553.
https://issues.apache.org/jira/browse/HIVE-2553


Diffs
-

  trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java 
1199066 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java 
1199066 
  
trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/StructTypeInfo.java 
1199066 

Diff: https://reviews.apache.org/r/2733/diff


Testing
---

Worked on some sample queries


Thanks,

Robert



 Use hashing instead of list traversal for IN operator for primitive types
 -

 Key: HIVE-2553
 URL: https://issues.apache.org/jira/browse/HIVE-2553
 Project: Hive
  Issue Type: Improvement
Reporter: Robert Surówka
Assignee: Robert Surówka
Priority: Minor
 Attachments: HIVE-2553.1.patch, HIVE-2553.2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2478) Support dry run option in hive

2011-11-08 Thread John Sichi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146486#comment-13146486
 ] 

John Sichi commented on HIVE-2478:
--

Yes, it does execute user-defined code, e.g. for resolving the types of UDF 
invocations.  Without that, you'd have to stop immediately after ANTLR parsing 
(pure syntax check, no semantic analysis).
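For example (my_udf here is a hypothetical user-defined function), even

  EXPLAIN SELECT my_udf(key) FROM src;

has to instantiate my_udf during semantic analysis to resolve its return type, so some
user-supplied code runs even though the query itself is never executed.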


 Support dry run option in hive
 --

 Key: HIVE-2478
 URL: https://issues.apache.org/jira/browse/HIVE-2478
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.9.0
Reporter: kalyan ram
Priority: Minor
 Attachments: HIVE-2478-1.patch


 Hive currently doesn't support a dry run option. For some complex queries we 
 just want to verify the query syntax initially before running it. A dry run 
 option where just the parsing is done without actual execution is a good 
 option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2391) published POMs in Maven repo are incorrect

2011-11-08 Thread John Sichi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146488#comment-13146488
 ] 

John Sichi commented on HIVE-2391:
--

OK, +1, committed to trunk.  I'll leave this open and you can mark it resolved 
once you commit the backport?


 published POMs in Maven repo are incorrect
 --

 Key: HIVE-2391
 URL: https://issues.apache.org/jira/browse/HIVE-2391
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.7.1
Reporter: Alejandro Abdelnur
Assignee: Carl Steinbach
Priority: Critical
 Fix For: 0.8.0

 Attachments: HIVE-2391.1.patch.txt, HIVE-2391.2.patch.txt, 
 HIVE-2391.3.patch.txt, HIVE-2391.4.patch.txt, HIVE-2391.5.patch.txt, 
 HIVE-2391.wip.1.patch.txt


 The Hive artifacts published in Apache Maven SNAPSHOTS repo are incorrect. 
 Dependencies are not complete.
 Even after adding as dependencies ALL the Hive artifacts it is not possible 
 to compile a project using Hive JARs (I'm trying to integrate Oozie Hive 
 Action using Apache Hive).
 As a reference the Hive CDH POMs dependencies could be used (Using those 
 artifacts I'm able to compile/test/run Hive from within Oozie).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2434) add a TM to Hive logo image

2011-11-08 Thread John Sichi (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2434:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pure awesomeness.

 add a TM to Hive logo image
 ---

 Key: HIVE-2434
 URL: https://issues.apache.org/jira/browse/HIVE-2434
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi
Assignee: Charles Chen
 Attachments: hive  job tracker icons (font outlines)-withtm.pdf, 
 hive_logo_medium.jpg, hive_logo_medium.pdf


 http://www.apache.org/foundation/marks/pmcs.html#graphics
 And maybe the feather?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-1434) Cassandra Storage Handler

2011-11-08 Thread John Sichi (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-1434:
-

Status: Open  (was: Patch Available)

 Cassandra Storage Handler
 -

 Key: HIVE-1434
 URL: https://issues.apache.org/jira/browse/HIVE-1434
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.7.0
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-1434-r1182878.patch, cas-handle.tar.gz, 
 cass_handler.diff, hive-1434-1.txt, hive-1434-2-patch.txt, 
 hive-1434-2011-02-26.patch.txt, hive-1434-2011-03-07.patch.txt, 
 hive-1434-2011-03-07.patch.txt, hive-1434-2011-03-14.patch.txt, 
 hive-1434-3-patch.txt, hive-1434-4-patch.txt, hive-1434-5.patch.txt, 
 hive-1434.2011-02-27.diff.txt, hive-cassandra.2011-02-25.txt, hive.diff


 Add a cassandra storage handler.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2548) How to submit documentation fixes

2011-11-08 Thread John Sichi (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2548:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 How to submit documentation fixes
 -

 Key: HIVE-2548
 URL: https://issues.apache.org/jira/browse/HIVE-2548
 Project: Hive
  Issue Type: Bug
  Components: Documentation
Affects Versions: 0.7.1
 Environment: general linux
Reporter: Stephen Boesch
Assignee: Stephen Boesch
Priority: Minor
 Fix For: 0.8.0

   Original Estimate: 1h
  Remaining Estimate: 1h

 I am walking through the developer's guide and tutorial and finding issues: 
 e.g. broken links.   Is there a way to try out updates to the docs and submit 
 patches?
 Here is the first example on https://cwiki.apache.org/Hive/tutorial.html
 The following examples highlight some salient features of the system. A 
 detailed set of query test cases can be found at Hive Query Test Cases and 
 the corresponding results can be found at Query Test Case Results.
 The first link is listed as 
 http://svn.apache.org/viewvc/hadoop/hive/trunk/ql/src/test/queries/clientpositive/
 Second link is 
 http://svn.apache.org/viewvc/hadoop/hive/trunk/ql/src/test/results/clientpositive/
 Both links are 404's

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2547) Tiny bug in init-hive-dfs.sh

2011-11-08 Thread John Sichi (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2547:
-

Status: Open  (was: Patch Available)

 Tiny bug in init-hive-dfs.sh 
 -

 Key: HIVE-2547
 URL: https://issues.apache.org/jira/browse/HIVE-2547
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.8.0
 Environment: ubuntu / general linux
Reporter: Stephen Boesch
Assignee: Stephen Boesch
Priority: Minor
  Labels: initialization
 Fix For: 0.8.0

   Original Estimate: 5m
  Remaining Estimate: 5m

 init-hive-dfs.sh seems to have a small typo on line 73 in which it requests 
 $HADOOP instead of $HADOOP_EXEC

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2478) Support dry run option in hive

2011-11-08 Thread Sushanth Sowmyan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-2478:
---

Attachment: HIVE-2478-2.patch

Would this modification of Kalyan's patch do the trick then?

(Tests not added yet, will do so if intent is okay)

 Support dry run option in hive
 --

 Key: HIVE-2478
 URL: https://issues.apache.org/jira/browse/HIVE-2478
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.9.0
Reporter: kalyan ram
Priority: Minor
 Attachments: HIVE-2478-1.patch, HIVE-2478-2.patch


 Hive currently doesn't support a dry run option. For some complex queries we 
 just want to verify the query syntax initially before running it. A dry run 
 option where just the parsing is done without actual execution is a good 
 option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2532) Evaluation of non-deterministic/stateful UDFs should not be skipped even if constant oi is returned.

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146526#comment-13146526
 ] 

Phabricator commented on HIVE-2532:
---

jsichi has commented on the revision HIVE-2532 [jira] Evaluation of 
non-deterministic/stateful UDFs should not be skipped even if constant oi is 
returned..

  Try this:

  SELECT 1+ASSERT_TRUE(x  2) FROM src LATERAL VIEW EXPLODE(ARRAY(1, 2)) a AS x 
LIMIT 2;

  I assume it should hit an exception, but it actually passes.  Guess why?



INLINE COMMENTS
  
ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java:144
 FunctionRegistry.isDeterministic always returns false for a stateful UDF, so 
you don't need to double-check here.

REVISION DETAIL
  https://reviews.facebook.net/D273


 Evaluation of non-deterministic/stateful UDFs should not be skipped even if 
 constant oi is returned.
 

 Key: HIVE-2532
 URL: https://issues.apache.org/jira/browse/HIVE-2532
 Project: Hive
  Issue Type: Bug
Reporter: Jonathan Chang
Assignee: Jonathan Chang
 Attachments: D273.1.patch


 Even if constant oi is returned, these may have stateful/side-effect behavior 
 and hence need to be called each cycle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2546) add explain formatted

2011-11-08 Thread He Yongqiang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146532#comment-13146532
 ] 

He Yongqiang commented on HIVE-2546:


+1, will commit after tests pass

 add explain formatted
 -

 Key: HIVE-2546
 URL: https://issues.apache.org/jira/browse/HIVE-2546
 Project: Hive
  Issue Type: Improvement
Reporter: Namit Jain
Assignee: Namit Jain
 Attachments: D261.1.patch, D261.2.patch, hive.2546.1.patch


 The output can be a JSON string, which can then easily be parsed by another
 program.
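 For example, presumably something like

   EXPLAIN FORMATTED SELECT key, count(1) FROM src GROUP BY key;

 would print the plan as JSON that another tool can consume directly.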

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread John Sichi (Created) (JIRA)
speed up Hive unit tests by configuring Derby to be non-durable
---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota


Try setting derby.system.durability=test to see if it can speed up metastore 
writes while running Hive ant test.

http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-0.8.0-SNAPSHOT-h0.21 - Build # 86 - Fixed

2011-11-08 Thread Apache Jenkins Server
Changes for Build #85

Changes for Build #86



All tests passed

The Apache Jenkins build system has built Hive-0.8.0-SNAPSHOT-h0.21 (build #86)

Status: Fixed

Check console output at 
https://builds.apache.org/job/Hive-0.8.0-SNAPSHOT-h0.21/86/ to view the results.


[jira] [Commented] (HIVE-2532) Evaluation of non-deterministic/stateful UDFs should not be skipped even if constant oi is returned.

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146562#comment-13146562
 ] 

Phabricator commented on HIVE-2532:
---

jonchang has planned changes to the revision HIVE-2532 [jira] Evaluation of 
non-deterministic/stateful UDFs should not be skipped even if constant oi is 
returned..

  Ugh.  Will fix and add new unittest

REVISION DETAIL
  https://reviews.facebook.net/D273


 Evaluation of non-deterministic/stateful UDFs should not be skipped even if 
 constant oi is returned.
 

 Key: HIVE-2532
 URL: https://issues.apache.org/jira/browse/HIVE-2532
 Project: Hive
  Issue Type: Bug
Reporter: Jonathan Chang
Assignee: Jonathan Chang
 Attachments: D273.1.patch


 Even if constant oi is returned, these may have stateful/side-effect behavior 
 and hence need to be called each cycle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2478) Support dry run option in hive

2011-11-08 Thread John Sichi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146571#comment-13146571
 ] 

John Sichi commented on HIVE-2478:
--

If we're going to add this, it would be best to generalize it so that you can 
choose what phase to stop after, e.g.

hive.exec.dryrun={off,parse,analyze,plan}
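
Under that sketch (the property name is only proposed here, not implemented), a session
could opt in with something like:

  set hive.exec.dryrun=analyze;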


 Support dry run option in hive
 --

 Key: HIVE-2478
 URL: https://issues.apache.org/jira/browse/HIVE-2478
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.9.0
Reporter: kalyan ram
Priority: Minor
 Attachments: HIVE-2478-1.patch, HIVE-2478-2.patch


 Hive currently doesn't support a dry run option. For some complex queries we 
 just want to verify the query syntax initially before running it. A dry run 
 option where just the parsing is done without actual execution is a good 
 option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2478) Support dry run option in hive

2011-11-08 Thread Sushanth Sowmyan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146575#comment-13146575
 ] 

Sushanth Sowmyan commented on HIVE-2478:


I like that idea! Ok, will look into it.

 Support dry run option in hive
 --

 Key: HIVE-2478
 URL: https://issues.apache.org/jira/browse/HIVE-2478
 Project: Hive
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 0.9.0
Reporter: kalyan ram
Priority: Minor
 Attachments: HIVE-2478-1.patch, HIVE-2478-2.patch


 Hive currently doesn't support a dry run option. For some complex queries we 
 just want to verify the query syntax initially before running it. A dry run 
 option where just the parsing is done without actual execution is a good 
 option.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2560:
--

Attachment: D327.1.patch

mareksapotafb requested code review of HIVE-2560 [jira] speed up Hive unit 
tests by configuring Derby to be non-durable.
Reviewers: JIRA

  Set the derby.system.durability property in build-common.xml

  Try setting derby.system.durability=test to see if it can speed up metastore 
writes while running Hive ant test.

  http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D327

AFFECTED FILES
  build-common.xml

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/657/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146581#comment-13146581
 ] 

Phabricator commented on HIVE-2560:
---

mareksapotafb has commented on the revision HIVE-2560 [jira] speed up Hive 
unit tests by configuring Derby to be non-durable.

  Now I just have to test and see if it actually works=)

REVISION DETAIL
  https://reviews.facebook.net/D327


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2560:
--

Attachment: D327.2.patch

mareksapotafb updated the revision HIVE-2560 [jira] speed up Hive unit tests 
by configuring Derby to be non-durable.
Reviewers: JIRA

  Maybe I should place it here?

REVISION DETAIL
  https://reviews.facebook.net/D327

AFFECTED FILES
  build-common.xml
  conf/hive-default.xml


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch, D327.2.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146608#comment-13146608
 ] 

Phabricator commented on HIVE-2560:
---

jsichi has commented on the revision HIVE-2560 [jira] speed up Hive unit tests 
by configuring Derby to be non-durable.

  You should be able to verify that the setting has been picked up by seeing 
this in derby.log:

  WARNING: The database is booted with derby.system.durability=test.
  In this mode, it is possible that database may not be able to recover, 
committed transactions may be lost, and the database may be in an inconsistent 
state. Please use this mode only when these consequences are acceptable.


REVISION DETAIL
  https://reviews.facebook.net/D327


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch, D327.2.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146613#comment-13146613
 ] 

Phabricator commented on HIVE-2560:
---

mareksapotafb has commented on the revision HIVE-2560 [jira] speed up Hive 
unit tests by configuring Derby to be non-durable.

  With both changes the setting got picked up.  I can check which change 
actually caused it after the tests finish.

REVISION DETAIL
  https://reviews.facebook.net/D327


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch, D327.2.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2523) add a new builtins subproject

2011-11-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2523:
--

Attachment: D267.2.patch

jsichi updated the revision HIVE-2523 [jira] add a new builtins subproject.
Reviewers: JIRA

  Fix some tests, still need to run through the whole suite again.

REVISION DETAIL
  https://reviews.facebook.net/D267

AFFECTED FILES
  eclipse-templates/.classpath
  builtins
  builtins/test
  builtins/test/iris.txt
  builtins/test/cleanup.sql
  builtins/test/onerow.txt
  builtins/test/setup.sql
  builtins/ivy.xml
  builtins/src
  builtins/src/org
  builtins/src/org/apache
  builtins/src/org/apache/hive
  builtins/src/org/apache/hive/builtins
  builtins/src/org/apache/hive/builtins/UDAFUnionMap.java
  builtins/src/org/apache/hive/builtins/BuiltinUtils.java
  builtins/build-plugin.xml
  builtins/build.xml
  build.xml
  bin/hive
  ql/src/test/results/clientpositive/show_functions.q.out
  ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionTask.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java
  ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
  pdk/scripts/build-plugin.xml


 add a new builtins subproject
 -

 Key: HIVE-2523
 URL: https://issues.apache.org/jira/browse/HIVE-2523
 Project: Hive
  Issue Type: Improvement
  Components: UDF
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.9.0

 Attachments: D267.1.patch, D267.2.patch


 Now that we have a PDK, we can make it easier to add builtin functions to 
 Hive by putting them in a plugin which automatically gets loaded by Hive.  
 This issue will add the necessary framework and one example function; then 
 new functions can be added here, and over time we could migrate old ones here 
 if desired.
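
 For illustration, a sketch of what a function living in the new builtins
 subproject might look like. The package name follows the AFFECTED FILES list
 above; the class name and body are invented for this example (UDAFUnionMap.java
 plays this role in the actual patch):

   package org.apache.hive.builtins;

   import org.apache.hadoop.hive.ql.exec.Description;
   import org.apache.hadoop.hive.ql.exec.UDF;
   import org.apache.hadoop.io.Text;

   // Hypothetical example builtin, auto-loaded via the plugin mechanism.
   @Description(name = "hello_builtin",
       value = "_FUNC_(str) - example builtin that upper-cases its argument")
   public class UDFHelloBuiltin extends UDF {
     public Text evaluate(Text input) {
       if (input == null) {
         return null;
       }
       return new Text(input.toString().toUpperCase());
     }
   }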

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2561) Add an annotation to UDFs to allow them to specify additional FILE/JAR resources necessary for execution

2011-11-08 Thread Jonathan Chang (Created) (JIRA)
Add an annotation to UDFs to allow them to specify additional FILE/JAR 
resources necessary for execution


 Key: HIVE-2561
 URL: https://issues.apache.org/jira/browse/HIVE-2561
 Project: Hive
  Issue Type: New Feature
Reporter: Jonathan Chang
Assignee: Jonathan Chang


Oftentimes UDFs will have dependencies on external JARs/FILEs.  It makes sense 
for these to be encoded by the UDF (rather than having the caller remember the 
set of files that need to be ADDed).  Let's add an annotation to UDFs which 
will cause these resources to be auto-added.
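
A hypothetical sketch of what such an annotation could look like; the name and
fields below are invented purely to illustrate the proposal and do not exist in
Hive:

  import java.lang.annotation.ElementType;
  import java.lang.annotation.Retention;
  import java.lang.annotation.RetentionPolicy;
  import java.lang.annotation.Target;

  // Invented annotation, not part of Hive: declares resources that the
  // framework would ADD FILE / ADD JAR automatically before running the UDF.
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface UDFRequiredResources {
    String[] files() default {};
    String[] jars() default {};
  }

  // Usage (also hypothetical):
  //   @UDFRequiredResources(jars = {"geo-lookup.jar"}, files = {"geo.dat"})
  //   public class UDFGeoLookup extends UDF { ... }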

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2560:
--

Attachment: D327.3.patch

mareksapotafb updated the revision HIVE-2560 [jira] speed up Hive unit tests 
by configuring Derby to be non-durable.
Reviewers: JIRA

  Back to only the Ant change.

REVISION DETAIL
  https://reviews.facebook.net/D327

AFFECTED FILES
  build-common.xml


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch, D327.2.patch, D327.3.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146630#comment-13146630
 ] 

Phabricator commented on HIVE-2560:
---

mareksapotafb has commented on the revision HIVE-2560 [jira] speed up Hive 
unit tests by configuring Derby to be non-durable.

  The change in build-common was enough for Derby to pick it up, but 
unfortunately it didn't give any speedup.

REVISION DETAIL
  https://reviews.facebook.net/D327


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch, D327.2.patch, D327.3.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread John Sichi (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi resolved HIVE-2560.
--

Resolution: Invalid

Closing this as invalid since my guess on its usefulness was wrong.

 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch, D327.2.patch, D327.3.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2560) speed up Hive unit tests by configuring Derby to be non-durable

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146655#comment-13146655
 ] 

Phabricator commented on HIVE-2560:
---

mareksapotafb has abandoned the revision HIVE-2560 [jira] speed up Hive unit 
tests by configuring Derby to be non-durable.

REVISION DETAIL
  https://reviews.facebook.net/D327


 speed up Hive unit tests by configuring Derby to be non-durable
 ---

 Key: HIVE-2560
 URL: https://issues.apache.org/jira/browse/HIVE-2560
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Affects Versions: 0.9.0
Reporter: John Sichi
Assignee: Marek Sapota
 Attachments: D327.1.patch, D327.2.patch, D327.3.patch


 Try setting derby.system.durability=test to see if it can speed up metastore 
 writes while running Hive ant test.
 http://db.apache.org/derby/docs/10.1/tuning/rtunproperdurability.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2391) published POMs in Maven repo are incorrect

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146669#comment-13146669
 ] 

Hudson commented on HIVE-2391:
--

Integrated in Hive-trunk-h0.21 #1069 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1069/])
HIVE-2391. Published POMs in Maven repo are incorrect
(Carl Steinbach via jvs)

jvs : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199399
Files : 
* /hive/trunk/ant/build.xml
* /hive/trunk/ant/ivy.xml
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/build.xml
* /hive/trunk/cli/build.xml
* /hive/trunk/cli/ivy.xml
* /hive/trunk/cli/lib/README
* /hive/trunk/cli/lib/jline-0.9.94.LICENSE
* /hive/trunk/cli/lib/jline-0.9.94.jar
* /hive/trunk/common/build.xml
* /hive/trunk/common/ivy.xml
* /hive/trunk/contrib/build.xml
* /hive/trunk/contrib/ivy.xml
* /hive/trunk/contrib/src/test/queries/clientpositive/dboutput.q
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/hbase-handler/build.xml
* /hive/trunk/hbase-handler/ivy.xml
* /hive/trunk/hwi/build.xml
* /hive/trunk/hwi/ivy.xml
* /hive/trunk/ivy.xml
* /hive/trunk/ivy/common-configurations.xml
* /hive/trunk/ivy/ivysettings.xml
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/jdbc/build.xml
* /hive/trunk/jdbc/ivy.xml
* /hive/trunk/lib/README
* /hive/trunk/lib/asm-3.1.LICENSE
* /hive/trunk/lib/asm-3.1.jar
* /hive/trunk/lib/commons-collections-3.2.1.LICENSE
* /hive/trunk/lib/commons-collections-3.2.1.jar
* /hive/trunk/lib/commons-lang-2.4.LICENSE
* /hive/trunk/lib/commons-lang-2.4.jar
* /hive/trunk/lib/commons-logging-1.0.4.jar
* /hive/trunk/lib/commons-logging-api-1.0.4.jar
* /hive/trunk/lib/derby.LICENSE
* /hive/trunk/lib/derby.jar
* /hive/trunk/lib/json-LICENSE.txt
* /hive/trunk/lib/json-README.txt
* /hive/trunk/lib/json.jar
* /hive/trunk/lib/velocity-1.5.jar
* /hive/trunk/lib/velocity.LICENSE
* /hive/trunk/metastore/build.xml
* /hive/trunk/metastore/ivy.xml
* /hive/trunk/odbc/ivy.xml
* /hive/trunk/pdk/build.xml
* /hive/trunk/pdk/ivy.xml
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/ivy.xml
* /hive/trunk/ql/lib/README
* /hive/trunk/ql/lib/antlr-2.7.7.LICENSE
* /hive/trunk/ql/lib/antlr-2.7.7.jar
* /hive/trunk/ql/lib/antlr-3.0.1.LICENSE
* /hive/trunk/ql/lib/antlr-3.0.1.jar
* /hive/trunk/ql/lib/antlr-runtime-3.0.1.LICENSE
* /hive/trunk/ql/lib/antlr-runtime-3.0.1.jar
* /hive/trunk/ql/lib/stringtemplate-3.1b1.LICENSE
* /hive/trunk/ql/lib/stringtemplate-3.1b1.jar
* /hive/trunk/ql/src/test/queries/clientpositive/set_processor_namespaces.q
* /hive/trunk/ql/src/test/results/clientpositive/set_processor_namespaces.q.out
* /hive/trunk/serde/build.xml
* /hive/trunk/serde/ivy.xml
* /hive/trunk/service/build.xml
* /hive/trunk/service/ivy.xml
* /hive/trunk/shims/build.xml
* /hive/trunk/shims/ivy.xml


 published POMs in Maven repo are incorrect
 --

 Key: HIVE-2391
 URL: https://issues.apache.org/jira/browse/HIVE-2391
 Project: Hive
  Issue Type: Bug
  Components: Build Infrastructure
Affects Versions: 0.7.1
Reporter: Alejandro Abdelnur
Assignee: Carl Steinbach
Priority: Critical
 Fix For: 0.8.0

 Attachments: HIVE-2391.1.patch.txt, HIVE-2391.2.patch.txt, 
 HIVE-2391.3.patch.txt, HIVE-2391.4.patch.txt, HIVE-2391.5.patch.txt, 
 HIVE-2391.wip.1.patch.txt


 The Hive artifacts published in Apache Maven SNAPSHOTS repo are incorrect. 
 Dependencies are not complete.
 Even after adding ALL the Hive artifacts as dependencies, it is not possible 
 to compile a project using Hive JARs (I'm trying to integrate the Oozie Hive 
 Action using Apache Hive).
 As a reference, the Hive CDH POMs' dependencies could be used (using those 
 artifacts I'm able to compile/test/run Hive from within Oozie).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2561) Add an annotation to UDFs to allow them to specify additional FILE/JAR resources necessary for execution

2011-11-08 Thread Andrew T. Fiore (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146678#comment-13146678
 ] 

Andrew T. Fiore commented on HIVE-2561:
---

I like the idea of using an annotation to load or require files, but it would 
be awesome if there were a way to bundle or package dependency files and store 
them in a consistent but reconfigurable location.  That way we could avoid 
specifying them with (potentially brittle) absolute paths in the UDF source 
files.

 Add an annotation to UDFs to allow them to specify additional FILE/JAR 
 resources necessary for execution
 

 Key: HIVE-2561
 URL: https://issues.apache.org/jira/browse/HIVE-2561
 Project: Hive
  Issue Type: New Feature
Reporter: Jonathan Chang
Assignee: Jonathan Chang

 Oftentimes UDFs will have dependencies on external JARs/FILEs.  It makes 
 sense for these to be encoded by the UDF (rather than having the caller 
 remember the set of files that need to be ADDed).  Let's add an annotation to 
 UDFs which will cause these resources to be auto-added.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2561) Add an annotation to UDFs to allow them to specify additional FILE/JAR resources necessary for execution

2011-11-08 Thread Jonathan Chang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146680#comment-13146680
 ] 

Jonathan Chang commented on HIVE-2561:
--

Hm, how would you feel about something more general, then, like a method on the 
class that would be called?  That method could potentially access things like 
environment vars, config settings, etc. to dynamically specify the dependencies.

 Add an annotation to UDFs to allow them to specify additional FILE/JAR 
 resources necessary for execution
 

 Key: HIVE-2561
 URL: https://issues.apache.org/jira/browse/HIVE-2561
 Project: Hive
  Issue Type: New Feature
Reporter: Jonathan Chang
Assignee: Jonathan Chang

 Oftentimes UDFs will have dependencies on external JARs/FILEs.  It makes 
 sense for these to be encoded by the UDF (rather than having the caller 
 remember the set of files that need to be ADDed).  Let's add an annotation to 
 UDFs which will cause these resources to be auto-added.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2561) Add an annotation to UDFs to allow them to specify additional FILE/JAR resources necessary for execution

2011-11-08 Thread Andrew T. Fiore (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146697#comment-13146697
 ] 

Andrew T. Fiore commented on HIVE-2561:
---

That seems like a good idea. Do you mean a member method in the UDF class (or 
its children) or a method that a UDF class could call on something else to 
fetch the info?  That is, would UDFs pull these environment vars, etc., by 
calling some getter as needed, or would they be pushed to them by the Hive 
runtime (e.g., by calling a setter on a UDF class before its evaluate() is 
called)?  Is there already some way for any Hive class to get info from a 
central config store somewhere?

I'm unclear on what ADD JAR and ADD FILE in the Hive CLI do.  In my own 
tests with loading files from an absolute path, I can directly read (i.e., open 
a file handle) from an absolute path when running in local mode.  When the job 
is sent out to the cluster, this fails, though that might be because the 
cluster nodes don't mount the same share in the same way as my local machine.  
However, if I ADD FILE [absolute-path] from the CLI, then the UDF running on 
the cluster node can open a file handle with just the file name (i.e., no path).

Point being -- will the approach we're talking about work if the dependency 
files live at different absolute paths from the point of view of the CLI client 
machine and the cluster nodes?

 Add an annotation to UDFs to allow them to specify additional FILE/JAR 
 resources necessary for execution
 

 Key: HIVE-2561
 URL: https://issues.apache.org/jira/browse/HIVE-2561
 Project: Hive
  Issue Type: New Feature
Reporter: Jonathan Chang
Assignee: Jonathan Chang

 Oftentimes UDFs will have dependencies on external JARs/FILEs.  It makes 
 sense for these to be encoded by the UDF (rather than having the caller 
 remember the set of files that need to be ADDed).  Let's add an annotation to 
 UDFs which will cause these resources to be auto-added.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2561) Add an annotation to UDFs to allow them to specify additional FILE/JAR resources necessary for execution

2011-11-08 Thread Jonathan Chang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146703#comment-13146703
 ] 

Jonathan Chang commented on HIVE-2561:
--

I believe (and I will check this tomorrow) that UDFs have access to the config 
during the init phase, which happens on the local machine.  There is an 
outstanding JIRA to make the config available to the running UDFs, but it's not 
needed here since all the ADDs happen before execution.  I'll probably end up 
adding some utility functions to make getting these values easier.

So for ADD FILE and the like, you should not refer to the absolute path in your 
UDF.  In essence, ADD FILE copies those paths to some temp directory, which 
is the working directory of the UDF.
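
A minimal sketch of the pattern described above, assuming the caller (or,
eventually, the proposed annotation) has already run ADD FILE /some/path/geo.dat;
the UDF then opens the file by bare name from its working directory. The class
and file names here are hypothetical:

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.IOException;

  import org.apache.hadoop.hive.ql.exec.UDF;
  import org.apache.hadoop.io.Text;

  public class UDFGeoLookup extends UDF {
    public Text evaluate(Text key) {
      if (key == null) {
        return null;
      }
      // "geo.dat" was distributed via ADD FILE, so it sits in the task's
      // working directory on every node; no absolute path is needed.
      try (BufferedReader reader = new BufferedReader(new FileReader("geo.dat"))) {
        String line;
        while ((line = reader.readLine()) != null) {
          if (line.startsWith(key.toString() + "\t")) {
            return new Text(line);
          }
        }
      } catch (IOException e) {
        throw new RuntimeException("geo.dat not found in working directory", e);
      }
      return null;
    }
  }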

 Add an annotation to UDFs to allow them to specify additional FILE/JAR 
 resources necessary for execution
 

 Key: HIVE-2561
 URL: https://issues.apache.org/jira/browse/HIVE-2561
 Project: Hive
  Issue Type: New Feature
Reporter: Jonathan Chang
Assignee: Jonathan Chang

 Oftentimes UDFs will have dependencies on external JARs/FILEs.  It makes 
 sense for these to be encoded by the UDF (rather than having the caller 
 remember the set of files that need to be ADDed).  Let's add an annotation to 
 UDFs which will cause these resources to be auto-added.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2562) HIVE-2247 Changed the Thrift API causing compatibility issues.

2011-11-08 Thread Kevin Wilfong (Created) (JIRA)
HIVE-2247 Changed the Thrift API causing compatibility issues.
--

 Key: HIVE-2562
 URL: https://issues.apache.org/jira/browse/HIVE-2562
 Project: Hive
  Issue Type: Bug
Reporter: Kevin Wilfong
Assignee: Weiyan Wang


HIVE-2247 added a parameter to alter_partition in the Metastore Thrift API, 
which has been causing compatibility issues with some scripts.  We would like 
to change this to have two methods: one called alter_partition, which takes the 
old parameters, and one called something else (I'll leave the naming up to you), 
which takes the new parameters.  The implementation of the old method should just 
call the new method with null for the new parameter.

This will fix the compatibility issues.
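
A sketch of the delegation described above, using placeholder types so it
compiles standalone. The real code would use the Thrift-generated Partition and
context classes, and the issue deliberately leaves the new method's name open,
so the names below are invented assumptions:

  public class AlterPartitionCompatSketch {

    static class Partition {}            // stand-in for the Thrift Partition type
    static class EnvironmentContext {}   // stand-in for the newly added parameter type

    // Old signature, kept unchanged for existing scripts and clients.
    Partition alterPartition(String dbName, String tblName, Partition newPart) {
      // Delegate to the new method, passing null for the added parameter.
      return alterPartitionWithContext(dbName, tblName, newPart, null);
    }

    // New method carrying the extra parameter introduced by HIVE-2247.
    Partition alterPartitionWithContext(String dbName, String tblName,
        Partition newPart, EnvironmentContext context) {
      // The real implementation would update the metastore here.
      return newPart;
    }
  }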

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2556) upgrade script 008-HIVE-2246.mysql.sql contains syntax errors

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146712#comment-13146712
 ] 

Phabricator commented on HIVE-2556:
---

pauly has accepted the revision HIVE-2556 [jira] upgrade script 
008-HIVE-2246.mysql.sql contains syntax errors.

REVISION DETAIL
  https://reviews.facebook.net/D309


 upgrade script 008-HIVE-2246.mysql.sql contains syntax errors
 -

 Key: HIVE-2556
 URL: https://issues.apache.org/jira/browse/HIVE-2556
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Ning Zhang
Assignee: Ning Zhang
 Fix For: 0.8.0, 0.9.0

 Attachments: D309.1.patch, HIVE-2556.patch


 source script_name gives syntax errors. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2563) OutOfMemory errors when using dynamic partition inserts with large number of partitions

2011-11-08 Thread Evan Pollan (Created) (JIRA)
OutOfMemory errors when using dynamic partition inserts with large number of 
partitions
---

 Key: HIVE-2563
 URL: https://issues.apache.org/jira/browse/HIVE-2563
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.7.1
 Environment: Cloudera CDH3 Update 2 distro on Ubuntu 10.04 64 bit 
cluster nodes
Reporter: Evan Pollan


I'm trying to use dynamic partition inserts to mimic a legacy file generation 
process that creates a single file per combination of two record attributes, 
one with a low cardinality, and one with a high degree of cardinality.  In a 
small data set, I can do this successfully.  Using a larger data set on the 
same 11 node cluster, with a combined cardinality resulting in ~1600 
partitions, I get out of memory errors in the reduce phase 100% of the time.  

I'm running with the following settings, writing to a textfile-backed table 
with two partitions of type string:

SET hive.exec.compress.output=true; 
SET io.seqfile.compression.type=BLOCK;
SET mapred.max.map.failures.percent=100;
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions=1;
SET hive.exec.max.dynamic.partitions.pernode=1;

(I've also tried gzip compression with the same result)


Here's an example of the error:

2011-11-09 00:51:52,425 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: 
New Final Path: FS 
hdfs://ec2-50-19-131-121.compute-1.amazonaws.com/tmp/hive-hdfs/hive_2011-11-09_00-48-57_840_6003656718210084497/_tmp.-ext-1/requestday=2011-09-29/clientname=-JA/08_0.deflate
2011-11-09 00:51:52,461 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2011-11-09 00:51:52,464 FATAL org.apache.hadoop.mapred.Child: Error running 
child : java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2931)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:544)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:219)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:584)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:565)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:472)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:464)
at 
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat.getHiveRecordWriter(HiveIgnoreKeyTextOutputFormat.java:80)
at 
org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:247)
at 
org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:235)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:458)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutWriters(FileSinkOperator.java:599)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:539)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
at 
org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:959)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:798)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:724)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at 
org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:469)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2556) upgrade script 008-HIVE-2246.mysql.sql contains syntax errors

2011-11-08 Thread Paul Yang (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146715#comment-13146715
 ] 

Paul Yang commented on HIVE-2556:
-

+1 Will commit.

 upgrade script 008-HIVE-2246.mysql.sql contains syntax errors
 -

 Key: HIVE-2556
 URL: https://issues.apache.org/jira/browse/HIVE-2556
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Ning Zhang
Assignee: Ning Zhang
 Fix For: 0.8.0, 0.9.0

 Attachments: D309.1.patch, HIVE-2556.patch


 source script_name gives syntax errors. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2433) add DOAP file for Hive

2011-11-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2433:
--

Attachment: D333.1.patch

jsichi requested code review of HIVE-2433 [jira] add DOAP file for Hive.
Reviewers: JIRA

  Generated Hive DOAP file.

  http://www.apache.org/foundation/marks/pmcs.html#metadata

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D333

AFFECTED FILES
  doap_Hive.rdf

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/681/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 add DOAP file for Hive
 --

 Key: HIVE-2433
 URL: https://issues.apache.org/jira/browse/HIVE-2433
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi
 Attachments: D333.1.patch


 http://www.apache.org/foundation/marks/pmcs.html#metadata

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2556) upgrade script 008-HIVE-2246.mysql.sql contains syntax errors

2011-11-08 Thread Paul Yang (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Yang updated HIVE-2556:


  Resolution: Fixed
Release Note: Committed to trunk and branch 0.8.0. Thanks Ning!
  Status: Resolved  (was: Patch Available)

 upgrade script 008-HIVE-2246.mysql.sql contains syntax errors
 -

 Key: HIVE-2556
 URL: https://issues.apache.org/jira/browse/HIVE-2556
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Ning Zhang
Assignee: Ning Zhang
 Fix For: 0.8.0, 0.9.0

 Attachments: D309.1.patch, HIVE-2556.patch


 source script_name gives syntax errors. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2433) add DOAP file for Hive

2011-11-08 Thread John Sichi (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2433:
-

Attachment: D333.1.patch

 add DOAP file for Hive
 --

 Key: HIVE-2433
 URL: https://issues.apache.org/jira/browse/HIVE-2433
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi
 Attachments: D333.1.patch, D333.1.patch


 http://www.apache.org/foundation/marks/pmcs.html#metadata

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2563) OutOfMemory errors when using dynamic partition inserts with large number of partitions

2011-11-08 Thread Evan Pollan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146723#comment-13146723
 ] 

Evan Pollan commented on HIVE-2563:
---

By the way, the total number of records in this table (if I were able to insert 
successfully :) is just over 5 million.

 OutOfMemory errors when using dynamic partition inserts with large number of 
 partitions
 ---

 Key: HIVE-2563
 URL: https://issues.apache.org/jira/browse/HIVE-2563
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.7.1
 Environment: Cloudera CDH3 Update 2 distro on Ubuntu 10.04 64 bit 
 cluster nodes
Reporter: Evan Pollan

 I'm trying to use dynamic partition inserts to mimic a legacy file generation 
 process that creates a single file per combination of two record attributes, 
 one with a low cardinality, and one with a high degree of cardinality.  In a 
 small data set, I can do this successfully.  Using a larger data set on the 
 same 11 node cluster, with a combined cardinality resulting in ~1600 
 partitions, I get out of memory errors in the reduce phase 100% of the time.  
 I'm running with the following settings, writing to a textfile-backed table 
 with two partitions of type string:
 SET hive.exec.compress.output=true; 
 SET io.seqfile.compression.type=BLOCK;
 SET mapred.max.map.failures.percent=100;
 SET hive.exec.dynamic.partition=true;
 SET hive.exec.dynamic.partition.mode=nonstrict;
 SET hive.exec.max.dynamic.partitions=1;
 SET hive.exec.max.dynamic.partitions.pernode=1;
 (I've also tried gzip compression with the same result)
 Here's an example of the error:
 2011-11-09 00:51:52,425 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: 
 New Final Path: FS 
 hdfs://ec2-50-19-131-121.compute-1.amazonaws.com/tmp/hive-hdfs/hive_2011-11-09_00-48-57_840_6003656718210084497/_tmp.-ext-1/requestday=2011-09-29/clientname=-JA/08_0.deflate
 2011-11-09 00:51:52,461 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
 Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
 2011-11-09 00:51:52,464 FATAL org.apache.hadoop.mapred.Child: Error running 
 child : java.lang.OutOfMemoryError: unable to create new native thread
   at java.lang.Thread.start0(Native Method)
   at java.lang.Thread.start(Thread.java:640)
   at 
 org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2931)
   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:544)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:219)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:584)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:565)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:472)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:464)
   at 
 org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat.getHiveRecordWriter(HiveIgnoreKeyTextOutputFormat.java:80)
   at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:247)
   at 
 org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:235)
   at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:458)
   at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutWriters(FileSinkOperator.java:599)
   at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:539)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:959)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:798)
   at 
 org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:724)
   at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
   at 
 org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
   at 
 org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:469)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 

[jira] [Created] (HIVE-2564) Set dbname at JDBC URL or properties

2011-11-08 Thread Shinsuke Sugaya (Created) (JIRA)
Set dbname at JDBC URL or properties


 Key: HIVE-2564
 URL: https://issues.apache.org/jira/browse/HIVE-2564
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.7.1
Reporter: Shinsuke Sugaya


The current Hive implementation ignores a database name in the JDBC URL, 
though we can set it by executing a "use DBNAME" statement.
I think it would be better to also allow specifying a database name in the JDBC 
URL or database properties.
Therefore, I'll attach the patch.
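
For context, a minimal sketch of the current workaround on Hive 0.7.1: since the
database segment of the URL is ignored, the client switches databases explicitly
after connecting. Host, port, and database name below are placeholders:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  public class HiveUseDbWorkaround {
    public static void main(String[] args) throws Exception {
      Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
      Connection conn =
          DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
      Statement stmt = conn.createStatement();
      stmt.execute("USE mydb");   // what the patch would let the URL express directly
      // ... run queries against mydb ...
      stmt.close();
      conn.close();
    }
  }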


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2564) Set dbname at JDBC URL or properties

2011-11-08 Thread Shinsuke Sugaya (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinsuke Sugaya updated HIVE-2564:
--

Attachment: hive-2564.patch

Attached patch.

 Set dbname at JDBC URL or properties
 

 Key: HIVE-2564
 URL: https://issues.apache.org/jira/browse/HIVE-2564
 Project: Hive
  Issue Type: Improvement
  Components: JDBC
Affects Versions: 0.7.1
Reporter: Shinsuke Sugaya
 Attachments: hive-2564.patch


 The current Hive implementation ignores a database name in the JDBC URL, 
 though we can set it by executing a "use DBNAME" statement.
 I think it would be better to also allow specifying a database name in the JDBC 
 URL or database properties.
 Therefore, I'll attach the patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2246) Dedupe tables' column schemas from partitions in the metastore db

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146787#comment-13146787
 ] 

Hudson commented on HIVE-2246:
--

Integrated in Hive-0.8.0-SNAPSHOT-h0.21 #87 (See 
[https://builds.apache.org/job/Hive-0.8.0-SNAPSHOT-h0.21/87/])
HIVE-2556. upgrade script 008-HIVE-2246.mysql.sql contains syntax errors. 
(Ning Zhang via pauly)

- begin *PUBLIC* platform impact section -
Bugzilla: #
- end platform impact -

pauly : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199595
Files : 
* 
/hive/branches/branch-0.8/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql


 Dedupe tables' column schemas from partitions in the metastore db
 -

 Key: HIVE-2246
 URL: https://issues.apache.org/jira/browse/HIVE-2246
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sohan Jain
Assignee: Sohan Jain
 Fix For: 0.8.0

 Attachments: HIVE-2246.2.patch, HIVE-2246.3.patch, HIVE-2246.4.patch, 
 HIVE-2246.8.patch


 Note: this patch proposes a schema change, and is therefore incompatible with 
 the current metastore.
 We can re-organize the JDO models to reduce space usage to keep the metastore 
 scalable for the future.  Currently, partitions are the fastest growing 
 objects in the metastore, and the metastore keeps a separate copy of the 
 columns list for each partition.  We can normalize the metastore db by 
 decoupling Columns from Storage Descriptors and not storing duplicate lists 
 of the columns for each partition. 
 An idea is to create an additional level of indirection with a Column 
 Descriptor that has a list of columns.  A table has a reference to its 
 latest Column Descriptor (note: a table may have more than one Column 
 Descriptor in the case of schema evolution).  Partitions and Indexes can 
 reference the same Column Descriptors as their parent table.
 Currently, the COLUMNS table in the metastore has roughly (number of 
 partitions + number of tables) * (average number of columns per table) rows.  
 We can reduce this to (number of tables) * (average number of columns per 
 table) rows, while incurring a small cost proportional to the number of 
 tables to store the Column Descriptors.
 Please see the latest review board for additional implementation details.
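 As a rough illustration of the savings (numbers invented for this example): with 
 1,000 tables averaging 20 columns each and 100 partitions per table, the current 
 layout stores about (100,000 + 1,000) * 20 = 2,020,000 COLUMNS rows, whereas the 
 deduplicated layout needs only 1,000 * 20 = 20,000 rows plus on the order of 
 1,000 Column Descriptors.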

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2556) upgrade script 008-HIVE-2246.mysql.sql contains syntax errors

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146786#comment-13146786
 ] 

Hudson commented on HIVE-2556:
--

Integrated in Hive-0.8.0-SNAPSHOT-h0.21 #87 (See 
[https://builds.apache.org/job/Hive-0.8.0-SNAPSHOT-h0.21/87/])
HIVE-2556. upgrade script 008-HIVE-2246.mysql.sql contains syntax errors. 
(Ning Zhang via pauly)

- begin *PUBLIC* platform impact section -
Bugzilla: #
- end platform impact -

pauly : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199595
Files : 
* 
/hive/branches/branch-0.8/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql


 upgrade script 008-HIVE-2246.mysql.sql contains syntax errors
 -

 Key: HIVE-2556
 URL: https://issues.apache.org/jira/browse/HIVE-2556
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Ning Zhang
Assignee: Ning Zhang
 Fix For: 0.8.0, 0.9.0

 Attachments: D309.1.patch, HIVE-2556.patch


 source script_name gives syntax errors. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2246) Dedupe tables' column schemas from partitions in the metastore db

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146794#comment-13146794
 ] 

Hudson commented on HIVE-2246:
--

Integrated in Hive-trunk-h0.21 #1070 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1070/])
HIVE-2556. upgrade script 008-HIVE-2246.mysql.sql contains syntax errors. 
(Ning Zhang via pauly)

pauly : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199593
Files : 
* /hive/trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql


 Dedupe tables' column schemas from partitions in the metastore db
 -

 Key: HIVE-2246
 URL: https://issues.apache.org/jira/browse/HIVE-2246
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Reporter: Sohan Jain
Assignee: Sohan Jain
 Fix For: 0.8.0

 Attachments: HIVE-2246.2.patch, HIVE-2246.3.patch, HIVE-2246.4.patch, 
 HIVE-2246.8.patch


 Note: this patch proposes a schema change, and is therefore incompatible with 
 the current metastore.
 We can re-organize the JDO models to reduce space usage to keep the metastore 
 scalable for the future.  Currently, partitions are the fastest growing 
 objects in the metastore, and the metastore keeps a separate copy of the 
 columns list for each partition.  We can normalize the metastore db by 
 decoupling Columns from Storage Descriptors and not storing duplicate lists 
 of the columns for each partition. 
 An idea is to create an additional level of indirection with a Column 
 Descriptor that has a list of columns.  A table has a reference to its 
 latest Column Descriptor (note: a table may have more than one Column 
 Descriptor in the case of schema evolution).  Partitions and Indexes can 
 reference the same Column Descriptors as their parent table.
 Currently, the COLUMNS table in the metastore has roughly (number of 
 partitions + number of tables) * (average number of columns per table) rows.  
 We can reduce this to (number of tables) * (average number of columns per 
 table) rows, while incurring a small cost proportional to the number of 
 tables to store the Column Descriptors.
 Please see the latest review board for additional implementation details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2556) upgrade script 008-HIVE-2246.mysql.sql contains syntax errors

2011-11-08 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146793#comment-13146793
 ] 

Hudson commented on HIVE-2556:
--

Integrated in Hive-trunk-h0.21 #1070 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1070/])
HIVE-2556. upgrade script 008-HIVE-2246.mysql.sql contains syntax errors. 
(Ning Zhang via pauly)

pauly : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1199593
Files : 
* /hive/trunk/metastore/scripts/upgrade/mysql/008-HIVE-2246.mysql.sql


 upgrade script 008-HIVE-2246.mysql.sql contains syntax errors
 -

 Key: HIVE-2556
 URL: https://issues.apache.org/jira/browse/HIVE-2556
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.8.0
Reporter: Ning Zhang
Assignee: Ning Zhang
 Fix For: 0.8.0, 0.9.0

 Attachments: D309.1.patch, HIVE-2556.patch


 source script_name gives syntax errors. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-trunk-h0.21 - Build # 1070 - Failure

2011-11-08 Thread Apache Jenkins Server
Changes for Build #1070
[pauly] HIVE-2556. upgrade script 008-HIVE-2246.mysql.sql contains syntax 
errors. (Ning Zhang via pauly)




5 tests failed.
REGRESSION:  
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisherOneStat

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.Utilities.prepareWithRetry(Utilities.java:2176)
at 
org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsAggregator.cleanUp(JDBCStatsAggregator.java:215)
at 
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.setUp(TestStatsPublisherEnhanced.java:60)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)


REGRESSION:  
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisher

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.Utilities.prepareWithRetry(Utilities.java:2176)
at 
org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsAggregator.cleanUp(JDBCStatsAggregator.java:215)
at 
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.setUp(TestStatsPublisherEnhanced.java:60)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)


REGRESSION:  
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisherMultipleUpdates

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.Utilities.prepareWithRetry(Utilities.java:2176)
at 
org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsAggregator.cleanUp(JDBCStatsAggregator.java:215)
at 
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.setUp(TestStatsPublisherEnhanced.java:60)
at junit.framework.TestCase.runBare(TestCase.java:132)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)


REGRESSION:  
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.testStatsPublisherMultipleUpdatesSubsetStatistics

Error Message:
null

Stack Trace:
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.Utilities.prepareWithRetry(Utilities.java:2176)
at 
org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsAggregator.cleanUp(JDBCStatsAggregator.java:215)
at 
org.apache.hadoop.hive.ql.exec.TestStatsPublisherEnhanced.setUp(TestStatsPublisherEnhanced.java:60)
at junit.framework.TestCase.runBare(TestCase.java:132)
at 

[jira] [Commented] (HIVE-2553) Use hashing instead of list traversal for IN operator for primitive types

2011-11-08 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146808#comment-13146808
 ] 

jirapos...@reviews.apache.org commented on HIVE-2553:
-


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2733/#review3123
---



trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java
https://reviews.apache.org/r/2733/#comment6938

Can you confirm if ConstantObjectInspector is supported by all 
non-primitive types? 

Also it would be nice to add more unit tests for cases where IN() takes a list 
of all supported primitive constants as parameters.


- Ning
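
To make the idea under review concrete, a standalone sketch (not the actual
GenericUDFIn patch): when every right-hand argument of IN() is a primitive
constant, a HashSet can be built once at initialization and probed per row,
instead of traversing the list of values for each row:

  import java.util.Arrays;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Set;

  public class InHashingSketch {

    private final Set<Object> constantSet;   // built once, at "initialize" time
    private final List<Object> constantList; // fallback: per-row list traversal

    InHashingSketch(List<Object> constants, boolean allPrimitiveConstants) {
      this.constantList = constants;
      this.constantSet =
          allPrimitiveConstants ? new HashSet<Object>(constants) : null;
    }

    boolean evaluate(Object value) {
      if (constantSet != null) {
        return constantSet.contains(value);   // O(1) expected lookup
      }
      return constantList.contains(value);    // original O(n) behaviour
    }

    public static void main(String[] args) {
      InHashingSketch in =
          new InHashingSketch(Arrays.<Object>asList(1, 3, 5, 7, 9), true);
      System.out.println(in.evaluate(5));  // true
      System.out.println(in.evaluate(4));  // false
    }
  }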


On 2011-11-08 18:03:51, Robert Surówka wrote:
bq.  
bq.  ---
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/2733/
bq.  ---
bq.  
bq.  (Updated 2011-11-08 18:03:51)
bq.  
bq.  
bq.  Review request for Ning Zhang.
bq.  
bq.  
bq.  Summary
bq.  ---
bq.  
bq.  Introduction of Hashing for IN operator for constant values
bq.  
bq.  
bq.  This addresses bug HIVE-2553.
bq.  https://issues.apache.org/jira/browse/HIVE-2553
bq.  
bq.  
bq.  Diffs
bq.  -
bq.  
bq.trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TableScanOperator.java 
1199066 
bq.
trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFIn.java 
1199066 
bq.
trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/StructTypeInfo.java 
1199066 
bq.  
bq.  Diff: https://reviews.apache.org/r/2733/diff
bq.  
bq.  
bq.  Testing
bq.  ---
bq.  
bq.  Worked on some sample queries
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Robert
bq.  
bq.



 Use hashing instead of list traversal for IN operator for primitive types
 -

 Key: HIVE-2553
 URL: https://issues.apache.org/jira/browse/HIVE-2553
 Project: Hive
  Issue Type: Improvement
Reporter: Robert Surówka
Assignee: Robert Surówka
Priority: Minor
 Attachments: HIVE-2553.1.patch, HIVE-2553.2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-2565) Add Java linter to Hive

2011-11-08 Thread Marek Sapota (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek Sapota reassigned HIVE-2565:
--

Assignee: Marek Sapota

 Add Java linter to Hive
 ---

 Key: HIVE-2565
 URL: https://issues.apache.org/jira/browse/HIVE-2565
 Project: Hive
  Issue Type: Bug
Reporter: Marek Sapota
Assignee: Marek Sapota

 Add a linter that will be run at `arc diff` and will check for too long 
 lines, trailing whitespace, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2565) Add Java linter to Hive

2011-11-08 Thread Marek Sapota (Created) (JIRA)
Add Java linter to Hive
---

 Key: HIVE-2565
 URL: https://issues.apache.org/jira/browse/HIVE-2565
 Project: Hive
  Issue Type: Bug
Reporter: Marek Sapota


Add a linter that will be run at `arc diff` and will check for too long lines, 
trailing whitespace, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2565) Add Java linter to Hive

2011-11-08 Thread Phabricator (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2565:
--

Attachment: D345.1.patch

mareksapotafb requested code review of HIVE-2565 [jira] Add Java linter to 
Hive.
Reviewers: JIRA

  Alter .arcconfig to use JavaLintEngine

  Add a linter that will be run at `arc diff` and will check for too long 
lines, trailing whitespace, etc.

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D345

AFFECTED FILES
  .arcconfig

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/693/

Tip: use the X-Herald-Rules header to filter Herald messages in your client.


 Add Java linter to Hive
 ---

 Key: HIVE-2565
 URL: https://issues.apache.org/jira/browse/HIVE-2565
 Project: Hive
  Issue Type: Bug
Reporter: Marek Sapota
Assignee: Marek Sapota
 Attachments: D345.1.patch


 Add a linter that will be run at `arc diff` and will check for too long 
 lines, trailing whitespace, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2565) Add Java linter to Hive

2011-11-08 Thread Phabricator (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13146830#comment-13146830
 ] 

Phabricator commented on HIVE-2565:
---

mareksapotafb has added reviewers to the revision HIVE-2565 [jira] Add Java 
linter to Hive.
Added Reviewers: jsichi

REVISION DETAIL
  https://reviews.facebook.net/D345


 Add Java linter to Hive
 ---

 Key: HIVE-2565
 URL: https://issues.apache.org/jira/browse/HIVE-2565
 Project: Hive
  Issue Type: Bug
Reporter: Marek Sapota
Assignee: Marek Sapota
 Attachments: D345.1.patch


 Add a linter that will be run at `arc diff` and will check for too long 
 lines, trailing whitespace, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira