Hive-trunk-h0.21 - Build # 943 - Failure

2011-09-08 Thread Apache Jenkins Server
Changes for Build #943
[heyongqiang] HIVE-2429: skip corruption bug that cause data not decompressed 
(Ramkumar Vadali via He Yongqiang)




2 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_smb_mapjoin_8

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try "ant test ... 
-Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
more logs.
at junit.framework.Assert.fail(Assert.java:47)
at 
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_smb_mapjoin_8(TestMinimrCliDriver.java:578)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:154)
at junit.framework.TestCase.runBare(TestCase.java:127)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at junit.framework.TestSuite.runTest(TestSuite.java:208)
at junit.framework.TestSuite.run(TestSuite.java:203)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)


REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try "ant test ... 
-Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
more logs.
at junit.framework.Assert.fail(Assert.java:47)
at 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:7924)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:154)
at junit.framework.TestCase.runBare(TestCase.java:127)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at junit.framework.TestSuite.runTest(TestSuite.java:208)
at junit.framework.TestSuite.run(TestSuite.java:203)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #943)

Status: Failure

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/943/ to 
view the results.


[jira] [Commented] (HIVE-2429) skip corruption bug that cause data not decompressed

2011-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100924#comment-13100924
 ] 

Hudson commented on HIVE-2429:
--

Integrated in Hive-trunk-h0.21 #943 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/943/])
HIVE-2429: skip corruption bug that cause data not decompressed (Ramkumar 
Vadali via He Yongqiang)

heyongqiang : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166922
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java
* /hive/trunk/ql/src/test/queries/clientpositive/rcfile_toleratecorruptions.q
* 
/hive/trunk/ql/src/test/results/clientpositive/rcfile_toleratecorruptions.q.out


> skip corruption bug that cause data not decompressed
> 
>
> Key: HIVE-2429
> URL: https://issues.apache.org/jira/browse/HIVE-2429
> Project: Hive
>  Issue Type: Bug
>Reporter: He Yongqiang
>Assignee: Ramkumar Vadali
> Attachments: HIVE-2429.patch
>
>
> This is a regression of https://issues.apache.org/jira/browse/HIVE-2404

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-1694) Accelerate GROUP BY execution using indexes

2011-09-08 Thread Prajakta Kalmegh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prajakta Kalmegh updated HIVE-1694:
---

Attachment: HIVE-1694.6.patch

> Accelerate GROUP BY execution using indexes
> ---
>
> Key: HIVE-1694
> URL: https://issues.apache.org/jira/browse/HIVE-1694
> Project: Hive
>  Issue Type: New Feature
>  Components: Indexing, Query Processor
>Affects Versions: 0.7.0
>Reporter: Nikhil Deshpande
>Assignee: Prajakta Kalmegh
> Attachments: HIVE-1694.1.patch.txt, HIVE-1694.2.patch.txt, 
> HIVE-1694.3.patch.txt, HIVE-1694.4.patch, HIVE-1694.5.patch, 
> HIVE-1694.6.patch, HIVE-1694_2010-10-28.diff, demo_q1.hql, demo_q2.hql
>
>
> The index building patch (HIVE-417) is checked into trunk. This JIRA issue 
> tracks support for indexes in the Hive compiler & execution engine for SELECT 
> queries.
> This is in ref. to John's comment at
> https://issues.apache.org/jira/browse/HIVE-417?focusedCommentId=12884869&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12884869
> on creating separate JIRA issue for tracking index usage in optimizer & query 
> execution.
> The aim of this effort is to use indexes to accelerate query execution (for a 
> certain class of queries). E.g.
> - Filters and range scans (already being worked on by He Yongqiang as part of 
> HIVE-417?)
> - Joins (index based joins)
> - Group By, Order By and other misc cases
> The proposal is multi-step:
> 1. Building index based operators, compiler and execution engine changes
> 2. Optimizer enhancements (e.g. cost-based optimizer to compare and choose 
> between index scans, full table scans etc.)
> This JIRA initially focuses on the first step. This JIRA is expected to hold 
> the information about index based plans & operator implementations for above 
> mentioned cases. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1694) Accelerate GROUP BY execution using indexes

2011-09-08 Thread jirapos...@reviews.apache.org (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100839#comment-13100839
 ] 

jirapos...@reviews.apache.org commented on HIVE-1694:
-


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1194/
---

(Updated 2011-09-09 01:14:16.218940)


Review request for hive and John Sichi.


Summary
---

This patch defines a new AggregateIndexHandler, which is used to optimize 
the query plan for GROUP BY queries. 


This addresses bug HIVE-1694.
https://issues.apache.org/jira/browse/HIVE-1694
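
For illustration only, here is a self-contained toy (plain Java, not Hive code; the 
class and method names below are hypothetical) of what an aggregate index buys a 
GROUP BY query: a per-key count is precomputed once at index-rebuild time, and the 
query is then answered from that small side table instead of rescanning the base 
data. The AggregateIndexHandler plays the analogous role inside the Hive optimizer.

{noformat}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration only -- not Hive's AggregateIndexHandler.
// Shows why "SELECT key, count(key) FROM t GROUP BY key" can be served from a
// precomputed aggregate index instead of a full scan of the base table.
public final class AggregateIndexToy {

  /** Builds the "index": key -> count, computed once at index-rebuild time. */
  static Map<String, Long> buildCountIndex(List<String> baseTableKeys) {
    Map<String, Long> counts = new HashMap<>();
    for (String key : baseTableKeys) {
      counts.merge(key, 1L, Long::sum);
    }
    return counts;
  }

  public static void main(String[] args) {
    List<String> baseTableKeys = Arrays.asList("a", "b", "a", "c", "a", "b");

    // Rebuild step: one pass over the base data.
    Map<String, Long> countIndex = buildCountIndex(baseTableKeys);

    // Query time: the GROUP BY result is read straight from the index, which
    // is the kind of rewrite an aggregate-index-aware optimizer performs.
    countIndex.forEach((key, cnt) -> System.out.println(key + "\t" + cnt));
  }
}
{noformat}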


Diffs (updated)
-

  ql/src/test/results/clientpositive/ql_rewrite_gbtoidx.q.out PRE-CREATION 
  ql/src/test/queries/clientpositive/ql_rewrite_gbtoidx.q PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereTaskDispatcher.java
 699519b 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereProcessor.java
 dcdfb9e 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndexCtx.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndex.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteParseContextGenerator.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteGBUsingIndex.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
 PRE-CREATION 
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 66ee0be 
  data/files/lineitem.txt PRE-CREATION 
  data/files/tbl.txt PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/index/AggregateIndexHandler.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndex.java 591c9ff 
  ql/src/java/org/apache/hadoop/hive/ql/index/bitmap/BitmapIndexHandler.java 
5053576 
  ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java 
7a00c00 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java bec8787 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/IndexUtils.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java 590d69a 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyCtx.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/1194/diff


Testing
---


Thanks,

Prajakta



> Accelerate GROUP BY execution using indexes
> ---
>
> Key: HIVE-1694
> URL: https://issues.apache.org/jira/browse/HIVE-1694
> Project: Hive
>  Issue Type: New Feature
>  Components: Indexing, Query Processor
>Affects Versions: 0.7.0
>Reporter: Nikhil Deshpande
>Assignee: Prajakta Kalmegh
> Attachments: HIVE-1694.1.patch.txt, HIVE-1694.2.patch.txt, 
> HIVE-1694.3.patch.txt, HIVE-1694.4.patch, HIVE-1694.5.patch, 
> HIVE-1694_2010-10-28.diff, demo_q1.hql, demo_q2.hql
>
>
> The index building patch (HIVE-417) is checked into trunk. This JIRA issue 
> tracks support for indexes in the Hive compiler & execution engine for SELECT 
> queries.
> This is in ref. to John's comment at
> https://issues.apache.org/jira/browse/HIVE-417?focusedCommentId=12884869&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12884869
> on creating separate JIRA issue for tracking index usage in optimizer & query 
> execution.
> The aim of this effort is to use indexes to accelerate query execution (for a 
> certain class of queries). E.g.
> - Filters and range scans (already being worked on by He Yongqiang as part of 
> HIVE-417?)
> - Joins (index based joins)
> - Group By, Order By and other misc cases
> The proposal is multi-step:
> 1. Building index based operators, compiler and execution engine changes
> 2. Optimizer enhancements (e.g. cost-based optimizer to compare and choose 
> between index scans, full table scans etc.)
> This JIRA initially focuses on the first step. This JIRA is expected to hold 
> the information about index based plans & operator implementations for above 
> mentioned cases. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Review Request: HIVE-1694: Accelerate GROUP BY execution using indexes

2011-09-08 Thread Prajakta Kalmegh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1194/
---

(Updated 2011-09-09 01:14:16.218940)


Review request for hive and John Sichi.


Summary
---

This patch defines a new AggregateIndexHandler, which is used to optimize 
the query plan for GROUP BY queries. 


This addresses bug HIVE-1694.
https://issues.apache.org/jira/browse/HIVE-1694


Diffs (updated)
-

  ql/src/test/results/clientpositive/ql_rewrite_gbtoidx.q.out PRE-CREATION 
  ql/src/test/queries/clientpositive/ql_rewrite_gbtoidx.q PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereTaskDispatcher.java
 699519b 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/index/IndexWhereProcessor.java
 dcdfb9e 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndexCtx.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteQueryUsingAggregateIndex.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteParseContextGenerator.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteGBUsingIndex.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
 PRE-CREATION 
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 66ee0be 
  data/files/lineitem.txt PRE-CREATION 
  data/files/tbl.txt PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/index/AggregateIndexHandler.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/index/HiveIndex.java 591c9ff 
  ql/src/java/org/apache/hadoop/hive/ql/index/bitmap/BitmapIndexHandler.java 
5053576 
  ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java 
7a00c00 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java bec8787 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/IndexUtils.java PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java 590d69a 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyCtx.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/1194/diff


Testing
---


Thanks,

Prajakta



[jira] [Updated] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2182:
-

Status: Open  (was: Patch Available)

I am getting the failure below when running the new test with the latest trunk.  
Did you update the .q.out?

{noformat}
[junit] diff -a -I file: -I pfile: -I hdfs: -I /tmp/ -I invalidscheme: -I 
lastUpdateTime -I lastAccessTime -I [Oo]wner -I CreateTime -I LastAccessTime -I 
Location -I LOCATION ' -I transient_lastDdlTime -I last_modified_ -I 
java.lang.RuntimeException -I at org -I at sun -I at java -I at junit -I Caused 
by: -I LOCK_QUERYID: -I grantTime -I [.][.][.] [0-9]* more -I job_[0-9]*_[0-9]* 
-I USING 'java -cp 
/data/users/jsichi/open/test-trunk/build/ql/test/logs/clientnegative/udfnull.q.out
 
/data/users/jsichi/open/test-trunk/ql/src/test/results/clientnegative/udfnull.q.out
[junit] 8,18c8
[junit] < PREHOOK: Output: 
file:/tmp/jsichi/hive_2011-09-08_16-48-29_269_6749666372366482183/-mr-1
[junit] < Execution failed with exit status: 2
[junit] < Obtaining error information
[junit] < 
[junit] < Task failed!
[junit] < Task ID:
[junit] <   Stage-1
[junit] < 
[junit] < Logs:
[junit] < 
[junit] < /data/users/jsichi/open/test-trunk/build/ql/tmp//hive.log
[junit] ---
[junit] > PREHOOK: Output: 
file:/tmp/root/hive_2011-05-25_10-05-57_126_4632621650656424226/-mr-1
[junit] Exception: Client execution results failed with error code = 1
[junit] See build/ql/tmp/hive.log, or try "ant test ... 
-Dtest.silent=false" to get more logs.
[junit] Cleaning up TestNegativeCliDriver
[junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 5.496 sec
[junit] Test org.apache.hadoop.hive.cli.TestNegativeCliDriver FAILED
{noformat}


> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.1.patch, HIVE-2182.2.patch, HIVE-2182.patch
>
>
> To use a UDF, the following steps were executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we miss the first step (add jar) and execute only the 
> remaining steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> then the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java

[jira] [Updated] (HIVE-2402) Function like with empty string is throwing null pointer exception

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2402:
-

   Resolution: Fixed
Fix Version/s: 0.9.0
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks Chinna!


> Function like with empty string is throwing null pointer exception
> --
>
> Key: HIVE-2402
> URL: https://issues.apache.org/jira/browse/HIVE-2402
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Fix For: 0.9.0
>
> Attachments: HIVE-2402.1.patch, HIVE-2402.patch
>
>
> select emp.ename from emp where ename like ''
> This query is throwing null pointer exception

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2439) Upgrade antlr version to 3.4

2011-09-08 Thread Ashutosh Chauhan (JIRA)
Upgrade antlr version to 3.4


 Key: HIVE-2439
 URL: https://issues.apache.org/jira/browse/HIVE-2439
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.8.0
Reporter: Ashutosh Chauhan


Upgrade antlr version to 3.4

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HIVE-2429) skip corruption bug that cause data not decompressed

2011-09-08 Thread He Yongqiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Yongqiang resolved HIVE-2429.


Resolution: Fixed

committed, thanks Ramkumar Vadali!

> skip corruption bug that cause data not decompressed
> 
>
> Key: HIVE-2429
> URL: https://issues.apache.org/jira/browse/HIVE-2429
> Project: Hive
>  Issue Type: Bug
>Reporter: He Yongqiang
>Assignee: Ramkumar Vadali
> Attachments: HIVE-2429.patch
>
>
> This is a regression of https://issues.apache.org/jira/browse/HIVE-2404

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Apache Software Foundation Branding Requirements

2011-09-08 Thread John Sichi
Hey, the Apache Hive project is responsible for coming into compliance with 
these:

http://www.apache.org/foundation/marks/pmcs.html

I've created a JIRA issue for tracking this, with sub-tasks for the various 
work items:

https://issues.apache.org/jira/browse/HIVE-2432

Our quarterly reports from the PMC to the ASF board will continue to include 
status updates on these until they are all resolved.

If you are interested in helping out with any of that, please assign the 
corresponding sub-tasks to yourself.

JVS



[jira] [Created] (HIVE-2438) add trademark attributions to Hive homepage

2011-09-08 Thread John Sichi (JIRA)
add trademark attributions to Hive homepage
---

 Key: HIVE-2438
 URL: https://issues.apache.org/jira/browse/HIVE-2438
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi


http://www.apache.org/foundation/marks/pmcs.html#attributions

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2437) update project website navigation links

2011-09-08 Thread John Sichi (JIRA)
update project website navigation links
---

 Key: HIVE-2437
 URL: https://issues.apache.org/jira/browse/HIVE-2437
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi


http://www.apache.org/foundation/marks/pmcs.html#navigation

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2435) Update project naming and description in Hive wiki

2011-09-08 Thread John Sichi (JIRA)
Update project naming and description in Hive wiki
--

 Key: HIVE-2435
 URL: https://issues.apache.org/jira/browse/HIVE-2435
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi
Assignee: John Sichi




--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2436) Update project naming and description in Hive website

2011-09-08 Thread John Sichi (JIRA)
Update project naming and description in Hive website
-

 Key: HIVE-2436
 URL: https://issues.apache.org/jira/browse/HIVE-2436
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi


http://www.apache.org/foundation/marks/pmcs.html#naming

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2435) Update project naming and description in Hive wiki

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2435:
-

Description: http://www.apache.org/foundation/marks/pmcs.html#naming

> Update project naming and description in Hive wiki
> --
>
> Key: HIVE-2435
> URL: https://issues.apache.org/jira/browse/HIVE-2435
> Project: Hive
>  Issue Type: Sub-task
>Reporter: John Sichi
>Assignee: John Sichi
>
> http://www.apache.org/foundation/marks/pmcs.html#naming

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2434) add a TM to Hive logo image

2011-09-08 Thread John Sichi (JIRA)
add a TM to Hive logo image
---

 Key: HIVE-2434
 URL: https://issues.apache.org/jira/browse/HIVE-2434
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi


http://www.apache.org/foundation/marks/pmcs.html#graphics

And maybe the feather?


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2433) add DOAP file for Hive

2011-09-08 Thread John Sichi (JIRA)
add DOAP file for Hive
--

 Key: HIVE-2433
 URL: https://issues.apache.org/jira/browse/HIVE-2433
 Project: Hive
  Issue Type: Sub-task
Reporter: John Sichi


http://www.apache.org/foundation/marks/pmcs.html#metadata

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HIVE-2432) Bring project into compliance with Apache Software Foundation Branding Requirements

2011-09-08 Thread John Sichi (JIRA)
Bring project into compliance with Apache Software Foundation Branding 
Requirements
---

 Key: HIVE-2432
 URL: https://issues.apache.org/jira/browse/HIVE-2432
 Project: Hive
  Issue Type: Improvement
Reporter: John Sichi
Assignee: John Sichi


http://www.apache.org/foundation/marks/pmcs.html

I will be creating sub-tasks for the various work items needed.


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-2217) add Query text for debugging in lock data

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi reassigned HIVE-2217:


Assignee: Jiayan Jiang

> add Query text for debugging in lock data
> -
>
> Key: HIVE-2217
> URL: https://issues.apache.org/jira/browse/HIVE-2217
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.7.1
>Reporter: Namit Jain
>Assignee: Jiayan Jiang
> Attachments: hive_diff2
>
>
> Currently, the queryId is stored in the lock data; 
> the query text would improve debuggability.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-2250) "DESCRIBE EXTENDED table_name" shows inconsistent compression information.

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi reassigned HIVE-2250:


Assignee: subramanian raghunathan

> "DESCRIBE EXTENDED table_name" shows inconsistent compression information.
> --
>
> Key: HIVE-2250
> URL: https://issues.apache.org/jira/browse/HIVE-2250
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Diagnosability
>Affects Versions: 0.7.0
> Environment: RHEL, Full Cloudera stack
>Reporter: Travis Powell
>Assignee: subramanian raghunathan
>Priority: Critical
> Attachments: HIVE-2250.patch
>
>
> Commands executed in this order:
> user@node # hive
> hive> SET hive.exec.compress.output=true; 
> hive> SET io.seqfile.compression.type=BLOCK;
> hive> CREATE TABLE table_name ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t' STORED AS SEQUENCEFILE;
> hive> CREATE TABLE staging_table ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t';
> hive> LOAD DATA LOCAL INPATH 'file:///root/input/' OVERWRITE INTO TABLE 
> staging_table;
> hive> INSERT OVERWRITE TABLE table_name SELECT * FROM staging_table;
> (Map reduce job to change to sequence file...)
> hive> DESCRIBE EXTENDED table_name;
> Detailed Table Information  Table(tableName:table_name, 
> dbName:benchmarking, owner:root, createTime:1309480053, lastAccessTime:0, 
> retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:session_key, 
> type:string, comment:null), FieldSchema(name:remote_address, type:string, 
> comment:null), FieldSchema(name:canister_lssn, type:string, comment:null), 
> FieldSchema(name:canister_session_id, type:bigint, comment:null), 
> FieldSchema(name:tltsid, type:string, comment:null), FieldSchema(name:tltuid, 
> type:string, comment:null), FieldSchema(name:tltvid, type:string, 
> comment:null), FieldSchema(name:canister_server, type:string, comment:null), 
> FieldSchema(name:session_timestamp, type:string, comment:null), 
> FieldSchema(name:session_duration, type:string, comment:null), 
> FieldSchema(name:hit_count, type:bigint, comment:null), 
> FieldSchema(name:http_user_agent, type:string, comment:null), 
> FieldSchema(name:extractid, type:bigint, comment:null), 
> FieldSchema(name:site_link, type:string, comment:null), FieldSchema(name:dt, 
> type:string, comment:null), FieldSchema(name:hour, type:int, comment:null)], 
> location:hdfs://hadoop2/user/hive/warehouse/benchmarking.db/table_name, 
> inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
> outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
> compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
> serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
> parameters:{serialization.format=   , field.delim=
> *** SEE ABOVE: Compression is set to FALSE, even though the contents of the 
> table are compressed.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-198) Parse errors report incorrectly.

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-198:


Status: Open  (was: Patch Available)

Could you add a test case, and also submit a review board request?

https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-ReviewProcess


> Parse errors report incorrectly.
> 
>
> Key: HIVE-198
> URL: https://issues.apache.org/jira/browse/HIVE-198
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: S. Alex Smith
>Assignee: Aviv Eyal
>  Labels: parse
> Attachments: PraseErrorMessage.patch
>
>
> The following two queries fail:
> CREATE TABLE output_table(userid, bigint);
> CREATE TABLE output_table(userid bigint, age int, sex string, location 
> string);
> each giving the error message "FAILED: Parse Error: line 1:16 mismatched 
> input 'TABLE' expecting KW_TEMPORARY"
> Although one might not catch it from the error message, the problem with the 
> first is that there is a comma between "userid" and "bigint", and the problem 
> with the second is that "location" is a reserved keyword.  Reported errors 
> should more accurately describe the nature of the error, such as "no type 
> given for column 'userid'" or "'location' is not a valid column name".

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HIVE-198) Parse errors report incorrectly.

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi reassigned HIVE-198:
---

Assignee: Aviv Eyal

> Parse errors report incorrectly.
> 
>
> Key: HIVE-198
> URL: https://issues.apache.org/jira/browse/HIVE-198
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: S. Alex Smith
>Assignee: Aviv Eyal
>  Labels: parse
> Attachments: PraseErrorMessage.patch
>
>
> The following two queries fail:
> CREATE TABLE output_table(userid, bigint);
> CREATE TABLE output_table(userid bigint, age int, sex string, location 
> string);
> each giving the error message "FAILED: Parse Error: line 1:16 mismatched 
> input 'TABLE' expecting KW_TEMPORARY"
> Although one might not catch it from the error message, the problem with the 
> first is that there is a comma between "userid" and "bigint", and the problem 
> with the second is that "location" is a reserved keyword.  Reported errors 
> should more accurately describe the nature of the error, such as "no type 
> given for column 'userid'" or "'location' is not a valid column name".

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-0.8.0-SNAPSHOT-h0.21 - Build # 14 - Fixed

2011-09-08 Thread Apache Jenkins Server
Changes for Build #13
[amareshwari] HIVE-2431. svn merge -r 1166527:1166528 from trunk


Changes for Build #14



All tests passed

The Apache Jenkins build system has built Hive-0.8.0-SNAPSHOT-h0.21 (build #14)

Status: Fixed

Check console output at 
https://builds.apache.org/job/Hive-0.8.0-SNAPSHOT-h0.21/14/ to view the results.


[jira] [Updated] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2182:
---

Status: Patch Available  (was: Open)

> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.1.patch, HIVE-2182.2.patch, HIVE-2182.patch
>
>
> To use a UDF, the following steps were executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we miss the first step (add jar) and execute only the 
> remaining steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> then the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107)
>... 31 more
> {noformat}
> Instead of a NullPointerException, it should throw a meaningful exception.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread jirapos...@reviews.apache.org (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100631#comment-13100631
 ] 

jirapos...@reviews.apache.org commented on HIVE-2182:
-


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1759/
---

Review request for hive and John Sichi.


Summary
---

While executing a UDF, if the implementation jar is not present in the MR 
classpath, a NullPointerException is thrown. Instead of the NullPointerException, 
throw a meaningful exception with the required details.


This addresses bug HIVE-2182.
https://issues.apache.org/jira/browse/HIVE-2182
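
As a rough sketch of the idea in this summary (not the committed GenericUDFBridge 
change; the class, method, and message below are hypothetical), the guard could 
resolve the UDF class up front and fail with a descriptive error when the jar was 
never added:

{noformat}
// Hypothetical sketch only -- not the actual patch.
// Idea: resolve the UDF implementation class eagerly and fail with a clear
// message when it is missing (e.g. because "add jar" was skipped), instead of
// letting a later null dereference surface as a bare NullPointerException.
public final class UdfResolverSketch {

  /** Returns the UDF implementation class, or throws a descriptive error. */
  public static Class<?> resolveUdfClass(String udfClassName) {
    try {
      return Class.forName(udfClassName, true,
          Thread.currentThread().getContextClassLoader());
    } catch (ClassNotFoundException e) {
      throw new IllegalStateException(
          "UDF class " + udfClassName + " not found on the task classpath. "
              + "Was the jar added with \"add jar\" before creating the "
              + "temporary function?", e);
    }
  }

  public static void main(String[] args) {
    // Simulates the failing scenario from the bug report: the jar providing
    // 'udf.Grade' was never added, so resolution fails with a clear message.
    resolveUdfClass("udf.Grade");
  }
}
{noformat}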


Diffs
-

  trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java 
1166865 
  trunk/ql/src/test/queries/clientnegative/udfnull.q PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/udfnull.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/1759/diff


Testing
---

Added unit tests


Thanks,

chinna



> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.1.patch, HIVE-2182.2.patch, HIVE-2182.patch
>
>
> To use a UDF, the following steps were executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we miss the first step (add jar) and execute only the 
> remaining steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> then the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107)
>... 31 more
> {noformat}
> Instead of a NullPointerException, it should throw a meaningful exception.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Review Request: HIVE-2182 Avoid null pointer exception when executing UDF

2011-09-08 Thread chinnarao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1759/
---

Review request for hive and John Sichi.


Summary
---

While executing a UDF, if the implementation jar is not present in the MR 
classpath, a NullPointerException is thrown. Instead of the NullPointerException, 
throw a meaningful exception with the required details.


This addresses bug HIVE-2182.
https://issues.apache.org/jira/browse/HIVE-2182


Diffs
-

  trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFBridge.java 
1166865 
  trunk/ql/src/test/queries/clientnegative/udfnull.q PRE-CREATION 
  trunk/ql/src/test/results/clientnegative/udfnull.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/1759/diff


Testing
---

Added unit tests


Thanks,

chinna



[jira] [Updated] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2182:
---

Attachment: HIVE-2182.2.patch

> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.1.patch, HIVE-2182.2.patch, HIVE-2182.patch
>
>
> To use a UDF, the following steps were executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we miss the first step (add jar) and execute only the 
> remaining steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> then the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107)
>... 31 more
> {noformat}
> Instead of a NullPointerException, it should throw a meaningful exception.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Hive-trunk-h0.21

2011-09-08 Thread John Sichi
The build name is a misnomer, but I don't think it has anything to do with the 
build behavior you are seeing.  Maybe the hadoop.security.version property 
override is not getting correctly inherited by the shims sub-build?

JVS

On Sep 6, 2011, at 2:01 AM, Amareshwari Sri Ramadasu wrote:

> Hi,
> 
> One question (might be dumb):
> Why does the build name have h0.21? It does not build with Hadoop 0.21.0, 
> it builds with 0.20.1.
> 
> I'm asking this because when I try to build Hive with the 0.23.0-SNAPSHOT 
> version, it happily says it found the dependency at the mirror and 
> downloads 0.20.3-CDH3-SNAPSHOT. So I was wondering whether it is for the same 
> reason that 0.23 is treated as 0.20.3. Also, this happens only in branch 0.7 and 
> not on trunk. Is there an issue that fixed this problem?
> 
> Here is the output for the command ant package :
> $ ant package -Dhadoop.version=0.23.0-SNAPSHOT 
> -Dhadoop.security.version=0.23.0-SNAPSHOT
> 
> ivy-retrieve-hadoop-source:
> [ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
> [ivy:retrieve] :: loading settings :: file = 
> /Users/amarsri/workspace/hive-0.7/ivy/ivysettings.xml
> [ivy:retrieve] :: resolving dependencies :: org.apache.hive#hive-shims;0.7.1
> [ivy:retrieve] confs: [default]
> [ivy:retrieve] found hadoop#core;0.23.0-SNAPSHOT in hadoop-source
> [ivy:retrieve] downloading 
> http://mirror.facebook.net/facebook/hive-deps/hadoop/core/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-0.20.3-CDH3-SNAPSHOT.tar.gz
>  ...
> [ivy:retrieve] [SUCCESSFUL ] 
> hadoop#core;0.23.0-SNAPSHOT!hadoop.tar.gz(source) (356259ms)
> [ivy:retrieve] :: resolution report :: resolve 10275ms :: artifacts dl 
> 356265ms
>-
>|  |modules||   artifacts   |
>|   conf   | number| search|dwnlded|evicted|| number|dwnlded|
>-
>|  default |   1   |   0   |   0   |   0   ||   1   |   1   |
>-
> [ivy:retrieve] :: retrieving :: org.apache.hive#hive-shims
> [ivy:retrieve] confs: [default]
> [ivy:retrieve] 1 artifacts copied, 0 already retrieved (56655kB/1654ms)
> 
> 
> Thanks
> Amareshwari
> 
> On 9/5/11 1:29 AM, "Apache Jenkins Server"  wrote:
> 
> Changes for Build #931
> [jvs] HIVE-1989. Recognize transitivity of predicates on join keys
> (Charles Chen via jvs)
> 
> 
> Changes for Build #932
> 
> Changes for Build #933
> 
> 
> 
> All tests passed
> 
> The Apache Jenkins build system has built Hive-trunk-h0.21 (build #933)
> 
> Status: Fixed
> 
> Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/933/ 
> to view the results.
> 



[jira] [Commented] (HIVE-2223) support grouping on complex types in Hive

2011-09-08 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100570#comment-13100570
 ] 

John Sichi commented on HIVE-2223:
--

Jonathan, fill in the bug field in Review Board with HIVE-2223 so that the 
comments from there will automatically get propagated here.


> support grouping on complex types in Hive
> -
>
> Key: HIVE-2223
> URL: https://issues.apache.org/jira/browse/HIVE-2223
> Project: Hive
>  Issue Type: New Feature
>Reporter: Kate Ting
>Assignee: Jonathan Chang
>Priority: Minor
> Attachments: HIVE-2223.patch
>
>
> Creating a query with a GROUP BY statement when an array type column is part 
> of the column list is not yet supported:
> CREATE TABLE test_group_by ( key INT, group INT, terms ARRAY);
> SELECT key, terms, count(group) FROM test_group_by GROUP BY key, terms;
> ...
> "Hash code on complex types not supported yet."
> java.lang.RuntimeException: Error while closing operators
> at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:232)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:356)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> at org.apache.hadoop.mapred.Child.main(Child.java:170)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: Hash code on complex types not supported yet.
> at 
> org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:799)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:462)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:470)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:470)
> at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:470)
> at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:211)
> ... 4 more
> Caused by: java.lang.RuntimeException: Hash code on complex types not 
> supported yet.
> at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.hashCode(ObjectInspectorUtils.java:348)
> at 
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:187)
> at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:386)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:598)
> at 
> org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:746)
> at 
> org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:780)
> ... 9 more
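
For reference, an order-sensitive hash for a LIST value can be defined by folding 
element hashes, in the spirit of java.util.List.hashCode(). The sketch below is a 
hypothetical, self-contained illustration of what a "hash code on complex types" 
could look like; it is not the ObjectInspectorUtils change itself.

{noformat}
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only -- not ObjectInspectorUtils.hashCode().
// Defines an order-sensitive hash for a LIST value so rows could be bucketed
// by ReduceSinkOperator when an array column appears in the GROUP BY key.
public final class ComplexTypeHashSketch {

  static int hashList(List<?> value) {
    if (value == null) {
      return 0;
    }
    int result = 1;
    for (Object element : value) {
      // Nested lists recurse; primitives fall back to their own hashCode.
      int elementHash = (element instanceof List)
          ? hashList((List<?>) element)
          : (element == null ? 0 : element.hashCode());
      result = 31 * result + elementHash;
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(hashList(Arrays.asList(1, 2, 3)));
    System.out.println(hashList(Arrays.asList(3, 2, 1)));  // different hash
  }
}
{noformat}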

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2402) Function like with empty string is throwing null pointer exception

2011-09-08 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100569#comment-13100569
 ] 

John Sichi commented on HIVE-2402:
--

+1.  Will commit when tests pass.


> Function like with empty string is throwing null pointer exception
> --
>
> Key: HIVE-2402
> URL: https://issues.apache.org/jira/browse/HIVE-2402
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2402.1.patch, HIVE-2402.patch
>
>
> select emp.ename from emp where ename like ''
> This query is throwing null pointer exception

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi updated HIVE-2182:
-

Status: Open  (was: Patch Available)

Can you add the test case back in?  Also create a review board request?

> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.1.patch, HIVE-2182.patch
>
>
> To use a UDF, the following steps were executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we miss the first step (add jar) and execute only the 
> remaining steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> then the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107)
>... 31 more
> {noformat}
> Instead of a NullPointerException, it should throw a meaningful exception.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: antlr license

2011-09-08 Thread John Sichi
If it's not a difficult upgrade, seems like it would be a good idea.

JVS

On Sep 8, 2011, at 10:51 AM, Ashutosh Chauhan wrote:

> I stumbled upon this one:
> http://www.antlr.org/wiki/display/ANTLR3/ANTLR+3.4+Release+Notes
> According to these release notes, antlr versions 3.0-3.3 have a circular
> dependency on 2.x antlr, whose license wasn't very clean. I'm wondering whether
> we should upgrade to antlr 3.4. I am not a license expert, so I don't know
> whether it is required or whether we can live with it as is.
> 
> Thanks,
> Ashutosh



[jira] [Updated] (HIVE-2402) Function like with empty string is throwing null pointer exception

2011-09-08 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2402:
---

Status: Patch Available  (was: Open)

> Function like with empty string is throwing null pointer exception
> --
>
> Key: HIVE-2402
> URL: https://issues.apache.org/jira/browse/HIVE-2402
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2402.1.patch, HIVE-2402.patch
>
>
> select emp.ename from emp where ename like ''
> This query is throwing null pointer exception

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




antlr license

2011-09-08 Thread Ashutosh Chauhan
I stumbled upon this one:
http://www.antlr.org/wiki/display/ANTLR3/ANTLR+3.4+Release+Notes
According to these release notes, antlr versions 3.0-3.3 have a circular
dependency on 2.x antlr, whose license wasn't very clean. I'm wondering whether
we should upgrade to antlr 3.4. I am not a license expert, so I don't know
whether it is required or whether we can live with it as is.

Thanks,
Ashutosh


[jira] [Commented] (HIVE-2402) Function like with empty string is throwing null pointer exception

2011-09-08 Thread jirapos...@reviews.apache.org (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100495#comment-13100495
 ] 

jirapos...@reviews.apache.org commented on HIVE-2402:
-


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1757/
---

Review request for hive and John Sichi.


Summary
---

By default the pattern type is COMPLEX, so a Pattern object is expected, but in 
this scenario the Pattern object is null, which causes the NullPointerException. 
The default pattern type can instead be NONE, and parseSimplePattern() will then 
assign the right type.


This addresses bug HIVE-2402.
https://issues.apache.org/jira/browse/HIVE-2402
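
A minimal sketch of the approach described above, assuming an empty LIKE pattern 
should be treated as a simple (non-wildcard) pattern so that no Pattern object is 
ever needed for it. The class below is illustrative only and is not the actual 
UDFLike code:

{noformat}
import java.util.regex.Pattern;

// Illustrative sketch only -- not the actual UDFLike patch.
public final class LikeSketch {

  enum PatternType { NONE, COMPLEX }

  private PatternType type = PatternType.NONE;  // proposed default (was COMPLEX)
  private Pattern regex;                        // only compiled for COMPLEX
  private String simplePattern = "";

  /** Analyses the LIKE pattern; '' stays a simple (NONE) pattern. */
  void setPattern(String likePattern) {
    if (likePattern.contains("%") || likePattern.contains("_")) {
      type = PatternType.COMPLEX;
      // Crude LIKE-to-regex translation, purely for illustration.
      regex = Pattern.compile(
          likePattern.replace("_", ".").replace("%", ".*"));
    } else {
      type = PatternType.NONE;
      simplePattern = likePattern;
    }
  }

  boolean matches(String value) {
    // With NONE as the default, an empty pattern never touches the (null)
    // regex, so the NullPointerException from the bug report cannot occur.
    return type == PatternType.COMPLEX
        ? regex.matcher(value).matches()
        : value.equals(simplePattern);
  }

  public static void main(String[] args) {
    LikeSketch like = new LikeSketch();
    like.setPattern("");                        // the failing case: ename LIKE ''
    System.out.println(like.matches("Smith"));  // prints false, no NPE
  }
}
{noformat}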


Diffs
-

  trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFLike.java 1165244 
  trunk/ql/src/test/queries/clientpositive/udf_like.q 1165244 
  trunk/ql/src/test/results/clientpositive/udf_like.q.out 1165244 

Diff: https://reviews.apache.org/r/1757/diff


Testing
---

Added unit testcase.


Thanks,

chinna



> Function like with empty string is throwing null pointer exception
> --
>
> Key: HIVE-2402
> URL: https://issues.apache.org/jira/browse/HIVE-2402
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2402.1.patch, HIVE-2402.patch
>
>
> select emp.ename from emp where ename like ''
> This query throws a null pointer exception.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Review Request: Function like with empty string is throwing null pointer exception

2011-09-08 Thread chinnarao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1757/
---

Review request for hive and John Sichi.


Summary
---

By default, patternType is COMPLEX, so a Pattern object is expected; in this 
scenario the Pattern object is null, which causes the NullPointerException. 
The default patternType can instead be NONE, and parseSimplePattern() will 
assign the right type.


This addresses bug HIVE-2402.
https://issues.apache.org/jira/browse/HIVE-2402


Diffs
-

  trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFLike.java 1165244 
  trunk/ql/src/test/queries/clientpositive/udf_like.q 1165244 
  trunk/ql/src/test/results/clientpositive/udf_like.q.out 1165244 

Diff: https://reviews.apache.org/r/1757/diff


Testing
---

Added unit testcase.


Thanks,

chinna



[jira] [Commented] (HIVE-2402) Function like with empty string is throwing null pointer exception

2011-09-08 Thread Chinna Rao Lalam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100491#comment-13100491
 ] 

Chinna Rao Lalam commented on HIVE-2402:


By default, patternType is COMPLEX, so a Pattern object is expected; in this 
scenario the Pattern object is null, which causes the NullPointerException. 
The default patternType can instead be NONE, and parseSimplePattern() will 
assign the right type.

> Function like with empty string is throwing null pointer exception
> --
>
> Key: HIVE-2402
> URL: https://issues.apache.org/jira/browse/HIVE-2402
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2402.1.patch, HIVE-2402.patch
>
>
> select emp.ename from emp where ename like ''
> This query throws a null pointer exception.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2402) Function like with empty string is throwing null pointer exception

2011-09-08 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2402:
---

Attachment: HIVE-2402.1.patch

> Function like with empty string is throwing null pointer exception
> --
>
> Key: HIVE-2402
> URL: https://issues.apache.org/jira/browse/HIVE-2402
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2402.1.patch, HIVE-2402.patch
>
>
> select emp.ename from emp where ename like ''
> This query throws a null pointer exception.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2415) disallow partition column names when doing replace columns

2011-09-08 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100473#comment-13100473
 ] 

Ashutosh Chauhan commented on HIVE-2415:


We discussed this a bit in yesterday's contributor meeting. The general view 
was that all of these checks make sense in HiveMetaStore. For the backdoor you 
are describing, John suggested there could be an "admin" mode that skips these 
checks in the metastore. So, to make changes that are generally not permitted, 
you would run in "admin" mode; at all other times you would run in regular 
user mode.
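
For readers who want to picture the suggestion, here is a rough, hypothetical sketch of such a check. The property name hive.metastore.admin.mode and the class itself are invented for illustration; they are not an existing Hive API.

{code}
import java.util.List;
import java.util.Properties;

public class SchemaChangeGuard {
  private final boolean adminMode;

  public SchemaChangeGuard(Properties conf) {
    // e.g. set hive.metastore.admin.mode=true only when an operator needs the backdoor
    this.adminMode = Boolean.parseBoolean(
        conf.getProperty("hive.metastore.admin.mode", "false"));
  }

  public void checkReplaceColumns(List<String> newColumnNames, List<String> partitionColumnNames) {
    if (adminMode) {
      return; // admin mode: skip validation so exceptional changes stay possible
    }
    for (String name : newColumnNames) {
      if (partitionColumnNames.contains(name)) {
        throw new IllegalArgumentException(
            "Replacement column clashes with partition column: " + name);
      }
    }
  }
}
{code}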

> disallow partition column names when doing replace columns
> --
>
> Key: HIVE-2415
> URL: https://issues.apache.org/jira/browse/HIVE-2415
> Project: Hive
>  Issue Type: Bug
>Reporter: He Yongqiang
>Assignee: He Yongqiang
> Attachments: HIVE-2415.1.patch
>
>
> alter table ... replace columns allows adding a column with the same name as a 
> partition column, which introduces inconsistency. 
> We should disallow this. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2217) add Query text for debugging in lock data

2011-09-08 Thread Jiayan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayan Jiang updated HIVE-2217:
---

Attachment: hive_diff2

> add Query text for debugging in lock data
> -
>
> Key: HIVE-2217
> URL: https://issues.apache.org/jira/browse/HIVE-2217
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.7.1
>Reporter: Namit Jain
> Attachments: hive_diff2
>
>
> Currently, the queryId is stored in the lock data; 
> storing the query text as well would improve debuggability.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2217) add Query text for debugging in lock data

2011-09-08 Thread Jiayan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayan Jiang updated HIVE-2217:
---

Attachment: (was: hive_diff2)

> add Query text for debugging in lock data
> -
>
> Key: HIVE-2217
> URL: https://issues.apache.org/jira/browse/HIVE-2217
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.7.1
>Reporter: Namit Jain
> Attachments: hive_diff2
>
>
> Currently, the queryId is stored in the lock data; 
> storing the query text as well would improve debuggability.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Review Request: HIVE-2223: Add group by support for keys of type ARRAY and MAP.

2011-09-08 Thread Igor Kabiljo

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1745/#review1814
---


I suppose hashCode should return the same value that List.hashCode and 
Map.hashCode would return (for consistency).

- Igor
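
For reference, the contract in question is spelled out in the java.util.List and java.util.Map javadoc; a small sketch of both formulas (the helper names here are mine) follows.

{code}
import java.util.List;
import java.util.Map;

public final class CollectionHashCodes {
  // java.util.List defines hashCode exactly like this.
  static int listHashCode(List<?> list) {
    int hashCode = 1;
    for (Object e : list) {
      hashCode = 31 * hashCode + (e == null ? 0 : e.hashCode());
    }
    return hashCode;
  }

  // java.util.Map defines hashCode as the sum of its entries' hash codes,
  // where each entry hashes to key.hashCode() ^ value.hashCode().
  static int mapHashCode(Map<?, ?> map) {
    int hashCode = 0;
    for (Map.Entry<?, ?> e : map.entrySet()) {
      Object k = e.getKey();
      Object v = e.getValue();
      hashCode += (k == null ? 0 : k.hashCode()) ^ (v == null ? 0 : v.hashCode());
    }
    return hashCode;
  }
}
{code}

Matching these formulas would keep hash-based grouping consistent with what plain Java collections of the same values would produce.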


On 2011-09-08 04:51:03, Jonathan Chang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/1745/
> ---
> 
> (Updated 2011-09-08 04:51:03)
> 
> 
> Review request for hive.
> 
> 
> Summary
> ---
> 
> Adds hash codes for List and Map object inspectors.
> 
> 
> Diffs
> -
> 
> 
> Diff: https://reviews.apache.org/r/1745/diff
> 
> 
> Testing
> ---
> 
> Added unittest.
> 
> 
> Thanks,
> 
> Jonathan
> 
>



[jira] [Commented] (HIVE-1884) Potential risk of resource leaks in Hive

2011-09-08 Thread Florin Diaconeasa (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100257#comment-13100257
 ] 

Florin Diaconeasa commented on HIVE-1884:
-

Hive is running in server mode as a daemon.

> Potential risk of resource leaks in Hive
> 
>
> Key: HIVE-1884
> URL: https://issues.apache.org/jira/browse/HIVE-1884
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Metastore, Query Processor, Server Infrastructure
>Affects Versions: 0.3.0, 0.4.0, 0.4.1, 0.5.0, 0.6.0
> Environment: Hive 0.6.0, Hadoop 0.20.1
> SUSE Linux Enterprise Server 11 (i586)
>Reporter: Mohit Sikri
>Assignee: Chinna Rao Lalam
> Fix For: 0.8.0
>
> Attachments: HIVE-1884.1.PATCH, HIVE-1884.2.patch, HIVE-1884.3.patch, 
> HIVE-1884.4.patch, HIVE-1884.5.patch
>
>
> h3. There are a couple of resource leaks.
> h4. For example,
> In CliDriver.java, method processReader(), the buffered reader is not 
> closed.
> h3. There are also risks of resources getting leaked; in such cases we need 
> to refactor the code to move the closing of resources into a finally block.
> h4. For example,
> In Throttle.java, method checkJobTracker(), the following code snippet 
> might cause a resource leak.
> {code}
> InputStream in = url.openStream();
> in.read(buffer);
> in.close();
> {code}
> Ideally, and as per best coding practices, it should look like the code below:
> {code}
> InputStream in = null;
> try {
>     in = url.openStream();
>     int numRead = in.read(buffer);
> } finally {
>     IOUtils.closeStream(in);
> }
> {code}
> Similar cases were found in ExplainTask.java, DDLTask.java, etc. We need to 
> refactor all such occurrences.
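
As a companion to the Throttle.java example above, a minimal sketch of the processReader() case mentioned earlier might look like the following; the signature and body are assumptions for illustration, not the actual CliDriver code.

{code}
import java.io.BufferedReader;
import java.io.IOException;

public class ReaderClosingSketch {
  int processReader(BufferedReader reader) throws IOException {
    try {
      String line;
      while ((line = reader.readLine()) != null) {
        // process each line of the script here ...
      }
      return 0;
    } finally {
      reader.close(); // closed even if processing throws
    }
  }
}
{code}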

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1884) Potential risk of resource leaks in Hive

2011-09-08 Thread Florin Diaconeasa (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100250#comment-13100250
 ] 

Florin Diaconeasa commented on HIVE-1884:
-

Hello,

Could this lead to input files being ignored?

We have a query that does several "UNION ALL"s. Apparently, sometimes Hive 
ignores one of the SELECTs; I am not sure whether it ignores the SELECT itself 
or simply doesn't see the input files for that SELECT. There are 6 queries 
combined with UNION ALL.

This has happened several times with different SELECTs from that big query, 
and the query is valid. This leads me to think it's either related to this 
issue or to a memory leak.

Setup: Hadoop 0.20.1, Hive 0.6, Debian 5.0 x64

Thank you,

Flo

> Potential risk of resource leaks in Hive
> 
>
> Key: HIVE-1884
> URL: https://issues.apache.org/jira/browse/HIVE-1884
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Metastore, Query Processor, Server Infrastructure
>Affects Versions: 0.3.0, 0.4.0, 0.4.1, 0.5.0, 0.6.0
> Environment: Hive 0.6.0, Hadoop 0.20.1
> SUSE Linux Enterprise Server 11 (i586)
>Reporter: Mohit Sikri
>Assignee: Chinna Rao Lalam
> Fix For: 0.8.0
>
> Attachments: HIVE-1884.1.PATCH, HIVE-1884.2.patch, HIVE-1884.3.patch, 
> HIVE-1884.4.patch, HIVE-1884.5.patch
>
>
> h3. There are a couple of resource leaks.
> h4. For example,
> In CliDriver.java, method processReader(), the buffered reader is not 
> closed.
> h3. There are also risks of resources getting leaked; in such cases we need 
> to refactor the code to move the closing of resources into a finally block.
> h4. For example,
> In Throttle.java, method checkJobTracker(), the following code snippet 
> might cause a resource leak.
> {code}
> InputStream in = url.openStream();
> in.read(buffer);
> in.close();
> {code}
> Ideally, and as per best coding practices, it should look like the code below:
> {code}
> InputStream in = null;
> try {
>     in = url.openStream();
>     int numRead = in.read(buffer);
> } finally {
>     IOUtils.closeStream(in);
> }
> {code}
> Similar cases were found in ExplainTask.java, DDLTask.java, etc. We need to 
> refactor all such occurrences.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Hive-0.8.0-SNAPSHOT-h0.21 - Build # 13 - Failure

2011-09-08 Thread Apache Jenkins Server
Changes for Build #13
[amareshwari] HIVE-2431. svn merge -r 1166527:1166528 from trunk




1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try "ant test ... 
-Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
more logs.
at junit.framework.Assert.fail(Assert.java:47)
at 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:7852)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:154)
at junit.framework.TestCase.runBare(TestCase.java:127)
at junit.framework.TestResult$1.protect(TestResult.java:106)
at junit.framework.TestResult.runProtected(TestResult.java:124)
at junit.framework.TestResult.run(TestResult.java:109)
at junit.framework.TestCase.run(TestCase.java:118)
at junit.framework.TestSuite.runTest(TestSuite.java:208)
at junit.framework.TestSuite.run(TestSuite.java:203)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)




The Apache Jenkins build system has built Hive-0.8.0-SNAPSHOT-h0.21 (build #13)

Status: Failure

Check console output at 
https://builds.apache.org/job/Hive-0.8.0-SNAPSHOT-h0.21/13/ to view the results.


[jira] [Commented] (HIVE-2431) upgrading thrift version didn't upgrade libthrift.jar symlink correctly

2011-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100242#comment-13100242
 ] 

Hudson commented on HIVE-2431:
--

Integrated in Hive-0.8.0-SNAPSHOT-h0.21 #13 (See 
[https://builds.apache.org/job/Hive-0.8.0-SNAPSHOT-h0.21/13/])
HIVE-2431. svn merge -r 1166527:1166528 from trunk

amareshwari : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166529
Files : 
* /hive/branches/branch-0.8/build.xml


> upgrading thrift version didn't upgrade libthrift.jar symlink correctly
> ---
>
> Key: HIVE-2431
> URL: https://issues.apache.org/jira/browse/HIVE-2431
> Project: Hive
>  Issue Type: Bug
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Fix For: 0.8.0
>
> Attachments: HIVE-2431.patch
>
>
> libthrift.jar and libfb303.jar are symlinks to the current thrift version. 
> With the upgrade to 0.7, there's a bug in the symlink creation. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2431) upgrading thrift version didn't upgrade libthrift.jar symlink correctly

2011-09-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100208#comment-13100208
 ] 

Hudson commented on HIVE-2431:
--

Integrated in Hive-trunk-h0.21 #941 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/941/])
HIVE-2431. Fixes symlink creation for libthrift.jar after thrift version 
upgrade. (Ning Zhang via amareshwari)

amareshwari : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1166528
Files : 
* /hive/trunk/build.xml


> upgrading thrift version didn't upgrade libthrift.jar symlink correctly
> ---
>
> Key: HIVE-2431
> URL: https://issues.apache.org/jira/browse/HIVE-2431
> Project: Hive
>  Issue Type: Bug
>Reporter: Ning Zhang
>Assignee: Ning Zhang
> Fix For: 0.8.0
>
> Attachments: HIVE-2431.patch
>
>
> libthrift.jar and libfb303.jar are symlinks to the current thrift version. 
> With the upgrade to 0.7, there's a bug in the symlink creation. 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2182:
---

Status: Patch Available  (was: Open)

> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.1.patch, HIVE-2182.patch
>
>
> To use a UDF, the following steps are executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we skip the first step (add jar) and execute only the remaining 
> steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107)
>... 31 more
> {noformat}
> Instead of a null pointer exception it should throw a meaningful exception

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread Chinna Rao Lalam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinna Rao Lalam updated HIVE-2182:
---

Attachment: HIVE-2182.1.patch

> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.1.patch, HIVE-2182.patch
>
>
> To use a UDF, the following steps are executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we skip the first step (add jar) and execute only the remaining 
> steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107)
>... 31 more
> {noformat}
> Instead of a null pointer exception it should throw a meaningful exception

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2182) Avoid null pointer exception when executing UDF

2011-09-08 Thread Chinna Rao Lalam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100117#comment-13100117
 ] 

Chinna Rao Lalam commented on HIVE-2182:


While deserializing, the udfClass variable becomes null because the class 
is not present in the MR classpath.
The user logs show the following exception:

java.lang.ClassNotFoundException: com.samples.hive.udf.Grade
Continuing ...

So I have introduced a new variable, "udfClassName", to hold the UDF class 
name.

Now the exception displays the class name as follows:


Caused by: org.apache.hadoop.hive.ql.exec.UDFArgumentException: The UDF 
Implementation class 'com.samples.hive.udf.Grade' is Not in class path
at 
org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:141)
at 
org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
at 
org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:896)
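
A hedged sketch of the approach described in this comment: the udfClassName variable and the error message mirror the text above, but the surrounding structure is an assumption rather than the actual GenericUDFBridge source, which (per the trace above) raises a UDFArgumentException.

{code}
public class UdfBridgeSketch {
  private String udfClassName;           // survives plan serialization as a plain String
  private transient Class<?> udfClass;   // null when the UDF jar is missing on the task

  void initializeUdf() {
    if (udfClass == null) {
      try {
        udfClass = Class.forName(udfClassName, true,
            Thread.currentThread().getContextClassLoader());
      } catch (ClassNotFoundException e) {
        // meaningful error instead of a downstream NullPointerException
        throw new IllegalStateException("The UDF implementation class '"
            + udfClassName + "' is not in the class path", e);
      }
    }
    // ... continue with normal initialization ...
  }
}
{code}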

> Avoid null pointer exception when executing UDF
> ---
>
> Key: HIVE-2182
> URL: https://issues.apache.org/jira/browse/HIVE-2182
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.5.0, 0.8.0
> Environment: Hadoop 0.20.1, Hive0.8.0 and SUSE Linux Enterprise 
> Server 10 SP2 (i586) - Kernel 2.6.16.60-0.21-smp (5)
>Reporter: Chinna Rao Lalam
>Assignee: Chinna Rao Lalam
> Attachments: HIVE-2182.patch
>
>
> To use a UDF, the following steps are executed:
> {noformat}
> add jar /home/udf/udf.jar;
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> But if we skip the first step (add jar) and execute only the remaining 
> steps,
> {noformat}
> create temporary function grade as 'udf.Grade';
> select m.userid,m.name,grade(m.maths,m.physics,m.chemistry) from marks m;
> {noformat}
> the TaskTracker throws this exception:
> {noformat}
> Caused by: java.lang.RuntimeException: Map operator initialization failed
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:121)
>... 18 more
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
>at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFBridge.initialize(GenericUDFBridge.java:126)
>at 
> org.apache.hadoop.hive.ql.exec.ExprNodeGenericFuncEvaluator.initialize(ExprNodeGenericFuncEvaluator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:878)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:904)
>at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:60)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:433)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:389)
>at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.initializeOp(TableScanOperator.java:133)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.MapOperator.initializeOp(MapOperator.java:444)
>at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:357)
>at 
> org.apache.hadoop.hive.ql.exec.ExecMapper.configure(ExecMapper.java:98)
>... 18 more
> Caused by: java.lang.NullPointerException
>at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107)
>... 31 more
> {noformat}
> Instead of a null pointer exception it should throw a meaningful exception

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2250) "DESCRIBE EXTENDED table_name" shows inconsistent compression information.

2011-09-08 Thread subramanian raghunathan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13100116#comment-13100116
 ] 

subramanian raghunathan commented on HIVE-2250:
---

Handled the following scenarios:

Create table
Create table like
Alter table fileformat

Based on the InputFormat: if its type is SequenceFileInputFormat, the compression 
flag is set to true, and vice versa.
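
A tiny illustrative sketch of that rule (the class and helper name are assumptions, not part of the patch):

{code}
public final class CompressionFlagSketch {
  // Derive the "compressed" flag shown by DESCRIBE EXTENDED from the table's
  // InputFormat, as described above for CREATE TABLE, CREATE TABLE LIKE and
  // ALTER TABLE ... SET FILEFORMAT.
  static boolean isCompressedFor(String inputFormatClass) {
    // SequenceFile-backed tables are treated as compressed; everything else is not.
    return "org.apache.hadoop.mapred.SequenceFileInputFormat".equals(inputFormatClass);
  }
}
{code}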

> "DESCRIBE EXTENDED table_name" shows inconsistent compression information.
> --
>
> Key: HIVE-2250
> URL: https://issues.apache.org/jira/browse/HIVE-2250
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Diagnosability
>Affects Versions: 0.7.0
> Environment: RHEL, Full Cloudera stack
>Reporter: Travis Powell
>Priority: Critical
> Attachments: HIVE-2250.patch
>
>
> Commands executed in this order:
> user@node # hive
> hive> SET hive.exec.compress.output=true; 
> hive> SET io.seqfile.compression.type=BLOCK;
> hive> CREATE TABLE table_name ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t' STORED AS SEQUENCEFILE;
> hive> CREATE TABLE staging_table ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t';
> hive> LOAD DATA LOCAL INPATH 'file:///root/input/' OVERWRITE INTO TABLE 
> staging_table;
> hive> INSERT OVERWRITE TABLE table_name SELECT * FROM staging_table;
> (Map reduce job to change to sequence file...)
> hive> DESCRIBE EXTENDED table_name;
> Detailed Table Information  Table(tableName:table_name, 
> dbName:benchmarking, owner:root, createTime:1309480053, lastAccessTime:0, 
> retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:session_key, 
> type:string, comment:null), FieldSchema(name:remote_address, type:string, 
> comment:null), FieldSchema(name:canister_lssn, type:string, comment:null), 
> FieldSchema(name:canister_session_id, type:bigint, comment:null), 
> FieldSchema(name:tltsid, type:string, comment:null), FieldSchema(name:tltuid, 
> type:string, comment:null), FieldSchema(name:tltvid, type:string, 
> comment:null), FieldSchema(name:canister_server, type:string, comment:null), 
> FieldSchema(name:session_timestamp, type:string, comment:null), 
> FieldSchema(name:session_duration, type:string, comment:null), 
> FieldSchema(name:hit_count, type:bigint, comment:null), 
> FieldSchema(name:http_user_agent, type:string, comment:null), 
> FieldSchema(name:extractid, type:bigint, comment:null), 
> FieldSchema(name:site_link, type:string, comment:null), FieldSchema(name:dt, 
> type:string, comment:null), FieldSchema(name:hour, type:int, comment:null)], 
> location:hdfs://hadoop2/user/hive/warehouse/benchmarking.db/table_name, 
> inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
> outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
> compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
> serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
> parameters:{serialization.format=   , field.delim=
> *** SEE ABOVE: Compression is set to FALSE, even though the contents of the 
> table are compressed.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2250) "DESCRIBE EXTENDED table_name" shows inconsistent compression information.

2011-09-08 Thread subramanian raghunathan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subramanian raghunathan updated HIVE-2250:
--

Attachment: HIVE-2250.patch

> "DESCRIBE EXTENDED table_name" shows inconsistent compression information.
> --
>
> Key: HIVE-2250
> URL: https://issues.apache.org/jira/browse/HIVE-2250
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Diagnosability
>Affects Versions: 0.7.0
> Environment: RHEL, Full Cloudera stack
>Reporter: Travis Powell
>Priority: Critical
> Attachments: HIVE-2250.patch
>
>
> Commands executed in this order:
> user@node # hive
> hive> SET hive.exec.compress.output=true; 
> hive> SET io.seqfile.compression.type=BLOCK;
> hive> CREATE TABLE table_name ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t' STORED AS SEQUENCEFILE;
> hive> CREATE TABLE staging_table ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t';
> hive> LOAD DATA LOCAL INPATH 'file:///root/input/' OVERWRITE INTO TABLE 
> staging_table;
> hive> INSERT OVERWRITE TABLE table_name SELECT * FROM staging_table;
> (Map reduce job to change to sequence file...)
> hive> DESCRIBE EXTENDED table_name;
> Detailed Table Information  Table(tableName:table_name, 
> dbName:benchmarking, owner:root, createTime:1309480053, lastAccessTime:0, 
> retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:session_key, 
> type:string, comment:null), FieldSchema(name:remote_address, type:string, 
> comment:null), FieldSchema(name:canister_lssn, type:string, comment:null), 
> FieldSchema(name:canister_session_id, type:bigint, comment:null), 
> FieldSchema(name:tltsid, type:string, comment:null), FieldSchema(name:tltuid, 
> type:string, comment:null), FieldSchema(name:tltvid, type:string, 
> comment:null), FieldSchema(name:canister_server, type:string, comment:null), 
> FieldSchema(name:session_timestamp, type:string, comment:null), 
> FieldSchema(name:session_duration, type:string, comment:null), 
> FieldSchema(name:hit_count, type:bigint, comment:null), 
> FieldSchema(name:http_user_agent, type:string, comment:null), 
> FieldSchema(name:extractid, type:bigint, comment:null), 
> FieldSchema(name:site_link, type:string, comment:null), FieldSchema(name:dt, 
> type:string, comment:null), FieldSchema(name:hour, type:int, comment:null)], 
> location:hdfs://hadoop2/user/hive/warehouse/benchmarking.db/table_name, 
> inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
> outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
> compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
> serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
> parameters:{serialization.format=   , field.delim=
> *** SEE ABOVE: Compression is set to FALSE, even though the contents of the 
> table are compressed.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HIVE-2250) "DESCRIBE EXTENDED table_name" shows inconsistent compression information.

2011-09-08 Thread subramanian raghunathan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

subramanian raghunathan updated HIVE-2250:
--

Status: Patch Available  (was: Open)

> "DESCRIBE EXTENDED table_name" shows inconsistent compression information.
> --
>
> Key: HIVE-2250
> URL: https://issues.apache.org/jira/browse/HIVE-2250
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Diagnosability
>Affects Versions: 0.7.0
> Environment: RHEL, Full Cloudera stack
>Reporter: Travis Powell
>Priority: Critical
> Attachments: HIVE-2250.patch
>
>
> Commands executed in this order:
> user@node # hive
> hive> SET hive.exec.compress.output=true; 
> hive> SET io.seqfile.compression.type=BLOCK;
> hive> CREATE TABLE table_name ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t' STORED AS SEQUENCEFILE;
> hive> CREATE TABLE staging_table ( [...] ) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '\t';
> hive> LOAD DATA LOCAL INPATH 'file:///root/input/' OVERWRITE INTO TABLE 
> staging_table;
> hive> INSERT OVERWRITE TABLE table_name SELECT * FROM staging_table;
> (Map reduce job to change to sequence file...)
> hive> DESCRIBE EXTENDED table_name;
> Detailed Table Information  Table(tableName:table_name, 
> dbName:benchmarking, owner:root, createTime:1309480053, lastAccessTime:0, 
> retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:session_key, 
> type:string, comment:null), FieldSchema(name:remote_address, type:string, 
> comment:null), FieldSchema(name:canister_lssn, type:string, comment:null), 
> FieldSchema(name:canister_session_id, type:bigint, comment:null), 
> FieldSchema(name:tltsid, type:string, comment:null), FieldSchema(name:tltuid, 
> type:string, comment:null), FieldSchema(name:tltvid, type:string, 
> comment:null), FieldSchema(name:canister_server, type:string, comment:null), 
> FieldSchema(name:session_timestamp, type:string, comment:null), 
> FieldSchema(name:session_duration, type:string, comment:null), 
> FieldSchema(name:hit_count, type:bigint, comment:null), 
> FieldSchema(name:http_user_agent, type:string, comment:null), 
> FieldSchema(name:extractid, type:bigint, comment:null), 
> FieldSchema(name:site_link, type:string, comment:null), FieldSchema(name:dt, 
> type:string, comment:null), FieldSchema(name:hour, type:int, comment:null)], 
> location:hdfs://hadoop2/user/hive/warehouse/benchmarking.db/table_name, 
> inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, 
> outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, 
> compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
> serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
> parameters:{serialization.format=   , field.delim=
> *** SEE ABOVE: Compression is set to FALSE, even though the contents of the 
> table are compressed.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira