[jira] [Commented] (HIVE-3725) Add support for pulling HBase columns with prefixes

2015-09-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726885#comment-14726885
 ] 

Lefty Leverenz commented on HIVE-3725:
--

Thanks [~swarnim], the doc looks good.  I added a link to this JIRA issue.

> Add support for pulling HBase columns with prefixes
> ---
>
> Key: HIVE-3725
> URL: https://issues.apache.org/jira/browse/HIVE-3725
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.9.0
>Reporter: Swarnim Kulkarni
>Assignee: Swarnim Kulkarni
> Fix For: 0.12.0
>
> Attachments: HIVE-3725.1.patch.txt, HIVE-3725.2.patch.txt, 
> HIVE-3725.3.patch.txt, HIVE-3725.4.patch.txt, HIVE-3725.patch.3.txt
>
>
> The current HBase-Hive integration supports reading many values from the same row 
> by specifying a column family, and specifying just the column family pulls 
> in all qualifiers within that family.
> We should add support for specifying a qualifier prefix so that all columns 
> whose qualifiers start with that prefix are pulled in automatically. Wildcard 
> support would be ideal.
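
For reference, the raw HBase client API already exposes this kind of selection through 
ColumnPrefixFilter; the sketch below only illustrates the underlying scan behaviour the 
handler would need to surface. The table, family, and prefix names are made up, and this 
is not the Hive mapping syntax introduced by the patch.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Table, family, and prefix below are placeholders for the example.
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("hive_backed_table"))) {
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("cf"));
      // Only return qualifiers starting with "tag_" -- the kind of selection
      // a prefix mapping in the Hive handler would need to express.
      scan.setFilter(new ColumnPrefixFilter(Bytes.toBytes("tag_")));
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result row : scanner) {
          row.listCells().forEach(cell ->
              System.out.println(Bytes.toString(CellUtil.cloneQualifier(cell))));
        }
      }
    }
  }
}
{code}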



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11708) Logical operators raises ClassCastExceptions with NULL

2015-09-02 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726968#comment-14726968
 ] 

Lars Francke commented on HIVE-11708:
-

That's true. Good example, thank you.

I'll try to take a look at this and will assign it to myself, but if anyone gets to it 
before me, please don't hesitate to reassign.

> Logical operators raises ClassCastExceptions with NULL
> --
>
> Key: HIVE-11708
> URL: https://issues.apache.org/jira/browse/HIVE-11708
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 1.2.1
>Reporter: Satoshi Tagomori
>
> According to the Language Manual UDF page, logical operators return NULL if one of 
> the arguments is NULL.
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators
> But the query below fails with a ClassCastException.
> {code}
> SELECT COUNT(*) AS c
> FROM tbl
> WHERE 1=1 AND NULL
> {code}
> Exception (on 0.13):
> {noformat}
> 15/08/27 08:56:23 ERROR ql.Driver: FAILED: ClassCastException 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
> java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.initialize(GenericUDFOPAnd.java:52)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:116)
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:231)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:934)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1128)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:184)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:9716)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:9672)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3208)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3005)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8228)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8183)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9015)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9281)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:427)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:323)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:980)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1045)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
> {noformat}
> I confirmed that Hive 1.2.1 on the HDP 2.3 Sandbox also raises this exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11482) Add retrying thrift client for HiveServer2

2015-09-02 Thread Akshay Goyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay Goyal updated HIVE-11482:

Attachment: HIVE-11482.02.patch

> Add retrying thrift client for HiveServer2
> --
>
> Key: HIVE-11482
> URL: https://issues.apache.org/jira/browse/HIVE-11482
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
> Attachments: HIVE-11482.01.patch, HIVE-11482.02.patch
>
>
> Similar to 
> https://github.com/apache/hive/blob/master/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java,
>  this improvement request is to add a retrying thrift client for HiveServer2 
> that retries on thrift exceptions.
> Here are a few commits done on a forked branch that can be picked up - 
> https://github.com/InMobi/hive/commit/7fb957fb9c2b6000d37c53294e256460010cb6b7
> https://github.com/InMobi/hive/commit/11e4b330f051c3f58927a276d562446761c9cd6d
> https://github.com/InMobi/hive/commit/241386fd870373a9253dca0bcbdd4ea7e665406c
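
As a rough illustration of the retry-proxy idea (similar in spirit to 
RetryingMetaStoreClient, but not the API these patches add), a client interface can be 
wrapped in a dynamic proxy that retries calls failing with TException:

{code}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

import org.apache.thrift.TException;

/** Illustrative sketch: retries any thrift-style client call that fails with TException. */
public final class RetryingClient implements InvocationHandler {
  private final Object delegate;
  private final int maxRetries;
  private final long retryDelayMs;

  private RetryingClient(Object delegate, int maxRetries, long retryDelayMs) {
    this.delegate = delegate;
    this.maxRetries = maxRetries;
    this.retryDelayMs = retryDelayMs;
  }

  @SuppressWarnings("unchecked")
  public static <T> T wrap(Class<T> iface, T delegate, int maxRetries, long retryDelayMs) {
    return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface },
        new RetryingClient(delegate, maxRetries, retryDelayMs));
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    TException lastError = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return method.invoke(delegate, args);
      } catch (InvocationTargetException e) {
        if (e.getCause() instanceof TException) {
          lastError = (TException) e.getCause();   // transport/protocol error: retry
          Thread.sleep(retryDelayMs);
        } else {
          throw e.getCause();                      // application error: do not retry
        }
      }
    }
    throw lastError;
  }
}
{code}

A real client would presumably also re-open the thrift transport before retrying and cap 
the total wait time; the sketch omits both for brevity.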



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10810) Document Beeline/CLI changes

2015-09-02 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726911#comment-14726911
 ] 

Ferdinand Xu commented on HIVE-10810:
-

Hi [~xuefuz] [~sladymon] [~leftylev], do you think the current wiki is 
comprehensive for the Beeline-CLI migration? If so, I'd like to resolve this issue, 
and we can continue updating that wiki if we find anything missing in the 
future. Thank you!

> Document Beeline/CLI changes
> 
>
> Key: HIVE-10810
> URL: https://issues.apache.org/jira/browse/HIVE-10810
> Project: Hive
>  Issue Type: Sub-task
>  Components: CLI
>Reporter: Xuefu Zhang
>Assignee: Ferdinand Xu
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11691) Update debugging info on Developer FAQ

2015-09-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726988#comment-14726988
 ] 

Lefty Leverenz commented on HIVE-11691:
---

Looks great!  Thanks Swarnim.  I just made a few editorial changes and added 
some links.

+1 but someone else should review for technical accuracy.

> Update debugging info on Developer FAQ
> --
>
> Key: HIVE-11691
> URL: https://issues.apache.org/jira/browse/HIVE-11691
> Project: Hive
>  Issue Type: Task
>Reporter: Swarnim Kulkarni
>Assignee: Swarnim Kulkarni
>
> The debugging info currently on [1] is very inadequate. This should be 
> updated for future developers. There is some info here [2], but it is also pretty 
> sparse and tied to Ant.
> [1] https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ
> [2] https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-09-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11383:
---
Attachment: HIVE-11383.14.patch

> Upgrade Hive to Calcite 1.4
> ---
>
> Key: HIVE-11383
> URL: https://issues.apache.org/jira/browse/HIVE-11383
> Project: Hive
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
> HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
> HIVE-11383.14.patch, HIVE-11383.2.patch, HIVE-11383.3.patch, 
> HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.4.patch, 
> HIVE-11383.5.patch, HIVE-11383.6.patch, HIVE-11383.7.patch, 
> HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.4.0-incubating.
> There is currently a snapshot release, which is close to what will be in 1.4. 
> I have checked that Hive compiles against the new snapshot, fixing one issue. 
> The patch is attached.
> The next step is to validate that Hive runs against the new Calcite, and to post any 
> issues to the Calcite list or log Calcite JIRA cases. [~jcamachorodriguez], 
> can you please do that?
> [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
> the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11708) Logical operators raises ClassCastExceptions with NULL

2015-09-02 Thread Satoshi Tagomori (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726982#comment-14726982
 ] 

Satoshi Tagomori commented on HIVE-11708:
-

Great! (y)

> Logical operators raises ClassCastExceptions with NULL
> --
>
> Key: HIVE-11708
> URL: https://issues.apache.org/jira/browse/HIVE-11708
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 1.2.1
>Reporter: Satoshi Tagomori
>Assignee: Lars Francke
>
> According to the Language Manual UDF page, logical operators return NULL if one of 
> the arguments is NULL.
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators
> But the query below fails with a ClassCastException.
> {code}
> SELECT COUNT(*) AS c
> FROM tbl
> WHERE 1=1 AND NULL
> {code}
> Exception (on 0.13):
> {noformat}
> 15/08/27 08:56:23 ERROR ql.Driver: FAILED: ClassCastException 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
> java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.initialize(GenericUDFOPAnd.java:52)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:116)
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:231)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:934)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1128)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:184)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:9716)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:9672)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3208)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3005)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8228)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8183)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9015)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9281)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:427)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:323)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:980)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1045)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
> {noformat}
> I confirmed that Hive 1.2.1 on the HDP 2.3 Sandbox also raises this exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11691) Update debugging info on Developer FAQ

2015-09-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726998#comment-14726998
 ] 

Lefty Leverenz commented on HIVE-11691:
---

By the way, the debugging section in the Developer Guide still needs to be 
updated for Maven instead of Ant, but that's not within the scope of this issue.

* [Developer Guide -- Debugging Hive Code | 
https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide#DeveloperGuide-DebuggingHiveCode]

> Update debugging info on Developer FAQ
> --
>
> Key: HIVE-11691
> URL: https://issues.apache.org/jira/browse/HIVE-11691
> Project: Hive
>  Issue Type: Task
>Reporter: Swarnim Kulkarni
>Assignee: Swarnim Kulkarni
>
> The debugging info currently on [1] is very inadequate. This should be 
> updated for future developers. There is some info here [2], but it is also pretty 
> sparse and tied to Ant.
> [1] https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ
> [2] https://cwiki.apache.org/confluence/display/Hive/DeveloperGuide



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11640) Shell command doesn't work for new CLI[Beeline-cli branch]

2015-09-02 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-11640:

Attachment: HIVE-11640.7-beeline-cli.patch

Resolve null/empty issue for the unit test

> Shell command doesn't work for new CLI[Beeline-cli branch]
> --
>
> Key: HIVE-11640
> URL: https://issues.apache.org/jira/browse/HIVE-11640
> Project: Hive
>  Issue Type: Sub-task
>  Components: CLI
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Attachments: HIVE-11640.1-beeline-cli.patch, 
> HIVE-11640.2-beeline-cli.patch, HIVE-11640.3-beeline-cli.patch, 
> HIVE-11640.4-beeline-cli.patch, HIVE-11640.5-beeline-cli.patch, 
> HIVE-11640.7-beeline-cli.patch
>
>
> The shell command doesn't work for the new CLI, and "Error: Method not 
> supported (state=,code=0)" was thrown during execution for the -f and -e options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726882#comment-14726882
 ] 

Hive QA commented on HIVE-11634:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753565/HIVE-11634.6.patch

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 9386 tests 
executed
*Failed tests:*
{noformat}
TestContribNegativeCliDriver - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pointlookup
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pointlookup2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mapjoin_decimal
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mergejoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_char_mapjoin1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_leftsemi_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_null_projection
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_context
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5143/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5143/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5143/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753565 - PreCommit-HIVE-TRUNK-Build

> Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
> --
>
> Key: HIVE-11634
> URL: https://issues.apache.org/jira/browse/HIVE-11634
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
> HIVE-11634.3.patch, HIVE-11634.4.patch, HIVE-11634.5.patch, HIVE-11634.6.patch
>
>
> Currently, we do not support partition pruning for the following scenario:
> {code}
> create table pcr_t1 (key int, value string) partitioned by (ds string);
> insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
> where key < 20 order by key;
> explain extended select ds from pcr_t1 where struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> If we run the above query, we see that all the partitions of table pcr_t1 are 
> present in the filter predicate, whereas we could prune partition 
> (ds='2000-04-10'). 
> The optimization is to rewrite the above query into the following:
> {code}
> explain extended select ds from pcr_t1 where  (struct(ds)) IN 
> (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
> is used by the partition pruner to prune the partitions which otherwise would not be 
> pruned.
> This is an extension of the idea presented in HIVE-11573.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11704) Create errata.txt file

2015-09-02 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-11704:
--
Labels: TODOC2.0  (was: )

> Create errata.txt file
> --
>
> Key: HIVE-11704
> URL: https://issues.apache.org/jira/browse/HIVE-11704
> Project: Hive
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>  Labels: TODOC2.0
> Fix For: 2.0.0
>
> Attachments: HIVE-11704.patch
>
>
> As discussed on the email list, we should have a file documenting known 
> problems in the commit messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11704) Create errata.txt file

2015-09-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726861#comment-14726861
 ] 

Lefty Leverenz commented on HIVE-11704:
---

Doc note:  This should be documented in How To Commit, so I added a TODOC2.0 
label.  (Also, the whole Commit section needs to be updated for git.)

* [How To Commit -- Commit | 
https://cwiki.apache.org/confluence/display/Hive/HowToCommit#HowToCommit-Commit]

Thanks, Owen.

> Create errata.txt file
> --
>
> Key: HIVE-11704
> URL: https://issues.apache.org/jira/browse/HIVE-11704
> Project: Hive
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>  Labels: TODOC2.0
> Fix For: 2.0.0
>
> Attachments: HIVE-11704.patch
>
>
> As discussed on the email list, we should have a file documenting known 
> problems in the commit messages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11640) Shell command doesn't work for new CLI[Beeline-cli branch]

2015-09-02 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-11640:

Attachment: (was: HIVE-11640.5-beeline-cli.patch)

> Shell command doesn't work for new CLI[Beeline-cli branch]
> --
>
> Key: HIVE-11640
> URL: https://issues.apache.org/jira/browse/HIVE-11640
> Project: Hive
>  Issue Type: Sub-task
>  Components: CLI
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Attachments: HIVE-11640.1-beeline-cli.patch, 
> HIVE-11640.2-beeline-cli.patch, HIVE-11640.3-beeline-cli.patch, 
> HIVE-11640.4-beeline-cli.patch, HIVE-11640.5-beeline-cli.patch
>
>
> The shell command doesn't work for the new CLI, and "Error: Method not 
> supported (state=,code=0)" was thrown during execution for the -f and -e options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11708) Logical operators raises ClassCastExceptions with NULL

2015-09-02 Thread Satoshi Tagomori (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726877#comment-14726877
 ] 

Satoshi Tagomori commented on HIVE-11708:
-

It also happens with the query below:
{code}
SELECT *
FROM testtbl
WHERE 1=1 AND (map('a',1,'b',2))['c']
{code}

I don't have an exact idea what the difference is between {{(map('a',1,'b',2))['c']}} 
and a field which may contain NULL values.
But this case (a NULL value produced by Hive query computation) seems a bit more serious 
to me than the NULL-literal scenario.


> Logical operators raises ClassCastExceptions with NULL
> --
>
> Key: HIVE-11708
> URL: https://issues.apache.org/jira/browse/HIVE-11708
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 1.2.1
>Reporter: Satoshi Tagomori
>
> According to the Language Manual UDF page, logical operators return NULL if one of 
> the arguments is NULL.
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators
> But the query below fails with a ClassCastException.
> {code}
> SELECT COUNT(*) AS c
> FROM tbl
> WHERE 1=1 AND NULL
> {code}
> Exception (on 0.13):
> {noformat}
> 15/08/27 08:56:23 ERROR ql.Driver: FAILED: ClassCastException 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
> java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.initialize(GenericUDFOPAnd.java:52)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:116)
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:231)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:934)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1128)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:184)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:9716)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:9672)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3208)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3005)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8228)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8183)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9015)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9281)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:427)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:323)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:980)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1045)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
> {noformat}
> I confirmed that Hive 1.2.1 on the HDP 2.3 Sandbox also raises this exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11708) Logical operators raises ClassCastExceptions with NULL

2015-09-02 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14726859#comment-14726859
 ] 

Lars Francke commented on HIVE-11708:
-

I tested this and it looks like it only happens when you use the literal 
"NULL". When you AND two columns together, one of which happens to have NULL 
values, it seems to work as intended.

Are you seeing the same?

I agree that it's still a bug but if this is the only scenario where it happens 
it'd be an edge case. Or are you seeing this for other queries as well?
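
One untested workaround sketch for the literal case, assuming an explicit cast gives the 
literal a BOOLEAN (rather than void) type so that GenericUDFOPAnd sees a boolean object 
inspector; shown here via plain Hive JDBC with a placeholder connection URL:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NullLiteralWorkaround {
  public static void main(String[] args) throws Exception {
    // Placeholder URL; point it at a real HiveServer2 instance.
    try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement();
         // Assumption: the CAST gives the literal a BOOLEAN type instead of void,
         // avoiding the WritableVoidObjectInspector cast in GenericUDFOPAnd.
         ResultSet rs = stmt.executeQuery(
             "SELECT COUNT(*) AS c FROM tbl WHERE 1=1 AND CAST(NULL AS BOOLEAN)")) {
      while (rs.next()) {
        System.out.println(rs.getLong(1));
      }
    }
  }
}
{code}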

> Logical operators raises ClassCastExceptions with NULL
> --
>
> Key: HIVE-11708
> URL: https://issues.apache.org/jira/browse/HIVE-11708
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.13.0, 1.2.1
>Reporter: Satoshi Tagomori
>
> According to the Language Manual UDF page, logical operators return NULL if one of 
> the arguments is NULL.
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators
> But the query below fails with a ClassCastException.
> {code}
> SELECT COUNT(*) AS c
> FROM tbl
> WHERE 1=1 AND NULL
> {code}
> Exception (on 0.13):
> {noformat}
> 15/08/27 08:56:23 ERROR ql.Driver: FAILED: ClassCastException 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
> java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableVoidObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.BooleanObjectInspector
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPAnd.initialize(GenericUDFOPAnd.java:52)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:116)
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:231)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:934)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1128)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:184)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:9716)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:9672)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3208)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3005)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8228)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8183)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9015)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9281)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:427)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:323)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:980)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1045)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:916)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906)
> {noformat}
> I confirmed that Hive 1.2.1 on the HDP 2.3 Sandbox also raises this exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11647) Bump hbase version to 1.1.1

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727575#comment-14727575
 ] 

Hive QA commented on HIVE-11647:




{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753657/HIVE-11647.1.patch.txt

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5150/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5150/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5150/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-cli ---
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/cli/target
[INFO] Deleting /data/hive-ptest/working/apache-github-source-source/cli 
(includes = [datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-cli ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-cli ---
[WARNING] Invalid project model for artifact 
[pentaho-aggdesigner-algorithm:org.pentaho:5.1.5-jhyde]. It will be ignored by 
the remote resources Mojo.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-cli 
---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/data/hive-ptest/working/apache-github-source-source/cli/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-cli ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-cli ---
[INFO] Compiling 4 source files to 
/data/hive-ptest/working/apache-github-source-source/cli/target/classes
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[80,16]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[81,16]
 sun.misc.SignalHandler is internal proprietary API and may be removed in a 
future release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[80,16]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[81,16]
 sun.misc.SignalHandler is internal proprietary API and may be removed in a 
future release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[80,16]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[81,16]
 sun.misc.SignalHandler is internal proprietary API and may be removed in a 
future release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[325,5]
 sun.misc.SignalHandler is internal proprietary API and may be removed in a 
future release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[326,5]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[331,29]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[332,54]
 sun.misc.SignalHandler is internal proprietary API and may be removed in a 
future release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[337,28]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[332,19]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java:[393,9]
 sun.misc.Signal is internal proprietary API and may be removed in a future 
release
[WARNING] 
/data/hive-ptest/working/apache-github-source-source/cli/src/java/org/apache/hadoop/hive/cli/RCFileCat.java:
 

[jira] [Commented] (HIVE-11640) Shell command doesn't work for new CLI[Beeline-cli branch]

2015-09-02 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727688#comment-14727688
 ] 

Xuefu Zhang commented on HIVE-11640:


Patch looks good. Just a minor question on RB.

> Shell command doesn't work for new CLI[Beeline-cli branch]
> --
>
> Key: HIVE-11640
> URL: https://issues.apache.org/jira/browse/HIVE-11640
> Project: Hive
>  Issue Type: Sub-task
>  Components: CLI
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Attachments: HIVE-11640.1-beeline-cli.patch, 
> HIVE-11640.2-beeline-cli.patch, HIVE-11640.3-beeline-cli.patch, 
> HIVE-11640.4-beeline-cli.patch, HIVE-11640.5-beeline-cli.patch, 
> HIVE-11640.7-beeline-cli.patch
>
>
> The shell command doesn't work for the new CLI, and "Error: Method not 
> supported (state=,code=0)" was thrown during execution for the -f and -e options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11705) refactor SARG stripe filtering for ORC into a method

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727564#comment-14727564
 ] 

Hive QA commented on HIVE-11705:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753641/HIVE-11705.01.patch

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9391 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mergejoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_inner_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_left_outer_join2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_leftsemi_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_varchar_mapjoin1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5148/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5148/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5148/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753641 - PreCommit-HIVE-TRUNK-Build

> refactor SARG stripe filtering for ORC into a method
> 
>
> Key: HIVE-11705
> URL: https://issues.apache.org/jira/browse/HIVE-11705
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11705.01.patch, HIVE-11705.patch
>
>
> For footer cache PPD to the metastore, we'd need a method to do the PPD. Tiny 
> item to create it on OrcInputFormat.
> For the metastore path, these methods will be called from an expression proxy 
> similar to the current objectstore expression filtering; it will change so that the 
> serialized SARG and column list come from the request instead of the conf; 
> includedCols etc. will also come from the request instead of assorted Java 
> objects. 
> The types and stripe stats will need to be extracted from HBase. This is a 
> little bit of a problem, since ideally we want to be inside an HBase 
> filter/coprocessor. I'd need to take a look to see if this is possible... 
> since that filter would need to either deserialize ORC, or we would need to 
> store the type and stats information in some other, non-ORC manner on write. The 
> latter is probably a better idea, although it's dangerous because there's no 
> sync between this code and ORC itself.
> Meanwhile, minimize dependencies for stripe picking to the essentials (and the conf, 
> which is easy to remove).
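
Purely as a shape sketch (every name below is hypothetical, and a real SARG covers much 
more than one equality predicate), the "tiny item" amounts to isolating stripe picking 
as a pure function over per-stripe stats, which is what would make it callable from a 
metastore-side expression proxy later:

{code}
import java.util.Arrays;
import java.util.List;

public final class StripePicker {

  /** Hypothetical per-stripe summary: min/max of a single long column. */
  public static final class StripeMinMax {
    final long min;
    final long max;
    public StripeMinMax(long min, long max) { this.min = min; this.max = max; }
  }

  /** Keep a stripe only if the predicate "col = value" could possibly match it. */
  public static boolean[] pickStripes(long value, List<StripeMinMax> stripeStats) {
    boolean[] include = new boolean[stripeStats.size()];
    for (int i = 0; i < stripeStats.size(); i++) {
      StripeMinMax s = stripeStats.get(i);
      include[i] = value >= s.min && value <= s.max;
    }
    return include;
  }

  public static void main(String[] args) {
    List<StripeMinMax> stats = Arrays.asList(new StripeMinMax(0, 9), new StripeMinMax(10, 19));
    System.out.println(Arrays.toString(pickStripes(12, stats)));  // [false, true]
  }
}
{code}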



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11657) HIVE-2573 introduces some issues during metastore init (and CLI init)

2015-09-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727709#comment-14727709
 ] 

Sushanth Sowmyan commented on HIVE-11657:
-

+1.

> HIVE-2573 introduces some issues during metastore init (and CLI init)
> -
>
> Key: HIVE-11657
> URL: https://issues.apache.org/jira/browse/HIVE-11657
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Critical
> Attachments: HIVE-11657.patch
>
>
> HIVE-2573 introduced static reload functions call.
> It has a few problems:
> 1) When the metastore client is initialized using an externally supplied config 
> (i.e. Hive.get(HiveConf)), it still gets called during static init using the 
> main service config. In my case, even though I have URIs in the supplied 
> config to connect to a remote metastore (which eventually happens), the static call 
> creates an ObjectStore, which is undesirable.
> 2) It breaks compat - old metastores do not support this call, so new clients 
> will fail, and there's no workaround such as not using the new feature, because the 
> static call is always made.
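
Not necessarily what the attached patch does, but the generic way to avoid doing this 
kind of work in a static initializer is to defer it until the first caller that actually 
has a config, e.g. a lazily initialized holder (the class and interface names below are 
made up for illustration):

{code}
/** Illustrative only: defers expensive setup out of static init until first use. */
public final class FunctionRegistryBootstrap {

  /** Hypothetical stand-in for the expensive work (creating an ObjectStore, reloading functions). */
  public interface FunctionLoader {
    void reloadFunctions();
  }

  private static volatile FunctionLoader loader;

  /** Nothing happens at class-load time; the loader is supplied by the first real caller. */
  public static void reloadFunctionsIfNeeded(FunctionLoader candidate) {
    if (loader == null) {
      synchronized (FunctionRegistryBootstrap.class) {
        if (loader == null) {
          loader = candidate;           // uses the caller's config, not the service default
          loader.reloadFunctions();
        }
      }
    }
  }

  private FunctionRegistryBootstrap() {}
}
{code}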



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11668) make sure directsql calls pre-query init when needed

2015-09-02 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727705#comment-14727705
 ] 

Sushanth Sowmyan commented on HIVE-11668:
-

Change looks good, and I've tested it out on mysql to make sure there are no 
surprises. +1.

> make sure directsql calls pre-query init when needed
> 
>
> Key: HIVE-11668
> URL: https://issues.apache.org/jira/browse/HIVE-11668
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11668.01.patch, HIVE-11668.02.patch, 
> HIVE-11668.patch
>
>
> See comments in HIVE-11123



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11617) Explain plan for multiple lateral views is very slow

2015-09-02 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727497#comment-14727497
 ] 

Aihua Xu commented on HIVE-11617:
-

Failed cases are not related. 

> Explain plan for multiple lateral views is very slow
> 
>
> Key: HIVE-11617
> URL: https://issues.apache.org/jira/browse/HIVE-11617
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-11617.2.patch, HIVE-11617.patch, HIVE-11617.patch
>
>
> The following EXPLAIN query will be very slow or may never finish if there are many 
> lateral views involved. High CPU usage is also observed.
> {noformat}
> CREATE TABLE `t1`(`pattern` array<string>);
>   
> explain select * from t1 
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1
> lateral view explode(pattern) tbl1 as col1;
> {noformat}
> From jstack, the job is busy with the preorder tree traversal. 
> {noformat}
> at java.util.regex.Matcher.getTextLength(Matcher.java:1234)
> at java.util.regex.Matcher.reset(Matcher.java:308)
> at java.util.regex.Matcher.(Matcher.java:228)
> at java.util.regex.Pattern.matcher(Pattern.java:1088)
> at org.apache.hadoop.hive.ql.lib.RuleRegExp.cost(RuleRegExp.java:67)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:72)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
> at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:56)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> org.apache.hadoop.hive.ql.lib.PreOrderWalker.walk(PreOrderWalker.java:61)
> at 
> 

[jira] [Commented] (HIVE-11712) Duplicate groupby keys cause ClassCastException

2015-09-02 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727726#comment-14727726
 ] 

Xuefu Zhang commented on HIVE-11712:


+1

> Duplicate groupby keys cause ClassCastException
> ---
>
> Key: HIVE-11712
> URL: https://issues.apache.org/jira/browse/HIVE-11712
> Project: Hive
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11712.1.patch
>
>
> With duplicate group-by keys, we could use the wrong object inspectors for some 
> group-by expressions, leading to a ClassCastException. For example, 
> {noformat}
> explain
> SELECT distinct s1.customer_name as x, s1.customer_name as y
> FROM default.testv1_staples s1 join default.src s2 on s1.customer_name = 
> s2.key
> HAVING (
> (SUM(s1.customer_balance) <= 4074689.00041)
> AND (AVG(s1.discount) <= 822)
> AND (COUNT(s2.value) > 4)
> )
> {noformat}
> will lead to
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableShortObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage$AbstractGenericUDAFAverageEvaluator.init(GenericUDAFAverage.java:374)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getGenericUDAFInfo(SemanticAnalyzer.java:3887)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanGroupByOperator1(SemanticAnalyzer.java:4354)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggrNoSkew(SemanticAnalyzer.java:5644)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8977)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9849)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9742)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10178)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10189)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10106)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HIVE-11689) minor flow changes to ORC split generation

2015-09-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reopened HIVE-11689:
-

looks like I committed the wrong patch

> minor flow changes to ORC split generation
> --
>
> Key: HIVE-11689
> URL: https://issues.apache.org/jira/browse/HIVE-11689
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.0.0
>
> Attachments: HIVE-11689.01.patch, HIVE-11689.01.patch, 
> HIVE-11689.patch
>
>
> There are two changes that would help future work on split PPD into the HBase 
> metastore. 
> 1) Move the non-HDFS split strategy determination logic from the thread pool into 
> the main thread.
> 2) Instead of iterating through the futures and waiting on each, use CompletionService 
> to get futures in order of completion. That might be useful by itself.
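
Point 2 is the standard java.util.concurrent pattern; a minimal self-contained sketch 
(the tasks below are placeholders, not the actual split-generation code) looks like this:

{code}
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionOrderDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    CompletionService<String> cs = new ExecutorCompletionService<>(pool);

    // Placeholder tasks standing in for per-path split generation work.
    int tasks = 8;
    for (int i = 0; i < tasks; i++) {
      final int id = i;
      cs.submit(() -> {
        Thread.sleep((long) (Math.random() * 100));   // simulate uneven work
        return "splits for path " + id;
      });
    }

    // Instead of iterating a fixed list of futures and blocking on each in order,
    // take() hands results back in completion order.
    for (int i = 0; i < tasks; i++) {
      System.out.println(cs.take().get());
    }
    pool.shutdown();
  }
}
{code}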



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11671) Optimize RuleRegExp in DPP codepath

2015-09-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727724#comment-14727724
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-11671:
--

Hi [~rajesh.balamohan], can you please try the failed tests locally and see if 
they are related to this patch?

Thanks
Hari

> Optimize RuleRegExp in DPP codepath
> ---
>
> Key: HIVE-11671
> URL: https://issues.apache.org/jira/browse/HIVE-11671
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HIVE-11671.1.patch, HIVE-11671.2.patch, 
> cpu_with_patch.png, cpu_without_patch.png, mem_with_patch.png, 
> mem_without_patch.png
>
>
> When running a large query with DPP in its codepath, RuleRegExp came up as 
> a hotspot. Creating this JIRA to optimize RuleRegExp.java.
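
One common way to cut this kind of regex hotspot, not necessarily what this patch does, 
is to special-case rules whose pattern is just an alternation of literal names and match 
them with a set lookup, falling back to java.util.regex only for genuinely complex rules:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Pattern;

/** Illustrative only: literal-alternation rules matched via a set instead of a regex. */
public class CheapRuleMatch {
  private final Set<String> literalNames;   // non-null when the pattern is "TS%|FIL%"-style literals
  private final Pattern pattern;            // fallback for genuinely regex-y rules

  public CheapRuleMatch(String ruleRegExp) {
    if (ruleRegExp.matches("[\\w%|]+")) {    // only word chars, '%', and '|'
      this.literalNames = new HashSet<>(Arrays.asList(ruleRegExp.split("\\|")));
      this.pattern = null;
    } else {
      this.literalNames = null;
      this.pattern = Pattern.compile(ruleRegExp);
    }
  }

  /** Returns true if the given operator name matches the rule. */
  public boolean matches(String name) {
    if (literalNames != null) {
      return literalNames.contains(name);   // O(1), no Matcher allocation
    }
    return pattern.matcher(name).find();
  }
}
{code}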



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11487) Add getNumPartitionsByFilter api in metastore api

2015-09-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727751#comment-14727751
 ] 

Sergey Shelukhin commented on HIVE-11487:
-

Can you use the request/response pattern for the new metastore API? Thrift methods with 
a "normal" signature can never be modified in any way due to backward compat, 
which leads to a proliferation of different methods in the metastore API.
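
The request/response pattern being suggested typically surfaces in the generated Java 
API as a pair of wrapper objects like the hypothetical sketch below (these are not the 
names any actual patch would add); the service method then takes one Request and returns 
one Response, so optional fields can be added later without breaking the signature:

{code}
/** Illustrative only: request/response wrappers keep a thrift method evolvable,
 *  since new optional fields can be added without changing the method signature.
 *  All names here are hypothetical. */
public final class NumPartitionsByFilter {

  public static final class Request {
    private String dbName;
    private String tableName;
    private String filter;   // partition filter expression
    // future optional fields can be added here without breaking existing callers

    public String getDbName() { return dbName; }
    public void setDbName(String dbName) { this.dbName = dbName; }
    public String getTableName() { return tableName; }
    public void setTableName(String tableName) { this.tableName = tableName; }
    public String getFilter() { return filter; }
    public void setFilter(String filter) { this.filter = filter; }
  }

  public static final class Response {
    private int numPartitions;
    public int getNumPartitions() { return numPartitions; }
    public void setNumPartitions(int numPartitions) { this.numPartitions = numPartitions; }
  }

  private NumPartitionsByFilter() {}
}
{code}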

> Add getNumPartitionsByFilter api in metastore api
> -
>
> Key: HIVE-11487
> URL: https://issues.apache.org/jira/browse/HIVE-11487
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
> Attachments: HIVE-11487.01.patch, HIVE-11487.02.patch
>
>
> Adding an API for getting the number of partitions matching a filter will be more optimal 
> when we are only interested in the count. getAllPartitions will construct 
> all the partition objects, which can be time consuming and is not required.
> Here is a commit we pushed in a forked repo in our organization - 
> https://github.com/inmobi/hive/commit/68b3534d3e6c4d978132043cec668798ed53e444.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11705) refactor SARG stripe filtering for ORC into a method

2015-09-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727743#comment-14727743
 ] 

Sergey Shelukhin commented on HIVE-11705:
-

test failures are due to HIVE-11689

> refactor SARG stripe filtering for ORC into a method
> 
>
> Key: HIVE-11705
> URL: https://issues.apache.org/jira/browse/HIVE-11705
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-11705.01.patch, HIVE-11705.patch
>
>
> For footer cache PPD to the metastore, we'd need a method to do the PPD. Tiny 
> item to create it on OrcInputFormat.
> For the metastore path, these methods will be called from an expression proxy 
> similar to the current objectstore expression filtering; it will change so that the 
> serialized SARG and column list come from the request instead of the conf; 
> includedCols etc. will also come from the request instead of assorted Java 
> objects. 
> The types and stripe stats will need to be extracted from HBase. This is a 
> little bit of a problem, since ideally we want to be inside an HBase 
> filter/coprocessor. I'd need to take a look to see if this is possible... 
> since that filter would need to either deserialize ORC, or we would need to 
> store the type and stats information in some other, non-ORC manner on write. The 
> latter is probably a better idea, although it's dangerous because there's no 
> sync between this code and ORC itself.
> Meanwhile, minimize dependencies for stripe picking to the essentials (and the conf, 
> which is easy to remove).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11482) Add retrying thrift client for HiveServer2

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727916#comment-14727916
 ] 

Hive QA commented on HIVE-11482:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753730/HIVE-11482.02.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 9392 tests executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5152/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5152/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5152/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753730 - PreCommit-HIVE-TRUNK-Build

> Add retrying thrift client for HiveServer2
> --
>
> Key: HIVE-11482
> URL: https://issues.apache.org/jira/browse/HIVE-11482
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
> Attachments: HIVE-11482.02.patch
>
>
> Similar to 
> https://github.com/apache/hive/blob/master/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java,
>  this improvement request is to add a retrying Thrift client for HiveServer2 
> that retries upon Thrift exceptions.
> Here are a few commits done on a forked branch that can be picked - 
> https://github.com/InMobi/hive/commit/7fb957fb9c2b6000d37c53294e256460010cb6b7
> https://github.com/InMobi/hive/commit/11e4b330f051c3f58927a276d562446761c9cd6d
> https://github.com/InMobi/hive/commit/241386fd870373a9253dca0bcbdd4ea7e665406c
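
For readers unfamiliar with the RetryingMetaStoreClient pattern referenced above, 
the core idea is a JDK dynamic proxy that retries a call when a transport-level 
exception surfaces. A minimal sketch of that pattern, with illustrative class 
names (this is not the code in the attached patch):

{code}
// Minimal sketch of the retry-by-dynamic-proxy pattern; illustrative only.
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class RetryingInvocationHandler implements InvocationHandler {
  private final Object delegate;   // the real Thrift client
  private final int maxRetries;

  RetryingInvocationHandler(Object delegate, int maxRetries) {
    this.delegate = delegate;
    this.maxRetries = maxRetries;
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    Throwable last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return method.invoke(delegate, args);
      } catch (InvocationTargetException e) {
        last = e.getCause();
        // In a real client, only transport-level Thrift exceptions (e.g.
        // TTransportException) would trigger a retry/reconnect here.
      }
    }
    throw last;
  }

  @SuppressWarnings("unchecked")
  static <T> T wrap(Class<T> iface, T client, int maxRetries) {
    return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface },
        new RetryingInvocationHandler(client, maxRetries));
  }
}
{code}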



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728227#comment-14728227
 ] 

Hive QA commented on HIVE-11383:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753841/HIVE-11383.15.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 9392 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5155/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5155/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5155/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753841 - PreCommit-HIVE-TRUNK-Build

> Upgrade Hive to Calcite 1.4
> ---
>
> Key: HIVE-11383
> URL: https://issues.apache.org/jira/browse/HIVE-11383
> Project: Hive
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
> HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
> HIVE-11383.14.patch, HIVE-11383.15.patch, HIVE-11383.2.patch, 
> HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
> HIVE-11383.4.patch, HIVE-11383.5.patch, HIVE-11383.6.patch, 
> HIVE-11383.7.patch, HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.4.0-incubating.
> There is currently a snapshot release, which is close to what will be in 1.4. 
> I have checked that Hive compiles against the new snapshot, fixing one issue. 
> The patch is attached.
> Next step is to validate that Hive runs against the new Calcite, and post any 
> issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
> can you please do that.
> [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
> the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-09-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-11634:
-
Attachment: HIVE-11634.8.patch

> Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
> --
>
> Key: HIVE-11634
> URL: https://issues.apache.org/jira/browse/HIVE-11634
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
> HIVE-11634.3.patch, HIVE-11634.4.patch, HIVE-11634.5.patch, 
> HIVE-11634.6.patch, HIVE-11634.7.patch, HIVE-11634.8.patch
>
>
> Currently, we do not support partition pruning for the following scenario
> {code}
> create table pcr_t1 (key int, value string) partitioned by (ds string);
> insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
> where key < 20 order by key;
> explain extended select ds from pcr_t1 where struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> If we run the above query, we see that all the partitions of table pcr_t1 are 
> present in the filter predicate, whereas we can prune partition 
> (ds='2000-04-10').
> The optimization is to rewrite the above query into the following.
> {code}
> explain extended select ds from pcr_t1 where  (struct(ds)) IN 
> (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
> is used by the partition pruner to prune partitions which otherwise would not be 
> pruned.
> This is an extension of the idea presented in HIVE-11573.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11722) HBaseImport should import basic stats and column stats

2015-09-02 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728277#comment-14728277
 ] 

Alan Gates commented on HIVE-11722:
---

Why should we import these?  It seems easier to regenerate them via analyze.

> HBaseImport should import basic stats and column stats
> --
>
> Key: HIVE-11722
> URL: https://issues.apache.org/jira/browse/HIVE-11722
> Project: Hive
>  Issue Type: Sub-task
>  Components: HBase Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: hbase-metastore-branch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11720) Allow HiveServer2 to set custom http request/response header size

2015-09-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-11720:

Attachment: HIVE-11720.2.patch

> Allow HiveServer2 to set custom http request/response header size
> -
>
> Key: HIVE-11720
> URL: https://issues.apache.org/jira/browse/HIVE-11720
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-11720.1.patch, HIVE-11720.2.patch
>
>
> In HTTP transport mode, authentication information is sent over as part of 
> HTTP headers. Sometimes (observed when Kerberos is used) the default buffer 
> size for the headers is not enough, resulting in an HTTP 413 FULL head error. 
> We can expose those as customizable params.
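
As background, the underlying knob is Jetty's header buffer size. A hedged sketch 
using the Jetty 9 HttpConfiguration API follows; the actual patch exposes these 
values through HiveServer2 configuration, and the server wiring below is only an 
assumption made for the example:

{code}
// Illustrative Jetty 9 sketch: raise the request/response header buffer sizes.
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class HeaderSizeExample {
  public static Server buildServer(int port, int headerSizeBytes) {
    Server server = new Server();
    HttpConfiguration httpConf = new HttpConfiguration();
    // Large Kerberos/SPNEGO tokens travel in HTTP headers, so the default
    // buffer (a few KB) can overflow and trigger the "413 FULL head" error.
    httpConf.setRequestHeaderSize(headerSizeBytes);
    httpConf.setResponseHeaderSize(headerSizeBytes);
    ServerConnector connector =
        new ServerConnector(server, new HttpConnectionFactory(httpConf));
    connector.setPort(port);
    server.addConnector(connector);
    return server;
  }
}
{code}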



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11650) Create LLAP Monitor Daemon class and launch scripts

2015-09-02 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728298#comment-14728298
 ] 

Kai Sasaki commented on HIVE-11650:
---

[~sershe] I added the jetty-server package to ql/pom.xml (build-exec-bundle), but it 
still found no jetty-server package, as described before. Is this the correct way?

{code}
diff --git a/ql/pom.xml b/ql/pom.xml
index 99c22a3..30cf621 100644
--- a/ql/pom.xml
+++ b/ql/pom.xml
@@ -735,6 +735,7 @@
                   <include>org.apache.hive:spark-client</include>
                   <include>org.apache.hive:hive-storage-api</include>
                   <include>joda-time:joda-time</include>
+                  <include>org.eclipse.jetty:jetty-server</include>
                 </includes>
               </artifactSet>

{code}

> Create LLAP Monitor Daemon class and launch scripts
> ---
>
> Key: HIVE-11650
> URL: https://issues.apache.org/jira/browse/HIVE-11650
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: llap
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HIVE-11650-llap.00.patch, Screen Shot 2015-08-26 at 
> 16.54.35.png
>
>
> This JIRA for creating LLAP Monitor Daemon class and related launching 
> scripts for slider package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728345#comment-14728345
 ] 

Hive QA commented on HIVE-11634:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753845/HIVE-11634.7.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9393 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_pointlookup3
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.hcatalog.streaming.TestStreaming.testEndpointConnection
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5156/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5156/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5156/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753845 - PreCommit-HIVE-TRUNK-Build

> Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
> --
>
> Key: HIVE-11634
> URL: https://issues.apache.org/jira/browse/HIVE-11634
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
> HIVE-11634.3.patch, HIVE-11634.4.patch, HIVE-11634.5.patch, 
> HIVE-11634.6.patch, HIVE-11634.7.patch
>
>
> Currently, we do not support partition pruning for the following scenario
> {code}
> create table pcr_t1 (key int, value string) partitioned by (ds string);
> insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
> where key < 20 order by key;
> explain extended select ds from pcr_t1 where struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> If we run the above query, we see that all the partitions of table pcr_t1 are 
> present in the filter predicate, whereas we can prune partition 
> (ds='2000-04-10').
> The optimization is to rewrite the above query into the following.
> {code}
> explain extended select ds from pcr_t1 where  (struct(ds)) IN 
> (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
> is used by the partition pruner to prune partitions which otherwise would not be 
> pruned.
> This is an extension of the idea presented in HIVE-11573.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11647) Bump hbase version to 1.1.1

2015-09-02 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728488#comment-14728488
 ] 

Swarnim Kulkarni commented on HIVE-11647:
-

I am a little confused by this failure, especially the "Forbidden" exception. I 
ran this locally with tests and it all passed.

{noformat}
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Hive HBase Handler 2.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hive-hbase-handler ---
[INFO] Deleting /Users/sk018283/git-repo/apache/hive/hbase-handler/target
[INFO] Deleting /Users/sk018283/git-repo/apache/hive/hbase-handler (includes = 
[datanucleus.log, derby.log], excludes = [])
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-no-snapshots) @ 
hive-hbase-handler ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
hive-hbase-handler ---
Downloading: 
https://s3-us-west-1.amazonaws.com/hive-spark/maven2/spark_2.10-1.3-rc1/org/pentaho/pentaho-aggdesigner/5.1.5-jhyde/pentaho-aggdesigner-5.1.5-jhyde.pom

[WARNING] Invalid project model for artifact 
[pentaho-aggdesigner-algorithm:org.pentaho:5.1.5-jhyde]. It will be ignored by 
the remote resources Mojo.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hive-hbase-handler ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-hbase-handler 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hive-hbase-handler ---
[INFO] Compiling 37 source files to 
/Users/sk018283/git-repo/apache/hive/hbase-handler/target/classes
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/java/org/apache/hadoop/hive/hbase/AbstractHBaseKeyFactory.java:
 Some input files use or override a deprecated API.
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/java/org/apache/hadoop/hive/hbase/AbstractHBaseKeyFactory.java:
 Recompile with -Xlint:deprecation for details.
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java:
 Some input files use unchecked or unsafe operations.
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java:
 Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- avro-maven-plugin:1.7.6:protocol (default) @ hive-hbase-handler ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.7:add-test-source (add-test-sources) @ 
hive-hbase-handler ---
[INFO] Test Source directory: 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/gen/avro/gen-java added.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hive-hbase-handler ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-hbase-handler 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/Users/sk018283/git-repo/apache/hive/hbase-handler/target/tmp
[mkdir] Created dir: 
/Users/sk018283/git-repo/apache/hive/hbase-handler/target/warehouse
[mkdir] Created dir: 
/Users/sk018283/git-repo/apache/hive/hbase-handler/target/tmp/conf
 [copy] Copying 10 files to 
/Users/sk018283/git-repo/apache/hive/hbase-handler/target/tmp/conf
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hive-hbase-handler ---
[INFO] Compiling 16 source files to 
/Users/sk018283/git-repo/apache/hive/hbase-handler/target/test-classes
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/test/org/apache/hadoop/hive/hbase/SampleHBaseKeyFactory2.java:
 Some input files use or override a deprecated API.
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/test/org/apache/hadoop/hive/hbase/SampleHBaseKeyFactory2.java:
 Recompile with -Xlint:deprecation for details.
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/test/org/apache/hadoop/hive/hbase/avro/Address.java:
 Some input files use unchecked or unsafe operations.
[WARNING] 
/Users/sk018283/git-repo/apache/hive/hbase-handler/src/test/org/apache/hadoop/hive/hbase/avro/Address.java:
 Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ 

[jira] [Commented] (HIVE-2987) SELECTing nulls returns nothing

2015-09-02 Thread Swarnim Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728464#comment-14728464
 ] 

Swarnim Kulkarni commented on HIVE-2987:


[~oli...@mineallmeyn.com] This seems to work fine with the latest version of 
hive. Can you give that a shot and post back here so I can investigate further?

> SELECTing nulls returns nothing
> ---
>
> Key: HIVE-2987
> URL: https://issues.apache.org/jira/browse/HIVE-2987
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Affects Versions: 0.9.0
> Environment: Tested using 0.9.0rc1, hbase 0.92.1, hadoop 0.20.2-cdh3u2
>Reporter: Oliver Meyn
>Priority: Critical
>
> Given an hbase table defined as 'test' with a single column family 'a', 
> rowkey of type string, and two "rows" as follows:
> key:1,a:lat=60.0,a:long=50.0,a:precision=10
> key:2,a:lat=54
> And a Hive table created on top of it as follows:
> CREATE EXTERNAL TABLE hbase_test (
>   id STRING,
>   latitude STRING,
>   longitude STRING,
>   precision STRING
> )
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES ("hbase.columns.mapping" = 
> ":key#s,a:lat#s,a:long#s,a:precision#s")
> TBLPROPERTIES(
>   "hbase.table.name" = "test",
>   "hbase.table.default.storage.type" = "binary"
> );
> The query SELECT id, precision FROM hbase_test WHERE id = '2' returns no 
> result.  Expected behaviour is to return:
> '2',NULL
> If the query is changed to include a non-null result, eg SELECT id, latitude, 
> precision FROM hbase_test WHERE id = '2' the result is as expected:
> '2','54',NULL



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11548) HCatLoader should support predicate pushdown.

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728486#comment-14728486
 ] 

Hive QA commented on HIVE-11548:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753879/HIVE-11548.2.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 9385 tests executed
*Failed tests:*
{noformat}
TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5158/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5158/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5158/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753879 - PreCommit-HIVE-TRUNK-Build

> HCatLoader should support predicate pushdown.
> -
>
> Key: HIVE-11548
> URL: https://issues.apache.org/jira/browse/HIVE-11548
> Project: Hive
>  Issue Type: New Feature
>  Components: HCatalog
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-11548.1.patch, HIVE-11548.2.patch
>
>
> When one uses {{HCatInputFormat}}/{{HCatLoader}} to read from file-formats 
> that support predicate pushdown (such as ORC, with 
> {{hive.optimize.index.filter=true}}), one sees that the predicates aren't 
> actually pushed down into the storage layer.
> The forthcoming patch should allow for filter-pushdown, if any of the 
> partitions being scanned with {{HCatLoader}} support the functionality. The 
> patch should technically allow the same for users of {{HCatInputFormat}}, but 
> I don't currently have a neat interface to build a compound 
> predicate-expression. Will add this separately, if required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-9900) LLAP: Integrate MiniLLAPCluster into tests

2015-09-02 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728497#comment-14728497
 ] 

Prasanth Jayachandran commented on HIVE-9900:
-

Addressed all the review comments. Also incorporated the changes related to the 
local shuffle port issue. You can modify the llap-daemon-site.xml under the 
data/conf/llap directory to run with custom configurations (e.g., a single 
executor and a small queue size).

> LLAP: Integrate MiniLLAPCluster into tests
> --
>
> Key: HIVE-9900
> URL: https://issues.apache.org/jira/browse/HIVE-9900
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Prasanth Jayachandran
> Fix For: llap
>
> Attachments: HIVE-9900.1.patch, HIVE-9900.2.patch, 
> HIVE-MiniLlapCluster.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727756#comment-14727756
 ] 

Hive QA commented on HIVE-11383:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753714/HIVE-11383.14.patch

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9391 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mergejoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_char_mapjoin1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_decimal_6
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_leftsemi_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_null_projection
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5151/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5151/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5151/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753714 - PreCommit-HIVE-TRUNK-Build

> Upgrade Hive to Calcite 1.4
> ---
>
> Key: HIVE-11383
> URL: https://issues.apache.org/jira/browse/HIVE-11383
> Project: Hive
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
> HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
> HIVE-11383.14.patch, HIVE-11383.2.patch, HIVE-11383.3.patch, 
> HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.4.patch, 
> HIVE-11383.5.patch, HIVE-11383.6.patch, HIVE-11383.7.patch, 
> HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.4.0-incubating.
> There is currently a snapshot release, which is close to what will be in 1.4. 
> I have checked that Hive compiles against the new snapshot, fixing one issue. 
> The patch is attached.
> Next step is to validate that Hive runs against the new Calcite, and post any 
> issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
> can you please do that.
> [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
> the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11487) Add getNumPartitionsByFilter api in metastore api

2015-09-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727900#comment-14727900
 ] 

Sergey Shelukhin commented on HIVE-11487:
-

some more comments on RB. 

> Add getNumPartitionsByFilter api in metastore api
> -
>
> Key: HIVE-11487
> URL: https://issues.apache.org/jira/browse/HIVE-11487
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
> Attachments: HIVE-11487.01.patch, HIVE-11487.02.patch
>
>
> Adding an API for getting the number of partitions matching a filter will be more 
> efficient when we are only interested in the count. getAllPartitions constructs 
> all the partition objects, which can be time consuming and is not required.
> Here is a commit we pushed in a forked repo in our organization - 
> https://github.com/inmobi/hive/commit/68b3534d3e6c4d978132043cec668798ed53e444.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11700) exception in logs in Tez test with new logger

2015-09-02 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-11700:
-
Attachment: HIVE-11700.patch

I removed the PerfLogger line in the new patch and committed the same to master.

> exception in logs in Tez test with new logger
> -
>
> Key: HIVE-11700
> URL: https://issues.apache.org/jira/browse/HIVE-11700
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-11700.patch, HIVE-11700.patch
>
>
> {noformat}
> 2015-08-31 11:27:47,400 WARN Error while converting string 
> [${sys:hive.ql.log.PerfLogger.level}] to type [class 
> org.apache.logging.log4j.Level]. Using default value [null]. 
> java.lang.IllegalArgumentException: Unknown level constant 
> [${SYS:HIVE.QL.LOG.PERFLOGGER.LEVEL}].
>at org.apache.logging.log4j.Level.valueOf(Level.java:286)
>at 
> org.apache.logging.log4j.core.config.plugins.convert.TypeConverters$LevelConverter.convert(TypeConverters.java:230)
>at 
> org.apache.logging.log4j.core.config.plugins.convert.TypeConverters$LevelConverter.convert(TypeConverters.java:226)
>at 
> org.apache.logging.log4j.core.config.plugins.convert.TypeConverters.convert(TypeConverters.java:336)
>at 
> org.apache.logging.log4j.core.config.plugins.visitors.AbstractPluginVisitor.convert(AbstractPluginVisitor.java:130)
>at 
> org.apache.logging.log4j.core.config.plugins.visitors.PluginAttributeVisitor.visit(PluginAttributeVisitor.java:45)
>at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.generateParameters(PluginBuilder.java:247)
>at 
> org.apache.logging.log4j.core.config.plugins.util.PluginBuilder.build(PluginBuilder.java:136)
>at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createPluginObject(AbstractConfiguration.java:766)
>at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:706)
>at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.createConfiguration(AbstractConfiguration.java:698)
>at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.doConfigure(AbstractConfiguration.java:358)
>at 
> org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:161)
>at 
> org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:361)
>at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:426)
>at 
> org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:442)
>at 
> org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:138)
>at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:147)
>at 
> org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
>at org.apache.logging.log4j.LogManager.getContext(LogManager.java:175)
>at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:102)
>at org.apache.logging.log4j.jcl.LogAdapter.getContext(LogAdapter.java:39)
>at 
> org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
>at 
> org.apache.logging.log4j.jcl.LogFactoryImpl.getInstance(LogFactoryImpl.java:40)
>at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:671)
>at org.apache.hadoop.hive.ql.QTestUtil.(QTestUtil.java:122)
>at 
> org.apache.hadoop.hive.cli.TestMiniTezCliDriver.(TestMiniTezCliDriver.java:33)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11720) Allow HiveServer2 to set custom http request/response header size

2015-09-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-11720:

Description: In HTTP transport mode, authentication information is sent 
over as part of HTTP headers. Sometimes (observed when Kerberos is used) the 
default buffer size for the headers is not enough, resulting in an HTTP 413 
FULL head error. We can expose those as customizable params.  (was: When used 
with Kerberos and in HTTP transport mode, authentication information is sent 
over as part of HTTP headers. Sometimes the default buffer size for the headers 
is not enough, resulting in an HTTP 413 FULL head error. We can expose those as 
customizable params.)

> Allow HiveServer2 to set custom http request/response header size
> -
>
> Key: HIVE-11720
> URL: https://issues.apache.org/jira/browse/HIVE-11720
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>
> In HTTP transport mode, authentication information is sent over as part of 
> HTTP headers. Sometimes (observed when Kerberos is used) the default buffer 
> size for the headers is not enough, resulting in an HTTP 413 FULL head error. 
> We can expose those as customizable params.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-09-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11383:
---
Attachment: HIVE-11383.15.patch

> Upgrade Hive to Calcite 1.4
> ---
>
> Key: HIVE-11383
> URL: https://issues.apache.org/jira/browse/HIVE-11383
> Project: Hive
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
> HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
> HIVE-11383.14.patch, HIVE-11383.15.patch, HIVE-11383.2.patch, 
> HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
> HIVE-11383.4.patch, HIVE-11383.5.patch, HIVE-11383.6.patch, 
> HIVE-11383.7.patch, HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.4.0-incubating.
> There is currently a snapshot release, which is close to what will be in 1.4. 
> I have checked that Hive compiles against the new snapshot, fixing one issue. 
> The patch is attached.
> Next step is to validate that Hive runs against the new Calcite, and post any 
> issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
> can you please do that.
> [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
> the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-09-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11383:
---
Attachment: (was: HIVE-11383.15.patch)

> Upgrade Hive to Calcite 1.4
> ---
>
> Key: HIVE-11383
> URL: https://issues.apache.org/jira/browse/HIVE-11383
> Project: Hive
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
> HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
> HIVE-11383.14.patch, HIVE-11383.2.patch, HIVE-11383.3.patch, 
> HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.4.patch, 
> HIVE-11383.5.patch, HIVE-11383.6.patch, HIVE-11383.7.patch, 
> HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.4.0-incubating.
> There is currently a snapshot release, which is close to what will be in 1.4. 
> I have checked that Hive compiles against the new snapshot, fixing one issue. 
> The patch is attached.
> Next step is to validate that Hive runs against the new Calcite, and post any 
> issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
> can you please do that.
> [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
> the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-11689) minor flow changes to ORC split generation

2015-09-02 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-11689.
-
Resolution: Fixed

Committed the addendum to bring the code in line with the latest patch, fixing 
the tests.

> minor flow changes to ORC split generation
> --
>
> Key: HIVE-11689
> URL: https://issues.apache.org/jira/browse/HIVE-11689
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.0.0
>
> Attachments: HIVE-11689.01.patch, HIVE-11689.01.patch, 
> HIVE-11689.patch
>
>
> There are two changes that would help future work on split PPD into the HBase 
> metastore.
> 1) Move the non-HDFS split strategy determination logic into the main thread 
> from the threadpool.
> 2) Instead of iterating through the futures and waiting, use CompletionService 
> to get futures in order of completion. That might be useful by itself.
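
A small, generic sketch of the CompletionService pattern from point 2), assuming 
plain Callable tasks as stand-ins for the real split-strategy work:

{code}
// Sketch of point (2): consume results in completion order rather than
// iterating over futures in submission order.
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;

class CompletionOrderExample {
  static <T> void consumeInCompletionOrder(ExecutorService pool, List<Callable<T>> tasks)
      throws Exception {
    CompletionService<T> cs = new ExecutorCompletionService<>(pool);
    for (Callable<T> task : tasks) {
      cs.submit(task);
    }
    for (int i = 0; i < tasks.size(); i++) {
      T result = cs.take().get();   // blocks for whichever task finishes next
      // process result here, e.g. add the generated splits
    }
  }
}
{code}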



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11383) Upgrade Hive to Calcite 1.4

2015-09-02 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-11383:
---
Attachment: HIVE-11383.15.patch

> Upgrade Hive to Calcite 1.4
> ---
>
> Key: HIVE-11383
> URL: https://issues.apache.org/jira/browse/HIVE-11383
> Project: Hive
>  Issue Type: Bug
>Reporter: Julian Hyde
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-11383.1.patch, HIVE-11383.10.patch, 
> HIVE-11383.11.patch, HIVE-11383.12.patch, HIVE-11383.13.patch, 
> HIVE-11383.14.patch, HIVE-11383.15.patch, HIVE-11383.2.patch, 
> HIVE-11383.3.patch, HIVE-11383.3.patch, HIVE-11383.3.patch, 
> HIVE-11383.4.patch, HIVE-11383.5.patch, HIVE-11383.6.patch, 
> HIVE-11383.7.patch, HIVE-11383.8.patch, HIVE-11383.8.patch, HIVE-11383.9.patch
>
>
> CLEAR LIBRARY CACHE
> Upgrade Hive to Calcite 1.4.0-incubating.
> There is currently a snapshot release, which is close to what will be in 1.4. 
> I have checked that Hive compiles against the new snapshot, fixing one issue. 
> The patch is attached.
> Next step is to validate that Hive runs against the new Calcite, and post any 
> issues to the Calcite list or log Calcite Jira cases. [~jcamachorodriguez], 
> can you please do that.
> [~pxiong], I gather you are dependent on CALCITE-814, which will be fixed in 
> the new Calcite version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-11677) Access to opHandleSet in HiveSession should be synchronized

2015-09-02 Thread Mohit Sabharwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohit Sabharwal resolved HIVE-11677.

Resolution: Fixed

Already fixed in HIVE-4239 patch

> Access to opHandleSet in HiveSession should be synchronized
> ---
>
> Key: HIVE-11677
> URL: https://issues.apache.org/jira/browse/HIVE-11677
> Project: Hive
>  Issue Type: Bug
>Reporter: Mohit Sabharwal
>Assignee: Mohit Sabharwal
>
> In the scenario where multiple threads share the same session, 
> reading/writing to HiveSessionImpl.opHandleSet can lead to a race condition. 
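
A minimal sketch of the general remedy being described (not the HIVE-4239 change 
itself): back the handle set with a thread-safe collection so concurrent 
add/remove/iterate is safe.

{code}
// Illustrative sketch only; H stands in for the real OperationHandle type.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class SessionHandles<H> {
  // A concurrent set avoids the race on add/remove/iterate seen with HashSet.
  private final Set<H> opHandles = ConcurrentHashMap.newKeySet();

  void register(H handle)   { opHandles.add(handle); }
  void unregister(H handle) { opHandles.remove(handle); }
  int openOperations()      { return opHandles.size(); }
}
{code}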



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11720) Allow HiveServer2 to set custom http request/response header size

2015-09-02 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-11720:

Attachment: HIVE-11720.1.patch

[~thejas] Small patch for review. I'm reviewing HIVE-10432, which will help with 
unit testing this.

> Allow HiveServer2 to set custom http request/response header size
> -
>
> Key: HIVE-11720
> URL: https://issues.apache.org/jira/browse/HIVE-11720
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-11720.1.patch
>
>
> In HTTP transport mode, authentication information is sent over as part of 
> HTTP headers. Sometimes (observed when Kerberos is used) the default buffer 
> size for the headers is not enough, resulting in an HTTP 413 FULL head error. 
> We can expose those as customizable params.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11721) non-ascii characters shows improper with "insert into"

2015-09-02 Thread Jun Yin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Yin updated HIVE-11721:
---
Description: 
Hive: 1.1.0

hive> create table char_255_noascii as select cast("Garçu 谢谢 Kôkaku 
ありがとうございますkidôtai한국어" as char(255));
hive> select * from char_255_noascii;
OK
Garçu 谢谢 Kôkaku ありがとうございますkidôtai>한국어

it shows correctly, and it also works well with "LOAD DATA", 
but when I try another way to insert data, as below:

hive> create table nonascii(t1 char(255));
OK
Time taken: 0.125 seconds
hive> insert into nonascii values("Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어");
hive> select * from nonascii;
OK
Gar�u "" K�kaku B�LhFTVD~Ykid�tai\m� 



  was:
Hive: 1.1.0

hive> create table char_255_noascii as select cast("Garçu 谢谢 Kôkaku 
ありがとうございますkidôtai한국어" as char(255));
hive> select * from char_255_noascii;
OK
Garçu 谢谢 Kôkaku ありがとうございますkidôtai>한국어

it shows correct, and also it works good with "LOAD DATA" 
but when I try another way to insert data as bellow:

hive> create table nonascii(t1 char(255));
OK
Time taken: 0.125 seconds
hive> insert into nonascii values("Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어");
hive> select * from nonascii;
OK
Gar�u "" K�kaku B�LhFTVD~Ykid�tai\m� 




> non-ascii characters shows improper with "insert into"
> --
>
> Key: HIVE-11721
> URL: https://issues.apache.org/jira/browse/HIVE-11721
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 1.1.0
>Reporter: Jun Yin
>
> Hive: 1.1.0
> hive> create table char_255_noascii as select cast("Garçu 谢谢 Kôkaku 
> ありがとうございますkidôtai한국어" as char(255));
> hive> select * from char_255_noascii;
> OK
> Garçu 谢谢 Kôkaku ありがとうございますkidôtai>한국어
> it shows correctly, and it also works well with "LOAD DATA", 
> but when I try another way to insert data, as below:
> hive> create table nonascii(t1 char(255));
> OK
> Time taken: 0.125 seconds
> hive> insert into nonascii values("Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어");
> hive> select * from nonascii;
> OK
> Gar�u "" K�kaku B�LhFTVD~Ykid�tai\m� 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11721) non-ascii characters shows improper with "insert into"

2015-09-02 Thread Jun Yin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Yin updated HIVE-11721:
---
Summary: non-ascii characters shows improper with "insert into"  (was: 
noascii characters shows improper)

> non-ascii characters shows improper with "insert into"
> --
>
> Key: HIVE-11721
> URL: https://issues.apache.org/jira/browse/HIVE-11721
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 1.1.0
>Reporter: Jun Yin
>
> Hive: 1.1.0
> hive> create table char_255_noascii as select cast("Garçu 谢谢 Kôkaku 
> ありがとうございますkidôtai한국어" as char(255));
> hive> select * from char_255_noascii;
> OK
> Garçu 谢谢 Kôkaku ありがとうございますkidôtai>한국어
> it shows correctly, and it also works well with "LOAD DATA", 
> but when I try another way to insert data, as below:
> hive> create table nonascii(t1 char(255));
> OK
> Time taken: 0.125 seconds
> hive> insert into nonascii values("Garçu 谢谢 Kôkaku ありがとうございますkidôtai한국어");
> hive> select * from nonascii;
> OK
> Gar�u "" K�kaku B�LhFTVD~Ykid�tai\m� 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11711) Merge hbase-metastore branch to trunk

2015-09-02 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-11711:
--
Attachment: (was: HIVE-11711.1.nogen.patch)

> Merge hbase-metastore branch to trunk
> -
>
> Key: HIVE-11711
> URL: https://issues.apache.org/jira/browse/HIVE-11711
> Project: Hive
>  Issue Type: Sub-task
>  Components: HBase Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 2.0.0
>
> Attachments: HIVE-11711.1.patch
>
>
> Major development of hbase-metastore is done and it's time to merge the 
> branch back into master.
> Currently hbase-metastore is only invoked when running TestMiniTezCliDriver. 
> The instructions for setting up hbase-metastore are captured in 
> https://cwiki.apache.org/confluence/display/Hive/HBaseMetastoreDevelopmentGuide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HIVE-10953) Get partial stats instead of complete stats in some queries

2015-09-02 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai resolved HIVE-10953.
---
Resolution: Duplicate

This should be fixed as part of HIVE-11692.

> Get partial stats instead of complete stats in some queries
> ---
>
> Key: HIVE-10953
> URL: https://issues.apache.org/jira/browse/HIVE-10953
> Project: Hive
>  Issue Type: Sub-task
>  Components: HBase Metastore, Metastore
>Affects Versions: hbase-metastore-branch
>Reporter: Daniel Dai
>Assignee: Vaibhav Gumashta
> Fix For: hbase-metastore-branch
>
>
> In ppd_constant_where.q, the result is different than benchmark:
> Result:
> Statistics: Num rows: 0 Data size: 11624 Basic stats: PARTIAL Column stats: 
> NONE
> Benchmark:
> Statistics: Num rows: 1000 Data size: 10624 Basic stats: COMPLETE Column 
> stats: NONE
> This might cause quite a few failures so we need to investigate it first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11671) Optimize RuleRegExp in DPP codepath

2015-09-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728185#comment-14728185
 ] 

Hari Sankar Sivarama Subramaniyan commented on HIVE-11671:
--

Tested the failures locally and they pass; the failures are not related to 
this patch.

Thanks
Hari

> Optimize RuleRegExp in DPP codepath
> ---
>
> Key: HIVE-11671
> URL: https://issues.apache.org/jira/browse/HIVE-11671
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
> Attachments: HIVE-11671.1.patch, HIVE-11671.2.patch, 
> cpu_with_patch.png, cpu_without_patch.png, mem_with_patch.png, 
> mem_without_patch.png
>
>
> When running a large query with DPP in its codepath, RuleRegExp came up as 
> hotspot. Creating this JIRA to optimize RuleRegExp.java.
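
As an illustration of the kind of optimization usually applied to such a hotspot 
(the attached patches define the actual change), one can cache compiled patterns 
and short-circuit plain-literal rules before touching the regex engine:

{code}
// Illustrative sketch of the general technique, not the code in the patch:
// cache compiled Patterns and take a cheap fast path for literal rules.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class RuleMatchCache {
  private static final Map<String, Pattern> COMPILED = new ConcurrentHashMap<>();

  static boolean matches(String ruleRegex, String operatorName) {
    // Fast path: many rules are plain literals, so a simple prefix check
    // avoids the regex engine entirely.
    if (!containsRegexMeta(ruleRegex)) {
      return operatorName.startsWith(ruleRegex);
    }
    Pattern p = COMPILED.computeIfAbsent(ruleRegex, Pattern::compile);
    Matcher m = p.matcher(operatorName);
    return m.find();
  }

  private static boolean containsRegexMeta(String s) {
    return s.chars().anyMatch(c -> "[](){}*+?.^$|\\".indexOf(c) >= 0);
  }
}
{code}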



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11548) HCatLoader should support predicate pushdown.

2015-09-02 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-11548:

Attachment: HIVE-11548.2.patch

Fixed failing tests. {{TestHCatClient}} needed fixing independently of this 
fix. I'm squeezing it into this JIRA.

> HCatLoader should support predicate pushdown.
> -
>
> Key: HIVE-11548
> URL: https://issues.apache.org/jira/browse/HIVE-11548
> Project: Hive
>  Issue Type: New Feature
>  Components: HCatalog
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-11548.1.patch, HIVE-11548.2.patch
>
>
> When one uses {{HCatInputFormat}}/{{HCatLoader}} to read from file-formats 
> that support predicate pushdown (such as ORC, with 
> {{hive.optimize.index.filter=true}}), one sees that the predicates aren't 
> actually pushed down into the storage layer.
> The forthcoming patch should allow for filter-pushdown, if any of the 
> partitions being scanned with {{HCatLoader}} support the functionality. The 
> patch should technically allow the same for users of {{HCatInputFormat}}, but 
> I don't currently have a neat interface to build a compound 
> predicate-expression. Will add this separately, if required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11711) Merge hbase-metastore branch to trunk

2015-09-02 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-11711:
--
Attachment: HIVE-11711_nogen.patch

Renamed the nogen patch so the precommit test can pick up the right patch.

> Merge hbase-metastore branch to trunk
> -
>
> Key: HIVE-11711
> URL: https://issues.apache.org/jira/browse/HIVE-11711
> Project: Hive
>  Issue Type: Sub-task
>  Components: HBase Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 2.0.0
>
> Attachments: HIVE-11711.1.patch, HIVE-11711_nogen.patch
>
>
> Major development of hbase-metastore is done and it's time to merge the 
> branch back into master.
> Currently hbase-metastore is only invoked when running TestMiniTezCliDriver. 
> The instructions for setting up hbase-metastore are captured in 
> https://cwiki.apache.org/confluence/display/Hive/HBaseMetastoreDevelopmentGuide.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11634) Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)

2015-09-02 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-11634:
-
Attachment: HIVE-11634.7.patch

> Support partition pruning for IN(STRUCT(partcol, nonpartcol..)...)
> --
>
> Key: HIVE-11634
> URL: https://issues.apache.org/jira/browse/HIVE-11634
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-11634.1.patch, HIVE-11634.2.patch, 
> HIVE-11634.3.patch, HIVE-11634.4.patch, HIVE-11634.5.patch, 
> HIVE-11634.6.patch, HIVE-11634.7.patch
>
>
> Currently, we do not support partition pruning for the following scenario
> {code}
> create table pcr_t1 (key int, value string) partitioned by (ds string);
> insert overwrite table pcr_t1 partition (ds='2000-04-08') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-09') select * from src 
> where key < 20 order by key;
> insert overwrite table pcr_t1 partition (ds='2000-04-10') select * from src 
> where key < 20 order by key;
> explain extended select ds from pcr_t1 where struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> If we run the above query, we see that all the partitions of table pcr_t1 are 
> present in the filter predicate, whereas we can prune partition 
> (ds='2000-04-10').
> The optimization is to rewrite the above query into the following.
> {code}
> explain extended select ds from pcr_t1 where  (struct(ds)) IN 
> (struct('2000-04-08'), struct('2000-04-09')) and  struct(ds, key) in 
> (struct('2000-04-08',1), struct('2000-04-09',2));
> {code}
> The predicate (struct(ds)) IN (struct('2000-04-08'), struct('2000-04-09')) 
> is used by the partition pruner to prune partitions which otherwise would not be 
> pruned.
> This is an extension of the idea presented in HIVE-11573.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11149) Fix issue with Thread unsafe Class HashMap in PerfLogger.java hangs in Multi-thread environment

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728063#comment-14728063
 ] 

Hive QA commented on HIVE-11149:




{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753742/HIVE-11149.02.patch

{color:red}ERROR:{color} -1 due to 371 failed/errored test(s), 9392 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join0
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join15
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join16
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join17
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join19
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join20
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join22
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join23
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join24
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join26
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join27
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join29
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join30
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join31
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join32
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join33
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_filters
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_nulls
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_stats2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_join_without_localtask
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_14
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_9
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_if_with_path_filter
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucket_map_join_spark4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_bucketcontext_8

[jira] [Commented] (HIVE-11149) Fix issue with Thread unsafe Class HashMap in PerfLogger.java hangs in Multi-thread environment

2015-09-02 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728071#comment-14728071
 ] 

Sergey Shelukhin commented on HIVE-11149:
-

See the comment above... PerfLogger is threadlocal. Where are you seeing the 
issues?

> Fix issue with Thread unsafe Class  HashMap in PerfLogger.java  hangs  in  
> Multi-thread environment
> ---
>
> Key: HIVE-11149
> URL: https://issues.apache.org/jira/browse/HIVE-11149
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 1.2.0
>Reporter: WangMeng
>Assignee: WangMeng
> Attachments: HIVE-11149.01.patch, HIVE-11149.02.patch
>
>
> In a multi-threaded environment, the thread-unsafe HashMap in 
> PerfLogger.java can cause large numbers of Java processes to hang and waste 
> large amounts of CPU and memory.
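
For context (and not part of the attached patches): unsynchronized HashMap
access from multiple threads can corrupt the table, and on pre-Java-8
implementations a concurrent resize can even create a cycle in a bucket so a
later get() spins forever at full CPU, which matches the hang described above.
A minimal sketch of the usual remedy when the map really is shared, assuming a
timing map keyed by method name:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: a ConcurrentHashMap avoids the lost updates and resize races
// that a plain HashMap is subject to under concurrent access.
public class SharedTimingsSketch {
    private final Map<String, Long> startTimes = new ConcurrentHashMap<>();

    public void begin(String method) {
        startTimes.put(method, System.currentTimeMillis());
    }

    public long end(String method) {
        Long start = startTimes.remove(method);
        return start == null ? 0L : System.currentTimeMillis() - start;
    }
}
{code}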



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11722) HBaseImport should import basic stats and column stats

2015-09-02 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14728167#comment-14728167
 ] 

Daniel Dai commented on HIVE-11722:
---

Stats should be imported as part of schema migration.

> HBaseImport should import basic stats and column stats
> --
>
> Key: HIVE-11722
> URL: https://issues.apache.org/jira/browse/HIVE-11722
> Project: Hive
>  Issue Type: Sub-task
>  Components: HBase Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: hbase-metastore-branch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11149) Fix issue with Thread unsafe Class HashMap in PerfLogger.java hangs in Multi-thread environment

2015-09-02 Thread xu hai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xu hai updated HIVE-11149:
--
Attachment: HIVE-11149.02.patch.txt

Hi all, I uploaded a new patch for this issue.
Please check it. Thanks.

> Fix issue with Thread unsafe Class  HashMap in PerfLogger.java  hangs  in  
> Multi-thread environment
> ---
>
> Key: HIVE-11149
> URL: https://issues.apache.org/jira/browse/HIVE-11149
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 1.2.0
>Reporter: WangMeng
>Assignee: WangMeng
> Attachments: HIVE-11149.01.patch, HIVE-11149.02.patch.txt
>
>
> In a multi-threaded environment, the thread-unsafe HashMap in 
> PerfLogger.java can cause large numbers of Java processes to hang and waste 
> large amounts of CPU and memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11149) Fix issue with Thread unsafe Class HashMap in PerfLogger.java hangs in Multi-thread environment

2015-09-02 Thread xu hai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xu hai updated HIVE-11149:
--
Attachment: (was: HIVE-11149.02.patch.txt)

> Fix issue with Thread unsafe Class  HashMap in PerfLogger.java  hangs  in  
> Multi-thread environment
> ---
>
> Key: HIVE-11149
> URL: https://issues.apache.org/jira/browse/HIVE-11149
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 1.2.0
>Reporter: WangMeng
>Assignee: WangMeng
> Attachments: HIVE-11149.01.patch, HIVE-11149.02.patch
>
>
> In a multi-threaded environment, the thread-unsafe HashMap in 
> PerfLogger.java can cause large numbers of Java processes to hang and waste 
> large amounts of CPU and memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11149) Fix issue with Thread unsafe Class HashMap in PerfLogger.java hangs in Multi-thread environment

2015-09-02 Thread xu hai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xu hai updated HIVE-11149:
--
Attachment: HIVE-11149.02.patch

> Fix issue with Thread unsafe Class  HashMap in PerfLogger.java  hangs  in  
> Multi-thread environment
> ---
>
> Key: HIVE-11149
> URL: https://issues.apache.org/jira/browse/HIVE-11149
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 1.2.0
>Reporter: WangMeng
>Assignee: WangMeng
> Attachments: HIVE-11149.01.patch, HIVE-11149.02.patch
>
>
> In a multi-threaded environment, the thread-unsafe HashMap in 
> PerfLogger.java can cause large numbers of Java processes to hang and waste 
> large amounts of CPU and memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11482) Add retrying thrift client for HiveServer2

2015-09-02 Thread Akshay Goyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay Goyal updated HIVE-11482:

Attachment: (was: HIVE-11482.01.patch)

> Add retrying thrift client for HiveServer2
> --
>
> Key: HIVE-11482
> URL: https://issues.apache.org/jira/browse/HIVE-11482
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
>
> Similar to 
> https://github.com/apache/hive/blob/master/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java,
>  this improvement request is to add a retrying thrift client for HiveServer2 
> to do retries upon thrift exceptions.
> Here are a few commits on a forked branch that can be picked up: 
> https://github.com/InMobi/hive/commit/7fb957fb9c2b6000d37c53294e256460010cb6b7
> https://github.com/InMobi/hive/commit/11e4b330f051c3f58927a276d562446761c9cd6d
> https://github.com/InMobi/hive/commit/241386fd870373a9253dca0bcbdd4ea7e665406c
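
A minimal sketch of the dynamic-proxy retry pattern the description points at
(RetryingMetaStoreClient is built the same way); the wrapped interface, retry
count, and exception handling below are illustrative assumptions, not the
attached patch:

{code}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

import org.apache.thrift.TException;

// Wraps any Thrift-style client interface and retries calls that fail with a
// TException, up to a fixed number of extra attempts.
public class RetryingClientSketch implements InvocationHandler {
    private final Object delegate;
    private final int maxRetries;

    private RetryingClientSketch(Object delegate, int maxRetries) {
        this.delegate = delegate;
        this.maxRetries = maxRetries;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, T delegate, int maxRetries) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
            new Class<?>[] { iface },
            new RetryingClientSketch(delegate, maxRetries));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        TException lastFailure = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return method.invoke(delegate, args);
            } catch (InvocationTargetException e) {
                if (e.getCause() instanceof TException) {
                    lastFailure = (TException) e.getCause(); // transient Thrift failure: retry
                } else {
                    throw e.getCause();                      // anything else: fail fast
                }
            }
        }
        throw lastFailure;
    }
}
{code}

A caller would wrap the generated Thrift interface, for example something like
{{RetryingClientSketch.wrap(TCLIService.Iface.class, rawClient, 3)}}, possibly
with a backoff between attempts; the actual patch may structure this
differently.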



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11640) Shell command doesn't work for new CLI[Beeline-cli branch]

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727086#comment-14727086
 ] 

Hive QA commented on HIVE-11640:




{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753700/HIVE-11640.7-beeline-cli.patch

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 9243 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_join0
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_8
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_metadata_only_queries_with_filters
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BEELINE-Build/32/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-BEELINE-Build/32/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-BEELINE-Build-32/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753700 - PreCommit-HIVE-BEELINE-Build

> Shell command doesn't work for new CLI[Beeline-cli branch]
> --
>
> Key: HIVE-11640
> URL: https://issues.apache.org/jira/browse/HIVE-11640
> Project: Hive
>  Issue Type: Sub-task
>  Components: CLI
>Reporter: Ferdinand Xu
>Assignee: Ferdinand Xu
> Attachments: HIVE-11640.1-beeline-cli.patch, 
> HIVE-11640.2-beeline-cli.patch, HIVE-11640.3-beeline-cli.patch, 
> HIVE-11640.4-beeline-cli.patch, HIVE-11640.5-beeline-cli.patch, 
> HIVE-11640.7-beeline-cli.patch
>
>
> The shell command doesn't work for the new CLI; "Error: Method not 
> supported (state=,code=0)" is thrown during execution with the -e and -f options.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-10990) Compatibility Hive-1.2 an hbase-1.0.1.1

2015-09-02 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727117#comment-14727117
 ] 

Lefty Leverenz commented on HIVE-10990:
---

Looks good, thanks [~swarnim].  I agree that it isn't needed in the Getting 
Started and Installation docs.

> Compatibility Hive-1.2 an hbase-1.0.1.1
> ---
>
> Key: HIVE-10990
> URL: https://issues.apache.org/jira/browse/HIVE-10990
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, HBase Handler, HiveServer2
>Affects Versions: 1.2.0
>Reporter: gurmukh singh
>Assignee: Swarnim Kulkarni
>
> Hive external tables work fine with HBase (Hive 1.2, HBase 1.0.1.1, Hadoop 2.5.2),
> but it is not possible to create a table in HBase from Hive.
> 1: jdbc:hive2://edge1.dilithium.com:1/def> TBLPROPERTIES 
> ("hbase.table.name" = "xyz");
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
> Error: Error while processing statement: FAILED: Execution Error, return code 
> 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
> org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
>  (state=08S01,code=1)
> [hdfs@edge1 cluster]$ hive
> 2015-06-12 17:56:49,952 WARN  [main] conf.HiveConf: HiveConf of name 
> hive.metastore.local does not exist
> Logging initialized using configuration in 
> jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/local/cluster/apache-hive-1.2.0-bin/auxlib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/local/cluster/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> hive> CREATE TABLE hbase_table_1(key int, value string)
> > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
> > TBLPROPERTIES ("hbase.table.name" = "xyz");
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> org.apache.hadoop.hbase.HTableDescriptor.addFamily(Lorg/apache/hadoop/hbase/HColumnDescriptor;)V
> ===
> scan complete in 1535ms
> 14 driver classes found
> Compliant  Version  Driver Class
> no         5.1      com.mysql.jdbc.Driver
> no         5.1      com.mysql.jdbc.NonRegisteringDriver
> no         5.1      com.mysql.jdbc.NonRegisteringReplicationDriver
> no         5.1      com.mysql.jdbc.ReplicationDriver
> yes        1.2      org.apache.calcite.avatica.remote.Driver
> yes        1.2      org.apache.calcite.jdbc.Driver
> yes        1.0      org.apache.commons.dbcp.PoolingDriver
> yes        10.11    org.apache.derby.jdbc.AutoloadedDriver
> yes        10.11    org.apache.derby.jdbc.Driver42
> yes        10.11    org.apache.derby.jdbc.EmbeddedDriver
> yes        10.11    org.apache.derby.jdbc.InternalDriver
> no         1.2      org.apache.hive.jdbc.HiveDriver
> yes        1.0      org.datanucleus.store.rdbms.datasource.dbcp.PoolingDriver
> no         5.1      org.gjt.mm.mysql.Driver
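
The addFamily failure quoted above looks like a binary-compatibility problem
rather than a configuration one: the ")V" in the reported descriptor means the
HBase handler was compiled against an HBase where addFamily returned void,
while the HBase 1.x jars on the classpath declare a different return type, so
the call fails to link at runtime. A hedged illustration (exact versions and
constructors are for illustration only):

{code}
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

// The same source compiles against old and new HBase, but the compiled call
// site records the exact method descriptor, so a jar built against the old
// void-returning addFamily cannot link against a fluent-style 1.x runtime.
public class AddFamilyCompatSketch {
    public static void main(String[] args) {
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("xyz"));
        desc.addFamily(new HColumnDescriptor("cf1"));
        // Older API:  void addFamily(HColumnDescriptor)             -> descriptor ends in ")V"
        // HBase 1.x:  HTableDescriptor addFamily(HColumnDescriptor) -> different descriptor
    }
}
{code}

Rebuilding the handler against the HBase version on the cluster (or using a
Hive build that targets HBase 1.x) is the usual way out of this class of error.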



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11482) Add retrying thrift client for HiveServer2

2015-09-02 Thread Akshay Goyal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727055#comment-14727055
 ] 

Akshay Goyal commented on HIVE-11482:
-

Addressed review comments and attached new patch here.

> Add retrying thrift client for HiveServer2
> --
>
> Key: HIVE-11482
> URL: https://issues.apache.org/jira/browse/HIVE-11482
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
> Attachments: HIVE-11482.01.patch, HIVE-11482.02.patch
>
>
> Similar to 
> https://github.com/apache/hive/blob/master/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java,
>  this improvement request is to add a retrying thrift client for HiveServer2 
> to do retries upon thrift exceptions.
> Here are a few commits on a forked branch that can be picked up: 
> https://github.com/InMobi/hive/commit/7fb957fb9c2b6000d37c53294e256460010cb6b7
> https://github.com/InMobi/hive/commit/11e4b330f051c3f58927a276d562446761c9cd6d
> https://github.com/InMobi/hive/commit/241386fd870373a9253dca0bcbdd4ea7e665406c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11482) Add retrying thrift client for HiveServer2

2015-09-02 Thread Akshay Goyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay Goyal updated HIVE-11482:

Attachment: (was: HIVE-11482.02.patch)

> Add retrying thrift client for HiveServer2
> --
>
> Key: HIVE-11482
> URL: https://issues.apache.org/jira/browse/HIVE-11482
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
>
> Similar to 
> https://github.com/apache/hive/blob/master/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java,
>  this improvement request is to add a retrying thrift client for HiveServer2 
> to do retries upon thrift exceptions.
> Here are a few commits on a forked branch that can be picked up: 
> https://github.com/InMobi/hive/commit/7fb957fb9c2b6000d37c53294e256460010cb6b7
> https://github.com/InMobi/hive/commit/11e4b330f051c3f58927a276d562446761c9cd6d
> https://github.com/InMobi/hive/commit/241386fd870373a9253dca0bcbdd4ea7e665406c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11482) Add retrying thrift client for HiveServer2

2015-09-02 Thread Akshay Goyal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akshay Goyal updated HIVE-11482:

Attachment: HIVE-11482.02.patch

> Add retrying thrift client for HiveServer2
> --
>
> Key: HIVE-11482
> URL: https://issues.apache.org/jira/browse/HIVE-11482
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Amareshwari Sriramadasu
>Assignee: Akshay Goyal
> Attachments: HIVE-11482.02.patch
>
>
> Similar to 
> https://github.com/apache/hive/blob/master/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java,
>  this improvement request is to add a retrying thrift client for HiveServer2 
> to do retries upon thrift exceptions.
> Here are a few commits on a forked branch that can be picked up: 
> https://github.com/InMobi/hive/commit/7fb957fb9c2b6000d37c53294e256460010cb6b7
> https://github.com/InMobi/hive/commit/11e4b330f051c3f58927a276d562446761c9cd6d
> https://github.com/InMobi/hive/commit/241386fd870373a9253dca0bcbdd4ea7e665406c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-11712) Duplicate groupby keys cause ClassCastException

2015-09-02 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727289#comment-14727289
 ] 

Hive QA commented on HIVE-11712:




{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12753603/HIVE-11712.1.patch

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 9391 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_mergejoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_char_mapjoin1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_inner_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_leftsemi_mapjoin
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation
{noformat}

Test results: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5146/testReport
Console output: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/5146/console
Test logs: 
http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-5146/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12753603 - PreCommit-HIVE-TRUNK-Build

> Duplicate groupby keys cause ClassCastException
> ---
>
> Key: HIVE-11712
> URL: https://issues.apache.org/jira/browse/HIVE-11712
> Project: Hive
>  Issue Type: Bug
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
> Fix For: 1.3.0, 2.0.0
>
> Attachments: HIVE-11712.1.patch
>
>
> With duplicate groupby keys, we could use the wrong object inspectors for some 
> groupby expressions, leading to a ClassCastException. For example, 
> {noformat}
> explain
> SELECT distinct s1.customer_name as x, s1.customer_name as y
> FROM default.testv1_staples s1 join default.src s2 on s1.customer_name = 
> s2.key
> HAVING (
> (SUM(s1.customer_balance) <= 4074689.00041)
> AND (AVG(s1.discount) <= 822)
> AND (COUNT(s2.value) > 4))
> {noformat}
> will lead to
> {noformat}
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableShortObjectInspector
>  cannot be cast to 
> org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage$AbstractGenericUDAFAverageEvaluator.init(GenericUDAFAverage.java:374)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getGenericUDAFInfo(SemanticAnalyzer.java:3887)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanGroupByOperator1(SemanticAnalyzer.java:4354)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genGroupByPlanMapAggrNoSkew(SemanticAnalyzer.java:5644)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8977)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9849)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9742)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10178)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10189)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10106)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6013) Supporting Quoted Identifiers in Column Names

2015-09-02 Thread Gabriel C Balan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14727417#comment-14727417
 ] 

Gabriel C Balan commented on HIVE-6013:
---

Should commas be allowed inside quoted (back-tick-ed) column names? 

It seems I am able to create a table with commas in the name of the *last* 
column, but queries naming the column explicitly don't work.

{code}
hive> set hive.support.quoted.identifiers;
hive.support.quoted.identifiers=column

hive> create table t1(`a,` string);
OK
Time taken: 0.245 seconds
hive> create table t2(`a,` string, b string);
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: 
MetaException(message:org.apache.hadoop.hive.serde2.SerDeException 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe: columns has 3 elements 
while columns.types has 2 elements!)

hive> describe t1;
OK
a,  string
Time taken: 0.698 seconds, Fetched: 1 row(s)

hive> select `a,` from t1;
FAILED: SemanticException [Error 10004]: Line 1:7 Invalid table alias or column 
reference 'a,': (possible column names are: a)
{code}

> Supporting Quoted Identifiers in Column Names
> -
>
> Key: HIVE-6013
> URL: https://issues.apache.org/jira/browse/HIVE-6013
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.13.0
>
> Attachments: HIVE-6013.1.patch, HIVE-6013.2.patch, HIVE-6013.3.patch, 
> HIVE-6013.4.patch, HIVE-6013.5.patch, HIVE-6013.6.patch, HIVE-6013.7.patch, 
> QuotedIdentifier.html
>
>
> Hive's current behavior for quoted identifiers differs from the usual 
> interpretation: a quoted identifier (using backticks) has a special 
> interpretation in select expressions (as a regular expression). The attached 
> doc documents the current behavior and proposes a solution.
> Summary of the solution:
> - Introduce 'standard' quoted identifiers for columns only. 
> - At the language level this is turned on by a flag.
> - At the metadata level we relax the constraint on column names.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)