[ https://issues.apache.org/jira/browse/HIVE-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16926512#comment-16926512 ]

Hive QA commented on HIVE-15956:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12979943/HIVE-15956.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16751 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/18520/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/18520/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-18520/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12979943 - PreCommit-HIVE-Build

> StackOverflowError when drop lots of partitions
> -----------------------------------------------
>
>                 Key: HIVE-15956
>                 URL: https://issues.apache.org/jira/browse/HIVE-15956
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 1.3.0, 2.2.0
>            Reporter: Niklaus Xiao
>            Assignee: Denys Kuzmenko
>            Priority: Major
>         Attachments: HIVE-15956.2.patch, HIVE-15956.3.patch, HIVE-15956.patch
>
>
> Repro steps:
> 1. Create partitioned table and add 10000 partitions
> {code}
> create table test_partition(id int) partitioned by (dt int);
> alter table test_partition add partition(dt=1);
> alter table test_partition add partition(dt=3);
> alter table test_partition add partition(dt=4);
> ...
> alter table test_partition add partition(dt=10000);
> {code}
> 2. Drop 9000 partitions:
> {code}
> alter table test_partition drop partition(dt<9000);
> {code}
> Step 2 will fail with StackOverflowError:
> {code}
> Exception in thread "pool-7-thread-161" java.lang.StackOverflowError
>     at org.datanucleus.query.expression.ExpressionCompiler.isOperator(ExpressionCompiler.java:819)
>     at org.datanucleus.query.expression.ExpressionCompiler.compileOrAndExpression(ExpressionCompiler.java:190)
>     at org.datanucleus.query.expression.ExpressionCompiler.compileExpression(ExpressionCompiler.java:179)
>     at org.datanucleus.query.expression.ExpressionCompiler.compileOrAndExpression(ExpressionCompiler.java:192)
>     at org.datanucleus.query.expression.ExpressionCompiler.compileExpression(ExpressionCompiler.java:179)
>     at org.datanucleus.query.expression.ExpressionCompiler.compileOrAndExpression(ExpressionCompiler.java:192)
>     at org.datanucleus.query.expression.ExpressionCompiler.compileExpression(ExpressionCompiler.java:179)
> {code}
> {code}
> Exception in thread "pool-7-thread-198" java.lang.StackOverflowError
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:83)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
>     at org.datanucleus.query.expression.DyadicExpression.bind(DyadicExpression.java:87)
> {code}
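The traces above show the failure mode: the metastore expands the `dt<9000` drop into a predicate over the matching partitions, and DataNucleus compiles that filter by recursing once per level of OR nesting (`compileOrAndExpression`/`compileExpression` in the first trace, `DyadicExpression.bind` in the second), so recursion depth grows with the partition count until the stack overflows. Below is a minimal, hypothetical Java sketch of the same pattern; the `Or` class and `compile` method are illustrations only, not DataNucleus code.

```java
// Illustration only: mimics the one-stack-frame-per-predicate recursion
// visible in the DataNucleus stack traces; not actual DataNucleus code.
public class DeepOrOverflow {

    // One node of a left-nested OR chain: (((dt=1) OR dt=2) OR ...) OR dt=n
    static final class Or {
        final Or left;   // nested sub-expression; null for the innermost leaf
        final int dt;    // leaf predicate value (dt = <value>)
        Or(Or left, int dt) { this.left = left; this.dt = dt; }
    }

    // Recursive "compile" walk: one stack frame per nesting level, the same
    // shape as compileOrAndExpression / DyadicExpression.bind above.
    static int compile(Or e) {
        return (e.left == null) ? 1 : 1 + compile(e.left);
    }

    // Build a chain of n predicates and try to compile it.
    static String buildAndCompile(int n) {
        Or expr = null;
        for (int dt = 1; dt <= n; dt++) {
            expr = new Or(expr, dt);     // each partition adds one more level
        }
        try {
            return "depth=" + compile(expr);
        } catch (StackOverflowError err) {
            return "StackOverflowError";
        }
    }

    public static void main(String[] args) {
        System.out.println("10 partitions: " + buildAndCompile(10));
        System.out.println("1,000,000 partitions: " + buildAndCompile(1_000_000));
    }
}
```

Until a fix is in place, a common workaround for this class of failure is to drop partitions in smaller batches (several `alter table ... drop partition` statements over narrower ranges), so no single filter expression nests thousands of levels deep.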



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
