[ https://issues.apache.org/jira/browse/FLINK-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14717934#comment-14717934 ]

ASF GitHub Bot commented on FLINK-2545:
---------------------------------------

Github user ChengXiangLi commented on the pull request:

    https://github.com/apache/flink/pull/1067#issuecomment-135612599
  
    Thanks for the reminder, @zentol and @StephanEwen. I was too hasty in 
opening this PR. I tried to fix the bloom filter exception in this PR and to 
verify the other potential hash table issues behind the negative count 
separately; obviously, there is no need to split it up that way. So let's wait 
for Greg's response now.


> NegativeArraySizeException while creating hash table bloom filters
> ------------------------------------------------------------------
>
>                 Key: FLINK-2545
>                 URL: https://issues.apache.org/jira/browse/FLINK-2545
>             Project: Flink
>          Issue Type: Bug
>          Components: Distributed Runtime
>    Affects Versions: master
>            Reporter: Greg Hogan
>            Assignee: Chengxiang Li
>
> The following exception occurred a second time when I immediately re-ran my 
> application, though after recompiling and restarting Flink the subsequent 
> execution ran without error.
> java.lang.Exception: The data preparation for task '...' , caused an error: 
> null
>       at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:465)
>       at 
> org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:354)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NegativeArraySizeException
>       at 
> org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucket(MutableHashTable.java:1160)
>       at 
> org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucketsInPartition(MutableHashTable.java:1143)
>       at 
> org.apache.flink.runtime.operators.hash.MutableHashTable.spillPartition(MutableHashTable.java:1117)
>       at 
> org.apache.flink.runtime.operators.hash.MutableHashTable.insertBucketEntry(MutableHashTable.java:946)
>       at 
> org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:868)
>       at 
> org.apache.flink.runtime.operators.hash.MutableHashTable.buildInitialTable(MutableHashTable.java:692)
>       at 
> org.apache.flink.runtime.operators.hash.MutableHashTable.open(MutableHashTable.java:455)
>       at 
> org.apache.flink.runtime.operators.hash.ReusingBuildSecondHashMatchIterator.open(ReusingBuildSecondHashMatchIterator.java:93)
>       at 
> org.apache.flink.runtime.operators.JoinDriver.prepare(JoinDriver.java:195)
>       at 
> org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:459)
>       ... 3 more
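> The failure mode in the stack trace can be illustrated with a minimal, 
> standalone sketch (not Flink code; the `readBucketCount` helper is 
> hypothetical): if the element count read from a bucket header becomes 
> negative through corruption or an arithmetic bug, any array sized from it 
> throws `NegativeArraySizeException`, exactly as seen in 
> `buildBloomFilterForBucket`.
>
> ```java
> // Minimal sketch of the suspected failure mode. The negative count is
> // hard-coded here to stand in for a corrupted bucket header value; in the
> // real table the count would be read from a memory segment.
> public class NegativeCountDemo {
>     // Hypothetical stand-in for reading a bucket's element count.
>     static int readBucketCount() {
>         return -1; // corrupted / negative count, as suspected in FLINK-2545
>     }
>
>     public static void main(String[] args) {
>         int count = readBucketCount();
>         try {
>             // Sizing an array from a negative count fails immediately.
>             long[] hashes = new long[count];
>             System.out.println("allocated " + hashes.length);
>         } catch (NegativeArraySizeException e) {
>             System.out.println("NegativeArraySizeException for count = " + count);
>         }
>     }
> }
> ```
>
> This is why fixing only the bloom filter call site would mask the symptom: 
> the negative count originates earlier, in the hash table's bookkeeping.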



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
