[ https://issues.apache.org/jira/browse/FLINK-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14716292#comment-14716292 ]
ASF GitHub Bot commented on FLINK-2545:
---------------------------------------
Github user zentol commented on the pull request:
https://github.com/apache/flink/pull/1067#issuecomment-135342976
Have you verified that returning at that position does not cause other
issues? This is essentially just swallowing the thrown exception, hoping
nothing else goes wrong.
I don't see how this actually fixes the issue. The count being negative
tells us that something is wrong with how the bucket count is set; resolving
that would be a fix.
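For illustration only (this is not the PR diff and not the actual MutableHashTable code; all names below are made up): an early return merely skips the array allocation that blows up, whereas the fix the comment asks for is to keep the stored bucket count from ever going negative in the first place.

    // Hedged sketch of the pattern under discussion; hypothetical names, not Flink's code.
    final class BucketCountSketch {

        // Where the symptom shows up: allocating an array from a corrupted count.
        static void buildBloomFilterForBucket(int count, long[] hashCodes) {
            // The kind of guard the comment objects to: it swallows the bad state
            // instead of explaining why the count went negative.
            if (count < 0) {
                return;
            }
            int[] codes = new int[count]; // NegativeArraySizeException if count < 0
            for (int i = 0; i < count; i++) {
                codes[i] = (int) hashCodes[i];
            }
            // ... feed codes into the bloom filter ...
        }

        // Where a real fix would live: whatever writes the bucket count should never
        // produce a negative value; failing fast here would expose the root cause.
        static void setBucketCount(int[] bucketHeader, int count) {
            if (count < 0) {
                throw new IllegalStateException("negative bucket count: " + count);
            }
            bucketHeader[0] = count;
        }
    }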
> NegativeArraySizeException while creating hash table bloom filters
> ------------------------------------------------------------------
>
> Key: FLINK-2545
> URL: https://issues.apache.org/jira/browse/FLINK-2545
> Project: Flink
> Issue Type: Bug
> Components: Distributed Runtime
> Affects Versions: master
> Reporter: Greg Hogan
> Assignee: Chengxiang Li
>
> The following exception occurred a second time when I immediately re-ran my
> application, though after recompiling and restarting Flink the subsequent
> execution ran without error.
> java.lang.Exception: The data preparation for task '...' , caused an error: null
> at org.apache.flink.runtime.operators.RegularPactTask.run(RegularPactTask.java:465)
> at org.apache.flink.runtime.operators.RegularPactTask.invoke(RegularPactTask.java:354)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:581)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NegativeArraySizeException
> at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucket(MutableHashTable.java:1160)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.buildBloomFilterForBucketsInPartition(MutableHashTable.java:1143)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.spillPartition(MutableHashTable.java:1117)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.insertBucketEntry(MutableHashTable.java:946)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.insertIntoTable(MutableHashTable.java:868)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.buildInitialTable(MutableHashTable.java:692)
> at org.apache.flink.runtime.operators.hash.MutableHashTable.open(MutableHashTable.java:455)
> at org.apache.flink.runtime.operators.hash.ReusingBuildSecondHashMatchIterator.open(ReusingBuildSecondHashMatchIterator.java:93)
> at org.apache.flink.runtime.operators.JoinDriver.prepare(JoinDriver.java:195)
> ... 3 more