[ https://issues.apache.org/jira/browse/FLINK-2240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14660355#comment-14660355 ]

ASF GitHub Bot commented on FLINK-2240:
---------------------------------------

Github user vasia commented on the pull request:

    https://github.com/apache/flink/pull/888#issuecomment-128445757
  
    Hi,
    this looks great indeed!
    
    Just out of curiosity, why did you write your own Bloom filter implementation 
    instead of using an existing one, e.g. from Guava? I'm wondering because we 
    also want to use a Bloom filter for an approximate algorithm implementation 
    in #923.
    
    Thanks!
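
    For reference, a minimal usage sketch of the kind of "ready" implementation 
    mentioned above (Guava's com.google.common.hash.BloomFilter), assuming Guava 
    is on the classpath; this is not part of the pull request under discussion:

    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;

    public class GuavaBloomFilterSketch {
        public static void main(String[] args) {
            // Size the filter for ~1 million insertions at a 1% false-positive rate.
            BloomFilter<Integer> filter =
                    BloomFilter.create(Funnels.integerFunnel(), 1_000_000, 0.01);

            filter.put(42);

            System.out.println(filter.mightContain(42)); // true
            System.out.println(filter.mightContain(7));  // false with high probability
        }
    }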


> Use BloomFilter to minimize probe side records which are spilled to disk in 
> Hybrid-Hash-Join
> --------------------------------------------------------------------------------------------
>
>                 Key: FLINK-2240
>                 URL: https://issues.apache.org/jira/browse/FLINK-2240
>             Project: Flink
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>            Priority: Minor
>             Fix For: 0.10
>
>
> In Hybrid-Hash-Join, when the small table does not fit into memory, part of the 
> small table data is spilled to disk, and the corresponding partitions of the big 
> table are spilled to disk during the probe phase as well. If we build a 
> BloomFilter while spilling the small table to disk during the build phase, and 
> use it to filter the big table records that would otherwise be spilled to disk, 
> we can greatly reduce the size of the spilled big table files and save the disk 
> I/O cost of writing and later re-reading them.
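
A minimal sketch of the idea described above (not Flink's actual hash-join code; 
the filter below is a simplified stand-alone class): during the build phase the 
key hashes of a spilled build-side partition are recorded in a Bloom filter, and 
during the probe phase a probe record is spilled only if the filter reports its 
key as possibly present.

    import java.util.BitSet;

    // Simplified Bloom filter keyed by record hash codes; illustrative only.
    class SpillBloomFilter {
        private final BitSet bits;
        private final int numBits;
        private final int numHashes;

        SpillBloomFilter(int numBits, int numHashes) {
            this.bits = new BitSet(numBits);
            this.numBits = numBits;
            this.numHashes = numHashes;
        }

        // Derive the i-th bit position from a key hash (simple double hashing).
        private int bitIndex(int keyHash, int i) {
            int h = keyHash + i * 0x9E3779B9;
            return (h & 0x7FFFFFFF) % numBits;
        }

        // Build phase: record the key of every build-side record spilled to disk.
        void add(int keyHash) {
            for (int i = 0; i < numHashes; i++) {
                bits.set(bitIndex(keyHash, i));
            }
        }

        // Probe phase: false means the key is definitely not in the spilled build
        // partition, so the probe record can be dropped instead of spilled.
        boolean mightContain(int keyHash) {
            for (int i = 0; i < numHashes; i++) {
                if (!bits.get(bitIndex(keyHash, i))) {
                    return false;
                }
            }
            return true;
        }
    }

In the probe phase, a record would then be written to disk only when 
filter.mightContain(probeKeyHash) returns true; every other probe record cannot 
join with the spilled build partition and never touches disk.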



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
